Department Linguistik: Bibliography of 2018 publications (75 entries, all in English)
Wird Schon Stimmen!
(2018)
The article puts forward a novel analysis of the German modal particle schon as a modal degree operator over propositional content. The proposed analysis offers a uniform perspective on the semantics of modal schon and its aspectual counterpart meaning ‘already’: Both particles are analyzed as denoting a degree operator, expressing a scale-based comparison over relevant alternatives. The alternatives are determined by focus in the case of aspectual schon (Krifka 2000), but are restricted to the polar alternatives p and ¬p in the case of modal schon. Semantically, modal schon introduces a presupposition to the effect that the circumstantial conversational background contains more factual evidence in favor of p than in favor of ¬p, thereby making modal schon the not-at-issue counterpart of the overt comparative form eher ‘rather’ (Herburger & Rubinstein 2014). The analysis incorporates basic insights from earlier analyses of modal schon in a novel way, and it also offers new insights into the underlying workings of modality in natural language as involving propositions rather than possible worlds (Kratzer 1977, 2012).
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatments of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is as yet poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Studies 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Studies 1–3), and far transfer to spoken sentence comprehension (Studies 1–3), functional communication (Studies 2–3), and memory in daily life in IWA (Studies 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-control study with Hungarian-speaking IWA (Study 1) and a multiple-baseline study with German-speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Studies 1 and 2, participants with chronic, post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the task’s difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer to spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures of varying complexity).
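The adaptivity described above lends itself to a brief illustration. The following is a minimal sketch of a performance-contingent staircase for the n-back level; the accuracy thresholds and step rule here are hypothetical, not the values used in the studies.

```python
def adjust_n(n, accuracy, up=0.90, down=0.70):
    """Illustrative adaptive rule: raise the n-back level after a
    high-accuracy block, lower it after a low-accuracy block.
    The 90%/70% thresholds are assumptions, not the study's values."""
    if accuracy >= up:
        return n + 1
    if accuracy < down and n > 1:
        return n - 1
    return n

# simulate a short run of training blocks
n = 1
for acc in [0.95, 0.92, 0.65, 0.80, 0.93]:
    n = adjust_n(n, acc)
```

A staircase of this kind keeps each participant near the difficulty level they can just manage, which is the sense in which training stays "optimal" as performance changes.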
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six significantly improved in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., far transfer effects). In addition, we also found far transfer to functional communication (in two participants out of three in Study 2) and everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Studies 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability, suggesting that the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near and far transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the 17 included studies. Poor internal validity was mainly due to the use of inappropriate designs, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of an appropriate analysis or of justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: Results of the empirical studies suggest that WM can be improved with a computerized and adaptive WM training, and that improvements can lead to transfer effects to spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements in spoken sentence comprehension were not specific to certain syntactic structures (i.e., non-canonical complex sentences) suggests that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results generalize to the population level of IWA. Future studies are needed to identify mechanisms that may generalize to at least a subpopulation of IWA, as well as to investigate baseline non-linguistic cognitive and language abilities that may play a role in transfer effects and the maintenance of such effects. These may require larger yet homogeneous samples.
Recent treatment protocols have been successful in improving working memory (WM) in individuals with aphasia. However, the evidence to date is scarce, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is as yet poorly understood. To address these issues, we conducted a multiple baseline study with three German-speaking individuals with chronic post-stroke aphasia. Participants practised two computerised WM tasks (n-back with pictures and n-back with spoken words) four times a week for a month, targeting two WM processes: updating WM representations and resolving interference. All participants showed improvement on at least one measure of spoken sentence comprehension and everyday memory activities. Two of them also showed improvement on measures of WM and functional communication. Our results suggest that WM can be improved through computerised training in chronic aphasia, and that this can transfer to spoken sentence comprehension and functional communication in some individuals.
Spectral change and duration as cues in Australian English listeners' front vowel categorization
(2018)
Australian English /iː/, /ɪ/, and /ɪə/ exhibit almost identical average first (F1) and second (F2) formant frequencies and differ in duration and vowel inherent spectral change (VISC). The cues of duration, F1 × F2 trajectory direction (TD) and trajectory length (TL) were assessed in listeners' categorization of /iː/ and /ɪə/ compared to /ɪ/. Duration was important for distinguishing both /iː/ and /ɪə/ from /ɪ/. TD and TL were important for categorizing /iː/ versus /ɪ/, whereas only TL was important for /ɪə/ versus /ɪ/. Finally, listeners' use of duration and VISC was not mutually affected for either vowel compared to /ɪ/.
Speech scientists have long noted that the qualities of naturally produced vowels do not remain constant over their durations, regardless of whether they are nominally "monophthongs" or "diphthongs". Recent acoustic corpora show that there are consistent patterns of first (F1) and second (F2) formant frequency change across different vowel categories. The three Australian English (AusE) close front vowels /iː, ɪ, ɪə/ provide a striking example: while their midpoint or mean F1 and F2 frequencies are virtually identical, their spectral change patterns differ distinctly. The results indicate that, despite the distinct patterns of spectral change of AusE /iː, ɪ, ɪə/ in production, their perceptual relevance is not uniform, but rather vowel-category dependent.
This study examines the influence of orthography on the processing of reduced word forms. For this purpose, we compared the impact of phonological variation with the impact of spelling-sound consistency on the processing of words that may be produced with or without the vowel schwa. Participants learnt novel French words in which the vowel schwa was present or absent in the first syllable. In Experiment 1, the words were consistently produced without schwa or produced in a variable manner (i.e., sometimes produced with and sometimes produced without schwa). In Experiment 2, words were always produced in a consistent manner, but an orthographic exposure phase was included in which words that were produced without schwa were either spelled with or without the letter <e>. Results from naming and eye-tracking tasks suggest that both phonological variation and spelling-sound consistency influence the processing of spoken novel words. However, the influence of phonological variation outweighs the effect of spelling-sound consistency. Our findings therefore suggest that the influence of orthography on the processing of reduced word forms is relatively small.
This tutorial analyzes voice onset time (VOT) data from Dongbei (Northeastern) Mandarin Chinese and North American English to demonstrate how Bayesian linear mixed models can be fit using the programming language Stan via the R package brms. Through this case study, we demonstrate some of the advantages of the Bayesian framework: researchers can (i) flexibly define the underlying process that they believe to have generated the data; (ii) obtain direct information regarding the uncertainty about the parameter that relates the data to the theoretical question being studied; and (iii) incorporate prior knowledge into the analysis. Getting started with Bayesian modeling can be challenging, especially when one is trying to model one’s own (often unique) data. It is difficult to see how one can apply general principles described in textbooks to one’s own specific research problem. We address this barrier to using Bayesian methods by providing three detailed examples, with source code to allow easy reproducibility. The examples presented are intended to give the reader a flavor of the process of model-fitting; suggestions for further study are also provided. All data and code are available from: https://osf.io/g4zpv.
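The tutorial's models are hierarchical and fit with brms in R. As a language-neutral illustration of advantages (ii) and (iii), the sketch below performs a conjugate normal-normal Bayesian update in Python: a prior belief about a mean VOT is combined with data to yield a posterior mean and a posterior standard deviation that quantifies the remaining uncertainty. The VOT values and prior settings are hypothetical, and this closed-form update is a deliberately simplified stand-in for the brms/Stan workflow, not the tutorial's actual models.

```python
import math

def posterior_normal(data, prior_mean, prior_sd, obs_sd):
    """Conjugate normal-normal update for a mean with known
    observation SD. A toy stand-in for MCMC-based fitting:
    precisions (inverse variances) of prior and data simply add."""
    n = len(data)
    xbar = sum(data) / n
    prior_prec = 1 / prior_sd**2
    data_prec = n / obs_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * xbar) / post_prec
    return post_mean, math.sqrt(1 / post_prec)

# hypothetical VOT measurements (ms) for an aspirated stop
vots = [75.0, 82.0, 68.0, 90.0, 79.0]
mean, sd = posterior_normal(vots, prior_mean=70.0, prior_sd=20.0, obs_sd=10.0)
```

Even in this toy case the posterior SD makes the uncertainty about the parameter explicit, and changing `prior_mean`/`prior_sd` shows directly how prior knowledge enters the analysis.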
It is well-known in statistics (e.g., Gelman & Carlin, 2014) that treating a result as publishable just because the p-value is less than 0.05 leads to overoptimistic expectations of replicability. These effects get published, leading to an overconfident belief in replicability. We demonstrate the adverse consequences of this statistical significance filter by conducting seven direct replication attempts (268 participants in total) of a recent paper (Levy & Keller, 2013). We show that the published claims are so noisy that even non-significant results are fully compatible with them. We also demonstrate the contrast between such small-sample studies and a larger-sample study; the latter generally yields a less noisy estimate but also a smaller effect magnitude, which looks less compelling but is more realistic. We reiterate several suggestions from the methodology literature for improving current practices.
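The significance filter is easy to demonstrate by simulation. In the sketch below (illustrative numbers, not those of the replication studies), each simulated "study" estimates a true effect of 10 with a standard error of 20; averaging only the estimates that clear the 5% threshold inflates the apparent effect severalfold, the Type M error of Gelman & Carlin (2014).

```python
import random
import statistics

random.seed(1)

def simulate(true_effect=10.0, se=20.0, n_sims=5000, crit=1.96):
    """Toy simulation of the significance filter: each 'study' yields
    a noisy estimate of the true effect; keeping only estimates with
    |estimate / se| > 1.96 biases the average published effect upward."""
    estimates = [random.gauss(true_effect, se) for _ in range(n_sims)]
    significant = [e for e in estimates if abs(e / se) > crit]
    return statistics.mean(significant)

mean_sig = simulate()  # far above the true effect of 10
```

With these settings the studies are badly underpowered, so the filtered average lands several times above the true effect, which is exactly why a published significant estimate from a small, noisy study looks more compelling than it should.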
Both social perception and temperament in young infants have been related to social functioning later in life. Previous functional Near-Infrared Spectroscopy (fNIRS) data (Lloyd-Fox et al., 2009) showed larger blood-oxygenation changes for social compared to non-social stimuli in the posterior temporal cortex of five-month-old infants. We sought to replicate and extend these findings by using fNIRS to study the neural basis of social perception in relation to infant temperament (Negative Affect) in 37 five-to-eight-month-old infants. Infants watched short videos displaying either hand and facial movements of female actors (social dynamic condition) or moving toys and machinery (non-social dynamic condition), while fNIRS data were collected over temporal brain regions. Negative Affect was measured using the Infant Behavior Questionnaire. Results showed significantly larger blood-oxygenation changes in the right posterior-temporal region in the social compared to the non-social condition. Furthermore, this differential activation was smaller in infants showing higher Negative Affect. Our results replicate those of Lloyd-Fox et al. and confirm that five-to-eight-month-old infants show cortical specialization for social perception. Furthermore, the decreased cortical sensitivity to social stimuli in infants showing high Negative Affect may be an early biomarker for later difficulties in social interaction.
During a cue-distractor task, participants repeatedly produce syllables prompted by visual cues. Distractor syllables are presented to participants via headphones 150 ms after the visual cue (before any response). The task has been used to demonstrate perceptuomotor integration effects (perception effects on production): response times (RTs) speed up as the distractor shares more phonetic properties with the response. Here it is demonstrated that perceptuomotor integration is not limited to RTs. Voice Onset Times (VOTs) of the distractor syllables were systematically varied and their impact on responses was measured. Results demonstrate trial-specific convergence of response syllables to VOT values of distractor syllables.
In a preferential looking paradigm, we studied how children's looking behavior and pupillary response were modulated by the degree of phonological mismatch between the correct label of a target referent and its manipulated form. We manipulated degree of mismatch by introducing one or more featural changes to the target label. Both looking behavior and pupillary response were sensitive to degree of mismatch, corroborating previous studies that found differential responses in one or the other measure. Using time-course analyses, we present for the first time results demonstrating full separability among conditions (detecting difference not only between one vs. more, but also between two and three featural changes). Furthermore, the correct labels and small featural changes were associated with stable target preference, while large featural changes were associated with oscillating looking behavior, suggesting significant shifts in looking preference over time. These findings further support and extend the notion that early words are represented in great detail, containing subphonemic information.
Deep learning is a sub-field of machine learning that has recently gained substantial popularity in various domains such as computer vision, automatic speech recognition, natural language processing, and bioinformatics. Deep-learning techniques are able to learn complex feature representations from raw signals and thus also have potential to improve signal processing in the context of brain-computer interfaces (BCIs). However, they typically require large amounts of data for training - much more than what can often be provided with reasonable effort when working with brain activity recordings of any kind. In order to still leverage the power of deep-learning techniques with limited available data, special care needs to be taken when designing the BCI task, defining the structure of the deep model, and choosing the training method. This chapter presents example approaches for the specific scenario of music-based brain-computer interaction through electroencephalography - in the hope that these will prove to be valuable in different settings as well. We explain important decisions for the design of the BCI task and their impact on the models and training techniques that can be used. Furthermore, we present and compare various pre-training techniques that aim to improve the signal-to-noise ratio. Finally, we discuss approaches to interpret the trained models.
For Charles Goodwin, Chuck
(2018)
This appreciation will not be a testimonial to Chuck’s numerous publications and research achievements – I am sure that others will have a lot to say about those. Instead, I will say something about how I personally experienced and think of him, as a researcher personality, based on the limited time and the few occasions that we have had together.
We report two corpus analyses to examine the impact of animacy, definiteness, givenness and type of referring expression on the ordering of double objects in the spontaneous speech of German-speaking two- to four-year-old children and the child-directed speech of their mothers. The first corpus analysis revealed that definiteness, givenness and type of referring expression influenced word order variation in child language and child-directed speech when the type of referring expression distinguished between pronouns and lexical noun phrases. These results correspond to previous child language studies in English (e.g., de Marneffe et al. 2012). Extending the scope of previous studies, our second corpus analysis examined the role of different pronoun types on word order. It revealed that word order in child language and child-directed speech was predictable from the types of pronouns used. Different types of pronouns were associated with different sentence positions but also showed a strong correlation to givenness and definiteness. Yet, the distinction between pronoun types diminished the effects of givenness so that givenness had an independent impact on word order only in child-directed speech but not in child language. Our results support a multi-factorial approach to word order in German. Moreover, they underline the strong impact of the type of referring expression on word order and suggest that it plays a crucial role in the acquisition of the factors influencing word order variation.
Language and Arithmetic
(2018)
We examined cross-domain semantic priming effects between arithmetic and language. We paired subtractions with their linguistic equivalent, exception phrases (EPs) with positive quantifiers (e.g., "everybody except John") while pairing additions with their own linguistic equivalent, EPs with negative quantifiers (e.g., "nobody except John"; Moltmann, 1995). We hypothesized that EPs with positive quantifiers prime subtractions and inhibit additions while EPs with negative quantifiers prime additions and inhibit subtractions. Furthermore, we expected similar priming and inhibition effects from arithmetic into semantics. Our design allowed for a bidirectional analysis by using one trial's target as the prime for the next trial. Two experiments failed to show significant priming effects in either direction. Implications and possible shortcomings are explored in the general discussion.
Imageability is a psycholinguistic variable that indicates how well a word gives rise to a mental image or sensory experience. Imageability ratings are used extensively in psycholinguistic, neuropsychological, and aphasiological studies. However, little formal knowledge exists about whether and how these ratings are associated between and within languages. Fifteen imageability databases were cross-correlated using nonparametric statistics. Some of these corresponded to unpublished data collected within a European research network, the Collaboration of Aphasia Trialists (COST IS1208). All but four of the correlations were significant. The average strength of the correlations (rho = .68) and the variance explained (R² = 46%) were moderate. This implies that factors other than imageability may explain 54% of the results. Imageability ratings often correlate across languages. Different, possibly interacting factors may explain the moderate strength and variance explained in the correlations: (1) linguistic and cultural factors; (2) intrinsic differences between the databases; (3) range effects; (4) small numbers of words in each database, equivalent words, and participants; and (5) mean age of the participants. The results suggest that imageability ratings may be used cross-linguistically. However, further understanding of the factors explaining the variance in the correlations will be needed before research and practical recommendations can be made.
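The reported variance explained follows directly from the average correlation, since the coefficient of determination is the square of the correlation coefficient; a quick check reproduces both figures:

```python
# R-squared is the square of the average correlation rho = .68,
# and the unexplained share is its complement.
rho = 0.68
r_squared = rho ** 2          # ~0.46, i.e. 46% variance explained
unexplained = 1 - r_squared   # ~0.54, the 54% attributed to other factors
```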
Previous research with younger adults has revealed differences between native (L1) and non-native late-bilingual (L2) speakers with respect to how morphologically complex words are processed. This study examines whether these L1/L2 differences persist into old age. We tested masked-priming effects for derived and inflected word forms in older L1 and L2 speakers of German and compared them to results from younger L1 and L2 speakers on the same experiment (mean ages: 62 vs. 24). We found longer overall response times paired with better accuracy scores for older (L1 and L2) participants than for younger participants. The priming patterns, however, were not affected by chronological age. While both L1 and L2 speakers showed derivational priming, only the L1 speakers demonstrated inflectional priming. We argue that general performance in both L1 and L2 is affected by aging, but that the more profound differences between native and non-native processing persist into old age.
Purpose: This study reports on a cross-sectional investigation of lingual coarticulation in 57 typically developing German children (4 cohorts from 3.5 to 7 years of age) as compared with 12 adults. It examines whether the organization of lingual gestures for intrasyllabic coarticulation differs as a function of age and consonantal context. Method: Using the technique of ultrasound imaging, we recorded movement of the tongue articulator during the production of pseudowords, including various vocalic and consonantal contexts. Results: Results from linear mixed-effects models show greater lingual coarticulation in all groups of children as compared with adults, with a significant decrease from the kindergarten years (at ages 3, 4, and 5 years) to the end of the first year of primary school (at age 7 years). Additional differences in coarticulation degree were found across and within age groups as a function of the onset consonant identity (/b/, /d/, and /g/). Conclusions: Results support the view that, although coarticulation degree decreases with age, children do not organize consecutive articulatory gestures with a uniform organizational scheme (e.g., segmental or syllabic). Instead, results suggest that coarticulatory organization is sensitive to the underlying articulatory properties of the segments combined.