In times of ongoing biodiversity loss, understanding how communities are structured and which mechanisms and local adaptations underlie the patterns we observe in nature is crucial for predicting how future ecological and anthropogenic changes might affect local and regional biodiversity. Aquatic zooplankton are a group of primary consumers that represent a critical link in the food chain, providing nutrients for the entire food web. Understanding the adaptability and structure of zooplankton communities is therefore essential. In this work, the genetic basis of the different temperature adaptations of two freshwater rotifers of a formerly cryptic species complex (Brachionus calyciflorus) with seasonally shifted (i.e., temperature-dependent) occurrence was investigated in order to understand the overall genetic diversity and the evolutionary scenario for putative adaptations to different temperature regimes. Furthermore, this work aimed to clarify to what extent the different temperature adaptations may represent a niche partitioning process, thus enabling coexistence. The findings were then embedded in a metacommunity context to understand how zooplankton communities assemble in a kettle hole metacommunity located in the northeastern German "Uckermark" region and which underlying processes contribute to the biodiversity patterns we observe. Using a combined approach of newly generated mitochondrial resources (genomes/cds) and the analysis of a candidate gene for temperature adaptation (Heat Shock Protein 40kDa), I showed that the global representatives of B. calyciflorus s.s. are genetically more similar to each other than to B. fernandoi (average pairwise nucleotide diversity: 0.079 intraspecific vs. 0.257 interspecific), indicating that the two species carry different standing genetic variation. In addition to differential expression in the thermotolerant B. calyciflorus s.s. and the thermosensitive B.
fernandoi, the HSP 40kDa also showed structural variation with eleven fixed and six positively selected sites, some of which are located in functional regions of the protein. The estimated divergence time of ~25-29 Myr, combined with the fixed sites and a prevalence of ancestral amino acids in B. calyciflorus s.s., indicates that B. calyciflorus s.s. remained in the ancestral niche, while B. fernandoi partitioned into a new niche. The comparison of mitochondrial and nuclear markers (HSP 40kDa, ITS1, COI) revealed a hybridisation event between the two species. However, as hybridisation between the two species is rare, it can be concluded that the temporally isolated niches (i.e., seasonally shifted occurrence) they inhabit based on their different temperature preferences most likely represent a pre-zygotic isolation mechanism that allows sympatric occurrence while maintaining species boundaries. To determine the processes underlying zooplankton community assembly, a zooplankton metacommunity comprising 24 kettle holes was sampled over a two-year period. Active (i.e., water samples) and dormant communities (i.e., dormant eggs hatched from sediment) were identified using a two-fragment DNA metabarcoding approach (COI and 18S). Species richness and diversity as well as community composition were analysed considering spatial, temporal and environmental parameters. The analysis revealed that environmental filtering based on parameters such as pH, the size and location of the habitat patch (i.e., kettle hole) and the surrounding field crops largely determined zooplankton community composition (explained variance: Bray-Curtis dissimilarities: 10.5%; Jaccard dissimilarities: 12.9%), indicating that adaptation to a particular habitat is a key feature of zooplankton species in this system.
While the spatial configuration of the kettle holes played a minor role (explained variance: Bray-Curtis dissimilarities: 2.8%; Jaccard dissimilarities: 5.5%), the individual kettle hole sites had a significant influence on community composition. This suggests monopolisation/priority effects (i.e., via dormant communities) of certain species in individual kettle holes. As environmental filtering is the dominant process structuring these zooplankton communities, the system could be significantly affected by future land-use change, pollution and climate change.
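The two dissimilarity measures used in the variance partitioning above weigh different information: Bray-Curtis operates on species abundances, Jaccard only on presence/absence. As a minimal illustration (the species counts below are hypothetical, not data from the study), both can be computed as:

```python
import numpy as np

def bray_curtis(a, b):
    # Abundance-based dissimilarity: sum of absolute count differences
    # divided by the total count across both sites (0 = identical, 1 = disjoint).
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

def jaccard(a, b):
    # Presence/absence-based dissimilarity: 1 - (shared species / total species).
    a, b = np.asarray(a) > 0, np.asarray(b) > 0
    return 1 - np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Hypothetical counts of four species at two kettle holes
hole_a = [10, 0, 3, 7]
hole_b = [4, 2, 0, 7]
print(bray_curtis(hole_a, hole_b))  # abundance-weighted dissimilarity
print(jaccard(hole_a, hole_b))      # composition-only dissimilarity
```

Because Jaccard ignores abundances, the two measures can rank the same pair of sites differently, which is why the study reports both.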
With the recent proliferation of sensors, the data processing of many applications is handled by cloud computing. Processing this data in the cloud, however, raises concerns regarding, e.g., privacy, latency, and single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) quality. My contribution focuses on the network part, where I study the relation between acoustic and network quality when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can degrade quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints when the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones can improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
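The job-to-node allocation problem from part one can be illustrated with a deliberately simple heuristic. The thesis develops optimization formulations and (meta-)heuristics that also account for wireless aspects; the greedy rule, job names, and capacities below are hypothetical and ignore interference and routing entirely:

```python
def greedy_allocate(jobs, capacities):
    # Toy heuristic: place jobs (largest computational demand first) on the
    # node with the most remaining capacity. This is NOT the thesis algorithm;
    # it only sketches the allocation problem, without wireless constraints.
    remaining = dict(capacities)
    placement = {}
    for job, demand in sorted(jobs.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)
        if remaining[node] < demand:
            raise ValueError(f"no node can host job {job!r}")
        placement[job] = node
        remaining[node] -= demand
    return placement

# Hypothetical acoustic-processing jobs (demand units) and node capacities
print(greedy_allocate({"fft": 3, "beamform": 5, "vad": 1},
                      {"node1": 6, "node2": 4}))
```

A real formulation would additionally constrain link rates and interference between nodes, which is what makes the joint wireless/computation allocation hard.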
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected 700 million patients by 2045, placing an economic burden on societies. Type 2 diabetes mellitus (T2DM), representing more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing T2DM. Lifestyle modification, i.e., following a healthy diet and engaging in physical activity, is the primary successful treatment and prevention method for T2DM. However, many patients do not achieve the recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercise through WB-EMS. This dissertation project aims to investigate whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study aimed to investigate the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W, research question 1, sub-question b). The second study aimed to compare differences in exercise parameters between superimposed WB-EMS (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS (WB-EMS-NW) and conventional Nordic walking (CON-NW), on a treadmill (research question 2). Both studies were conducted with groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety and to avoid pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the test criteria, and five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rating of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a compilation of absolute and relative measures of reliability. Outcome measures in study 2 were analysed using multifactorial analyses of variance.
Results: Reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD and RPE, with no statistically significant differences for WB-EMS-W during the 10MISWT. However, no differences in the outcome variables were found compared to conventional walking. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW on the outcome variables VO2, rel.VO2 and lactate, with both factors leading to higher values. However, the differences in VO2 and relative VO2 are within the range of biological variability (± 12%). The factor combination EMS∗W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; the RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD and RPE during the 10MISWT under WB-EMS-W, confirming prior research on the test. In healthy, moderately active men, the test appears to be limited technically rather than physiologically. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W, due to the different perception of current intensity during exercise and at rest. For the second part of the project, a treadmill test with constant walking speed was therefore used to adjust the individual maximum tolerable current intensity. The treadmill test showed a significant increase in metabolic demand during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentration. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies report higher VO2 values, our results are in line with those of other studies using the same stimulation frequency. Due to the minor clinical relevance, the use of WB-EMS as an exercise intensification tool during walking and Nordic walking is limited; the high device cost should also be considered. Habituation to WB-EMS could increase current intensity tolerance and VO2 and make it a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. This supposed benefit should be further investigated scientifically.
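Test-retest reliability of the kind reported above is commonly quantified with an intraclass correlation coefficient. A minimal sketch of the consistency-type, single-measure ICC(3,1) from a two-way ANOVA decomposition follows; the data are made up, and the dissertation's "compilation of absolute and relative measures" may well have used other ICC forms alongside it:

```python
import numpy as np

def icc_3_1(data):
    # Consistency-type, single-measure ICC(3,1) from a two-way ANOVA
    # decomposition; rows are subjects, columns are test sessions.
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_sess = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_tot = ((data - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_tot - ss_subj - ss_sess) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Made-up VO2peak values (ml/kg/min) for four subjects in two sessions
sessions = np.array([[30.0, 31.0],
                     [35.0, 33.0],
                     [40.0, 41.0],
                     [45.0, 44.0]])
print(icc_3_1(sessions))
```

Because ICC(3,1) measures consistency, a constant offset between sessions (e.g., a learning effect shifting every subject equally) does not lower the coefficient; absolute-agreement variants would penalize it.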
Hybrid nanomaterials combine the individual properties of different types of nanoparticles. Some strategies for developing new nanostructures at larger scales rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft template, nanoparticles and other surface-active agents are incorporated into immiscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions and thereby control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as the surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers the versatility to use different nanoparticle compositions, provided the surface functionalization is kept, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with an auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated for forming diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes towards the production of sophisticated nanomaterials at larger scales.
Volcanoes are among the Earth's most dynamic zones and are responsible for many changes on our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating the evolving volcanic activity and potentially to predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation in support of timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremor can therefore contribute to improving our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building its information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic-percussive separation (HPS) in musical signal processing, I developed a method to extract harmonic volcanic tremor signals and to detect transient events in seismic recordings. This provides a clean tremor signal suitable for tremor investigation, along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometers, which are highly contaminated by noise. Its advantage over other denoising techniques is that it does not introduce distortion into broadband earthquake waveforms, which makes it reliable for various applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremor. Using this method, the seismic signatures of major eruptive phases can be detected automatically, which helps to establish a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremor prior to an eruption, which could serve as an indicator of imminent eruptive activity. Finally, the extracted patterns of seismicity and their temporal variations provide an explanation for the transition mechanism between eruptive phases.
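The core idea behind HPS can be sketched with median filtering of a magnitude spectrogram: medians taken along time enhance sustained, tremor-like horizontal ridges, while medians taken along frequency enhance broadband transients. The sketch below (window length and binary masking rule are illustrative choices, not the exact method of the thesis) builds such component masks:

```python
import numpy as np

def hps_masks(S, kernel=5):
    # Median-filter a magnitude spectrogram S (freq x time) along the time
    # axis to enhance sustained, tremor-like ridges (harmonic part) and
    # along the frequency axis to enhance transient events (percussive
    # part), then assign each bin to the stronger component (binary masks).
    pad = kernel // 2
    St = np.pad(S, ((0, 0), (pad, pad)), mode="edge")
    H = np.median(np.lib.stride_tricks.sliding_window_view(St, kernel, axis=1), axis=-1)
    Sf = np.pad(S, ((pad, pad), (0, 0)), mode="edge")
    P = np.median(np.lib.stride_tricks.sliding_window_view(Sf, kernel, axis=0), axis=-1)
    return H >= P, P > H  # harmonic mask, percussive mask

# Toy spectrogram: one sustained "tremor" line, one broadband "transient"
S = np.zeros((9, 9))
S[4, :] = 1.0   # horizontal ridge (tremor-like)
S[:, 2] = 2.0   # vertical ridge (transient event)
harmonic, percussive = hps_masks(S)
```

Applying the harmonic mask to the complex spectrogram and inverting the transform yields the cleaned tremor signal; the percussive residual serves as a characteristic function for transient detection.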
In the last century, several astronomical measurements have supported the view that a significant fraction (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact with the electromagnetic force; in other words, it does not reflect, absorb or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and that resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking into account both the energy and the angular-extension dependency of observed events in the unbinned maximum likelihood estimation. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons with energies between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project.
Its goal was to combine already published data on 20 dSphs from five different experiments (Fermi-LAT, MAGIC, H.E.S.S., VERITAS and HAWC) in order to set upper limits on the WIMP annihilation cross-section over the widest mass range ever reported.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data the machine learning model was trained on. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many other machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
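The exact GP regression mentioned above has a closed form, which is what makes it both attractive and, at scale, intractable. As a standard textbook sketch (RBF kernel, hypothetical hyperparameters; not code from the dissertation), the posterior mean and uncertainty at new inputs are:

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, noise=0.1):
    # Exact GP regression with an RBF kernel: returns the posterior mean
    # and standard deviation of the latent function at test inputs Xs.
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise**2 * np.eye(len(X))   # O(n^3) Cholesky: the bottleneck
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = sf**2 - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)
mean, sd = gp_posterior(X, y, np.array([0.5, 10.0]))
```

Far from the training data the predictive standard deviation reverts to the prior, which is exactly the "know when not to trust the predictions" behaviour; the cubic cost of the Cholesky factorization is what motivates the sparse and variational approximations discussed later.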
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those models more flexible and/or probabilistic. The first example is deep Gaussian processes, which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study is Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first gaining a deep understanding of the existing methods and then, based on this, designing better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.” [p. 23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove to be helpful for improving the performance of automatic coreference resolution, which is essential for a good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts that is formed of conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, they are, due to their widespread use in the digital sphere, at the same time highly relevant for applications that seek to extract information or sentiments from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. We are interested not only in the coreference patterns but in the overall discourse behavior of Twitter conversations. To address this, in addition to the coreference relations, we also annotated coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows for separating the tweets from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first conduct a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, the differences between spoken and written modes have been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
Individuals with aphasia vary in the speed and accuracy with which they perform sentence comprehension tasks. Previous results indicate that the performance patterns of individuals with aphasia vary between tasks (e.g., Caplan, DeDe, & Michaud, 2006; Caplan, Michaud, & Hufford, 2013a). Similarly, it has been found that the comprehension performance of individuals with aphasia varies between homogeneous test sentences within and between sessions (e.g., McNeil, Hageman, & Matthews, 2005). These studies ascribed the variability in the performance of individuals with aphasia to random noise. This conclusion would be in line with an influential theory of sentence comprehension in aphasia, the resource reduction hypothesis (Caplan, 2012). However, previous studies did not directly compare variability in language-impaired and language-unimpaired adults. Thus, it is still unclear how variability in sentence comprehension differs between individuals with and without aphasia. Furthermore, the previous studies were carried out exclusively in English, so the findings on variability in sentence processing still need to be replicated in a different language.
This dissertation aims to give a systematic overview of the patterns of variability in sentence comprehension performance in aphasia in German and, based on this overview, to put the resource reduction hypothesis to the test. To reach the first aim, variability was considered on three different dimensions (persons, measures, and occasions), following the classification by Hultsch, Strauss, Hunter, and MacDonald (2011). On the dimension of persons, the thesis compared the performance of individuals with aphasia and language-unimpaired adults. On the dimension of measures, this work explored performance across different sentence comprehension tasks (object manipulation, sentence-picture matching). Finally, on the dimension of occasions, this work compared performance in each task between two test sessions. Several methods were combined to obtain a large and diverse database for studying variability. In addition to the offline comprehension tasks, the self-paced-listening paradigm and the visual-world eye-tracking paradigm were used in this work.
The findings are in line with the previous results. As in the previous studies, variability in sentence comprehension in individuals with aphasia emerged between test sessions and between tasks. Additionally, it was possible to characterize the variability further using hierarchical Bayesian models. For individuals with aphasia, it was shown that both between-task and between-session variability are unsystematic. In contrast to that, language-unimpaired individuals exhibited systematic differences between measures and between sessions. However, these systematic differences occurred only in the offline tasks. Hence, variability in sentence comprehension differed between language-impaired and language-unimpaired adults, and this difference could be narrowed down to the offline measures.
Based on this overview of the patterns of variability, the resource reduction hypothesis was evaluated. According to the hypothesis, the variability in the performance of individuals with aphasia can be ascribed to random fluctuations in the resources available for sentence processing. Given that the performance of the individuals with aphasia varied unsystematically, the results support the resource reduction hypothesis. Furthermore, the thesis proposes that the differences in variability between language-impaired and language-unimpaired adults can also be explained by the resource reduction hypothesis. More specifically, it is suggested that the systematic changes in the performance of language-unimpaired adults are due to decreasing fluctuations in available processing resources. In parallel, the unsystematic variability in the performance of individuals with aphasia could be due to constant fluctuations in available processing resources. In conclusion, the systematic investigation of variability contributes to a better understanding of language processing in aphasia and thus enriches aphasia research.