The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, the so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the one in the Riemannian case.
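For orientation, the APS condition can be written schematically as follows (my notation; in the Lorentzian setting of Bär and Strohmaier the condition is imposed on the Cauchy hypersurfaces bounding the spacetime region):

```latex
% A denotes the induced elliptic, self-adjoint boundary operator with spectral
% projections \chi_I(A); u lies in the domain of the Dirac operator D and
% u|_{\partial M} is its restriction to the boundary. The APS condition requires
\chi_{[0,\infty)}(A)\,\bigl(u|_{\partial M}\bigr) = 0 ,
% i.e. the boundary values lie in the negative spectral subspace
% B_{APS} := \chi_{(-\infty,0)}(A)\, L^2(\partial M; S) .
```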
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first-order elliptic differential operator and that still yield regularity of solutions as well as the Fredholm property of the resulting operator. These boundary conditions can be thought of as deformations given by the graph of a suitable operator mapping the APS boundary condition to its orthogonal complement.
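Schematically, the graph-type deformations take the following form (again my notation, not necessarily that of Bär and Ballmann or of the thesis):

```latex
% Given the APS boundary space B_{APS} \subset L^2(\partial M; S) and a suitable
% bounded operator g : B_{APS} \to B_{APS}^{\perp}, the deformed boundary condition is
B_g := \{\, v + g\,v \;:\; v \in B_{APS} \,\} ,
% and one asks under which assumptions on g the Dirac operator with domain
% \{ u : u|_{\partial M} \in B_g \} remains Fredholm with regular solutions.
```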
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting in order to understand more general types of boundary conditions for the Dirac operator, preserving the Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that, in contrast to the Riemannian case, passing from a Fredholm boundary condition to its orthogonal complement works without complications in the Lorentzian setting. On the other hand, in order to deduce the Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions on the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the last chapter, Chapter 4, a few examples of boundary conditions are worked out by applying the results of the previous chapters. By restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than they do in the Riemannian setting.
Plant metabolism is the main process of converting assimilated carbon into the various compounds crucial for plant growth and therefore crop yield, which makes it an important research topic. Although major advances in understanding the genetic principles contributing to metabolism and yield have been made, little is known about the genetics responsible for trait variation or canalization, even though these concepts have been known for a long time. In light of a growing global population and progressing climate change, understanding the canalization of metabolism and yield seems ever more important to ensure food security. Our group has recently found canalization metabolite quantitative trait loci (cmQTL) for tomato fruit metabolism, showing that the concept of canalization applies to metabolism. In this work, two approaches to investigate plant metabolic canalization and one approach to investigate yield canalization are presented.
In the first project, primary and secondary metabolic data from Arabidopsis thaliana and Phaseolus vulgaris leaf material, obtained from plants grown under different conditions, were used to calculate cross-environment coefficients of variation (CV) or fold-changes of metabolite levels per genotype, which then served as input for genome-wide association studies. While primary metabolites have a lower CV across conditions and show few and mostly weak associations with genomic regions, secondary metabolites have a higher CV and show more, and stronger, metabolite-genome associations. Both potential regulatory genes and metabolic genes are found as candidates, although the metabolic genes are rarely directly related to the target metabolites, suggesting a role both for potential regulatory mechanisms and for metabolic network structure in the canalization of metabolism.
In the second project, candidate genes from the Solanum lycopersicum cmQTL mapping were selected and CRISPR/Cas9-mediated gene-edited tomato lines were created to validate the genes' role in the canalization of metabolism. The mutants obtained either showed strongly aberrant developmental phenotypes or appeared wild type-like. One phenotypically inconspicuous mutant of a pantothenate kinase, selected as a candidate for malic acid canalization, shows a significant increase of the CV across different watering conditions. Another such mutant, of a protein putatively involved in amino acid transport and selected as a candidate for phenylalanine canalization, shows a similar tendency towards an increased CV without reaching statistical significance. This potential role of two genes involved in metabolism supports the hypothesis that the structure of metabolism is relevant for its own stability.
In the third project, a mutant for a putative disulfide isomerase, important for thylakoid biogenesis, is characterized by a multi-omics approach. The mutant had been characterized previously in a yield stability screening and showed a variegated leaf phenotype, ranging from green leaves with wild type levels of chlorophyll, through differently patterned variegated leaves, to completely white leaves almost entirely devoid of photosynthetic pigments. White mutant leaves show wild type transcript levels of photosystem assembly factors, with the exception of ELIP and DEG orthologs, indicating a stagnation at an etioplast-to-chloroplast transition state. Green mutant leaves show an upregulation of these assembly factors, possibly acting as an overcompensation for the partially defective disulfide isomerase, which seems sufficient for proper chloroplast development, as confirmed by a wild type-like proteome. Likely as a result of this phenotype, a general stress response, a shift towards a sink-like tissue, and abnormal thylakoid membranes strongly alter the metabolic profile of white mutant leaves. As the severity and pattern of variegation vary from plant to plant and may be affected by external factors, the effect on yield instability may result from a decanalized ability to fully exploit the whole leaf surface area for photosynthetic activity.
Digital transformation (DT) has not only been a major challenge in recent years; it is also expected to continue to have an enormous impact on our society and economy in the forthcoming decade. On the one hand, digital technologies have emerged that diffuse into and shape our private and professional lives. On the other hand, digital platforms have leveraged the potential of digital technologies to provide new business models. These dynamics have a massive effect on individuals, companies, and entire ecosystems. Digital technologies and platforms have changed the way people consume and interact with each other. Moreover, they offer companies new opportunities to conduct their business in terms of value creation (e.g., business processes), value proposition (e.g., business models), or customer interaction (e.g., communication channels), i.e., the three dimensions of DT. However, they can also become a threat to a company's competitiveness or even survival. Eventually, the emergence, diffusion, and employment of digital technologies and platforms bear the potential to transform entire markets and ecosystems.
Against this background, IS research has explored and theorized the phenomena in the context of DT over the past decade, but not yet to their full extent. This is not surprising given the complexity and pervasiveness of DT, which still requires far more research to understand DT and its interdependencies, both in its entirety and in greater detail, particularly through the IS perspective at the confluence of technology, economy, and society. Consequently, the IS research discipline has identified and emphasized several relevant research gaps for exploring and understanding DT, including empirical data, theories, and knowledge of the dynamic and transformative capabilities of digital technologies and platforms for both organizations and entire industries.
Hence, this thesis aims to address these research gaps on the IS research agenda and consists of two streams. The first stream of this thesis includes four papers that investigate the impact of digital technologies on organizations. In particular, these papers study the effects of new technologies on firms (paper II.1) and their innovative capabilities (II.2), the nature and characteristics of data-driven business models (II.3), and current developments in research and practice regarding on-demand healthcare (II.4). Consequently, the papers provide novel insights on the dynamic capabilities of digital technologies along the three dimensions of DT. Furthermore, they offer companies some opportunities to systematically explore, employ, and evaluate digital technologies to modify or redesign their organizations or business models.
The second stream comprises three papers that explore and theorize the impact of digital platforms on traditional companies, markets, and the economy and society at large. Here, paper III.1 examines the implications for the business of traditional insurance companies arising from the emergence and diffusion of multi-sided platforms, particularly in terms of value creation, value proposition, and customer interaction. Paper III.2 approaches the platform impact more holistically and investigates how the ongoing digital transformation and "platformization" in healthcare lastingly transform value creation in the healthcare market. Paper III.3 moves on from the level of single businesses or markets to the regulatory problems that the platform economy poses for the economy and society, and proposes appropriate regulatory approaches for addressing these problems. Hence, these papers bring new insights to the table about the transformative capabilities of digital platforms for incumbent companies in particular and entire ecosystems in general.
Altogether, this thesis contributes to the understanding of the impact of DT on organizations and markets by conducting multiple case-study analyses that are systematically reflected against the current state of the art in research. On this empirical basis, the thesis also provides conceptual models, taxonomies, and frameworks that help describe, explain, or predict the impact of digital technologies and digital platforms on companies, markets, and the economy or society at large from an interdisciplinary viewpoint.
Inter-brain synchronization is primarily investigated during social interactions but had not been examined during coupled muscle action between two persons until now. It was previously shown that mechanical muscle oscillations can develop coherent behavior between two isometrically interacting persons. This case study investigated whether inter-brain synchronization appears thereby, and whether differences in inter- and intrapersonal muscle and brain coherence exist with regard to two different types of isometric muscle action. Electroencephalography (EEG) and mechanomyography/mechanotendography (MMG/MTG) of the right elbow extensors were recorded during six fatiguing trials of two coupled isometrically interacting participants (70% MVIC). One partner performed the holding and one the pushing isometric muscle action (HIMA/PIMA; tasks changed). The wavelet coherence of all signals (EEG, MMG/MTG, force, ACC) was analyzed intra- and interpersonally. The five longest coherence patches in 8–15 Hz and their weighted frequency were compared between real vs. random pairs and between HIMA vs. PIMA. Real vs. random pairs showed significantly higher coherence for intra-muscle, intra-brain, and inter-muscle-brain activity (p < 0.001 to 0.019). Inter-brain coherence was significantly higher for real vs. random pairs for EEG of right and central areas and for sub-regions of left EEG (p = 0.002 to 0.025). Interpersonal muscle-brain synchronization was significantly higher than intrapersonal synchronization, and it was significantly higher for HIMA vs. PIMA. These preliminary findings indicate that inter-brain synchronization can arise during muscular interaction. It is hypothesized that both partners merge into one oscillating neuromuscular system. The results reinforce the hypothesis that HIMA is characterized by more complex control strategies than PIMA. This pilot study suggests investigating the topic further to verify these results in a larger sample. The findings could contribute to the basic understanding of motor control and are relevant for functional diagnostics such as the manual muscle test, which is applied in several disciplines, e.g., neurology and physiotherapy.
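As a rough illustration of the kind of pairwise coherence analysis described above, the following sketch computes ordinary magnitude-squared coherence with SciPy and averages it over the 8–15 Hz band; the study itself used wavelet coherence, and the sampling rate and signals below are placeholders:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0  # assumed sampling rate in Hz

# mmg_a, mmg_b stand in for the MMG (or EEG) signals of the two partners
rng = np.random.default_rng(0)
mmg_a = rng.standard_normal(10 * int(fs))  # placeholder data
mmg_b = rng.standard_normal(10 * int(fs))  # placeholder data

# magnitude-squared coherence between the two signals
f, cxy = coherence(mmg_a, mmg_b, fs=fs, nperseg=1024)

# average coherence in the 8-15 Hz band analysed in the study
band = (f >= 8) & (f <= 15)
print("mean 8-15 Hz coherence:", cxy[band].mean())
```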
Previous research demonstrated a close bidirectional relationship between spatial attention and the manual motor system. However, it is unclear whether an explicit hand movement is necessary for this relationship to appear. A novel method with high temporal resolution, bimanual grip force registration, sheds light on this issue. Participants held two grip force sensors while being presented with lateralized stimuli (exogenous attentional shifts, Experiment 1), left- or right-pointing central arrows (endogenous attentional shifts, Experiment 2), or the words "left" or "right" (endogenous attentional shifts, Experiment 3). There was an early interaction between the presentation side or arrow direction and grip force: lateralized objects and central arrows led to a larger increase of the ipsilateral force and a smaller increase of the contralateral force. Surprisingly, words led to the opposite pattern: larger force increase in the contralateral hand and smaller force increase in the ipsilateral hand. The effect was stronger and appeared earlier for lateralized objects (60 ms after stimulus presentation) than for arrows (100 ms) or words (250 ms). Thus, processing visuospatial information automatically activates the manual motor system, but the timing and direction of this effect vary depending on the type of stimulus.
Public administration will have to undergo major reforms in the coming decades. An important factor for the success of planned organizational change, in the administrative context as well, is employees' attitude towards this change. Particularly among employees without leadership responsibility, negative attitudes towards change are common. This is also linked to the high level of stress experienced in such situations. Direct supervisors in particular are an important influence on these attitudes. This bachelor's thesis therefore examines this influence and concentrates on the effect of supervisors' social support on employees' attitudes, since social support has a proven mitigating effect on perceived stress.
Following social support theory, social support is differentiated into the four subtypes identified there, namely appraisal, emotional, informational, and instrumental support. In a literature analysis, an influence could be demonstrated for two of the four types of support (emotional, informational). For the other types of support there are also indications of a positive effect. This suggests that direct supervisors act as sources of support during public administration reforms and can thereby positively influence employees' attitudes towards change. Moreover, the differences in the results across support types may indicate that, depending on the situation, different kinds of support are more or less relevant. For leaders in this context, the results of the thesis indicate that supportive contact with direct subordinates is important in phases of change and that the demands on leaders go beyond merely instructing these subordinates.
Over the past decades, there has been a growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, droughts, etc., pose to society. While extreme events have diverse definitions across various disciplines, ranging from earth science to neuroscience, they are characterized mainly as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare, it has been found in various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they have large temporal gaps due to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually not helpful for decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a specific distance metric, mainly designed to measure the similarity/dissimilarity between point-process-like data. I combine ED with recurrence plot techniques to identify the recurrence properties of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties and serial dependency in flood events.
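A minimal sketch of combining an edit distance with a recurrence plot for event-like series is given below; the cost structure follows a Victor-Purpura-style metric, and the exact ED variant, windowing and threshold used in the thesis may differ:

```python
import numpy as np

def edit_distance(s, t, q=1.0):
    """Edit distance between two event-time sequences s and t:
    unit cost for inserting/deleting an event, q*|dt| for shifting one."""
    n, m = len(s), len(t)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                                     # delete
                          D[i, j - 1] + 1,                                     # insert
                          D[i - 1, j - 1] + q * abs(s[i - 1] - t[j - 1]))      # shift
    return D[n, m]

def recurrence_matrix(windows, eps):
    """Recurrence plot: R[i, j] = 1 if the event patterns in windows i and j
    are closer than eps in edit distance."""
    N = len(windows)
    R = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(N):
            R[i, j] = int(edit_distance(windows[i], windows[j]) <= eps)
    return R

# toy example: yearly windows of flood-event times (placeholder data)
windows = [np.array([3.0, 120.0]), np.array([5.0, 118.0, 300.0]), np.array([250.0])]
print(recurrence_matrix(windows, eps=2.5))
```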
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, clustering coefficient, etc., can be used to quantify the network’s topology. We apply this methodology to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and the orography of the region.
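A compact sketch of how such a functional climate network could be assembled from a pairwise similarity (or distance) matrix is shown below, here with networkx; the grid size, similarity values and link threshold are placeholders:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_nodes = 20                            # spatial grid points (placeholder)
dist = rng.random((n_nodes, n_nodes))   # pairwise ED between event series (placeholder)
dist = (dist + dist.T) / 2              # make the distance matrix symmetric
np.fill_diagonal(dist, 0.0)

# link the most similar pairs: keep e.g. the 10% smallest pairwise distances
threshold = np.quantile(dist[np.triu_indices(n_nodes, k=1)], 0.10)
adjacency = ((dist <= threshold) & (dist > 0)).astype(int)

G = nx.from_numpy_array(adjacency)

# network measures used to characterise the spatial organisation of extremes
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)
print(degree[0], betweenness[0], clustering[0])
```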
The identification of precursors associated with the occurrence of extreme events in the near future is extremely important for preparing the public for an upcoming disaster and for mitigating the potential risks associated with such events. With this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events using the echo state network, a type of recurrent neural network that is part of the reservoir computing framework. However, unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I try to predict these structures using data from another dynamic variable (the passive variable), which does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of the events, i.e., the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the efficiency of the machine in predicting the forthcoming extreme events.
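A minimal echo state network sketch in the spirit of this precursor-prediction setup follows (passive variable as input, active variable as target); the toy signals, reservoir size, spectral radius and ridge penalty are illustrative choices, not the settings used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy signals: passive variable u (input) and active variable y (target)
T = 2000
u = np.sin(np.linspace(0, 60, T)) + 0.1 * rng.standard_normal(T)
y = np.roll(u, -5)                          # pretend the active variable lags the passive one

n_res = 200                                 # reservoir size
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))   # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))  # recurrent reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

# drive the reservoir and collect its states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)
    states[t] = x

# ridge-regression readout mapping reservoir states to the target
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ y)
y_pred = states @ W_out
print("training RMSE:", np.sqrt(np.mean((y_pred - y) ** 2)))
```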
The process of number symbolization is assumed to be critically influenced by the acquisition of so-called verbal number skills (e.g., verbally reciting the number chain and naming Arabic numerals). For the acquisition of these verbal number skills, verbal and visuospatial skills are discussed as contributing factors. In this context, children’s verbal number skills have been found to be associated with their concurrent spatial language skills, such as mastery of verbal descriptions of spatial position (e.g., in front of, behind). In a longitudinal study with three measurement times (T1, T2, T3) at intervals of about 6 months, we evaluated the predictive role of preschool children’s (mean age at T1: 3 years and 10 months) spatial language skills for the acquisition of verbal number skills. Children’s spatial language skills at T2 significantly predicted their verbal number skills at T3 when controlling for influences of important covariates such as vocabulary knowledge. In addition, further analyses replicated previous results indicating that children’s spatial language skills at T2 were associated with their verbal number skills at T2. Exploratory analyses further revealed that children’s verbal number skills at T1 predicted their spatial language at T2. Results suggest that better spatial language skills at the age of 4 years facilitate the future acquisition of verbal number skills.
Numerical magnitude information is assumed to be spatially represented in the form of a mental number line defined with respect to a body-centred, egocentric frame of reference. In this context, spatial language skills, such as mastery of verbal descriptions of spatial position (e.g., in front of, behind, to the right/left), have been proposed to be relevant for grasping spatial relations between numerical magnitudes on the mental number line. We examined 4- to 5-year-olds’ spatial language skills in tasks that allow responses in egocentric and allocentric frames of reference, as well as their relative understanding of numerical magnitude (assessed by a number word comparison task). In addition, we evaluated influences of children’s absolute understanding of numerical magnitude, assessed by their number word comprehension (montring different numbers using their fingers), and of their knowledge of numerical sequences (determining predecessors and successors as well as identifying missing dice patterns of a series). Results indicated that, when considering responses that corresponded to the egocentric perspective, children’s spatial language was associated significantly with their relative numerical magnitude understanding, even after controlling for covariates such as children’s SES, mental rotation skills, and also absolute magnitude understanding or knowledge of numerical sequences. This suggests that the use of egocentric reference frames in spatial language may facilitate the spatial representation of numbers along a mental number line and thus seems important for preschoolers’ relative understanding of numerical magnitude.
The highly structured nature of the educational sector demands effective policy mechanisms close to the needs of the field. That is why evidence-based policy making, endorsed by the European Commission under Erasmus+ Key Action 3, aims to align the domains of policy and practice. Against this background, this article addresses two issues: first, that there is a vertical gap in the translation of higher-level policies to local strategies and regulations; second, that there is a horizontal gap between educational domains regarding the policy awareness of individual players. This was analyzed in quantitative and qualitative studies with domain experts from the fields of virtual mobility and teacher training. From our findings, we argue that the combination of both gaps puts the academic bridge from secondary to tertiary education at risk, including the associated knowledge proficiency levels. We discuss the role of digitalization in the academic bridge by asking the question: what value do the involved stakeholders expect from educational policies? As a theoretical basis, we rely on the model of value co-creation for and by stakeholders. We describe the instruments used, along with the results obtained and the proposed benefits. Moreover, we reflect on the methodology applied, and we finally derive recommendations for future academic bridge policies.
Extreme habitats often harbor specific communities that differ substantially from those of non-extreme habitats. In many cases, these communities are characterized by archaea, bacteria and protists, whereas the number of species of metazoans and higher plants is relatively low. In extremely acidic habitats, mostly prokaryotes and protists thrive, along with only very few metazoans, for example rotifers. Since many studies have investigated the physiology and ecology of individual species, there is still a gap in research on direct trophic interactions among extremophiles. To fill this gap, we experimentally studied the trophic interactions between a predatory protist (Actinophrys sol, Heliozoa) and its prey, the rotifers Elosa woralli and Cephalodella sp., the ciliate Urosomoida sp. and the mixotrophic protist Chlamydomonas acidophila (a green phytoflagellate, Chlorophyta). We found substantial predation pressure on all animal prey. High densities of Chlamydomonas acidophila reduced the predation impact on the rotifers by interfering with the feeding behaviour of A. sol. These trophic relations represent a natural case of intraguild predation, with Chlamydomonas acidophila being the common prey and the rotifers/ciliate and A. sol being the intraguild prey and predator, respectively. We further studied this intraguild predation along a resource gradient using Cephalodella sp. as the intraguild prey. The interactions among the three species led to an increase in relative rotifer abundance with increasing resource (Chlamydomonas) densities. By applying a series of laboratory experiments, we revealed the complexity of trophic interactions within a natural extremophilic community.
It is well-known that individuals with aphasia (IWA) have difficulties understanding sentences that involve non-adjacent dependencies, such as object relative clauses or passives (Caplan, Baker, & Dehaut, 1985; Caramazza & Zurif, 1976). A large body of research supports the view that IWA’s grammatical system is intact, and that comprehension difficulties in aphasia are caused by a processing deficit, such as a delay in lexical access and/or in syntactic structure building (e.g., Burkhardt, Piñango, & Wong, 2003; Caplan, Michaud, & Hufford, 2015; Caplan, Waters, DeDe, Michaud, & Reddy, 2007; Ferrill, Love, Walenski, & Shapiro, 2012; Hanne, Burchert, De Bleser, & Vasishth, 2015; Love, Swinney, Walenski, & Zurif, 2008). The main goal of this dissertation is to computationally investigate the processing sources of comprehension impairments in sentence processing in aphasia.
In this work, prominent theories of processing deficits coming from the aphasia literature are implemented within two cognitive models of sentence processing: the activation-based model (Lewis & Vasishth, 2005) and the direct-access model (McElree, 2000). These models are two different expressions of the cue-based retrieval theory (Lewis, Vasishth, & Van Dyke, 2006), which posits that sentence processing is the result of a series of iterative retrievals from memory. These two models have been widely used to account for sentence processing in unimpaired populations in multiple languages and linguistic constructions, sometimes interchangeably (Parker, Shvartsman, & Van Dyke, 2017). However, Nicenboim and Vasishth (2018) showed that when both models are implemented in the same framework and fitted to the same data, the models yield different results, because the models assume different data-generating processes. Specifically, the models hold different assumptions regarding the retrieval latencies. The second goal of this dissertation is to compare these two models of cue-based retrieval, using data from individuals with aphasia and control participants. We seek to answer the following question: Which retrieval mechanism is more likely to mediate sentence comprehension?
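To make the difference in assumed data-generating processes concrete, here is a toy generative sketch of the two retrieval accounts; this is an illustration of the general logic only, not the dissertation's Bayesian implementation, and all distributions and parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)

def activation_based(n, mu_correct=5.7, mu_distractor=5.9, sigma=0.3):
    """Race of noisy accumulators: the fastest finishing candidate determines
    both the response and its latency, so latencies depend on what is retrieved."""
    t_correct = rng.lognormal(mu_correct, sigma, n)
    t_distractor = rng.lognormal(mu_distractor, sigma, n)
    latency = np.minimum(t_correct, t_distractor)
    correct = t_correct < t_distractor
    return latency, correct

def direct_access(n, theta=0.8, p_backtrack=0.6, mu=5.7, sigma=0.3, backtrack=400.0):
    """Retrieval time does not depend on what is retrieved; with probability theta
    the correct item is accessed, and misretrievals can be repaired through a
    slower backtracking process (yielding slow correct responses)."""
    latency = rng.lognormal(mu, sigma, n)
    got_correct = rng.random(n) < theta
    repaired = (~got_correct) & (rng.random(n) < p_backtrack)
    latency = latency + np.where(repaired, backtrack, 0.0)
    correct = got_correct | repaired
    return latency, correct

for name, model in [("activation-based", activation_based), ("direct-access", direct_access)]:
    lat, ok = model(10_000)
    print(name, "accuracy:", ok.mean().round(2),
          "mean RT correct:", lat[ok].mean().round(0),
          "mean RT error:", lat[~ok].mean().round(0))
```

In this caricature, the race model produces latencies that depend on which candidate wins, whereas the direct-access model draws from a single latency distribution and only slows responses when a misretrieval is repaired through backtracking.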
We model four subsets of existing data: relative clauses in English and German, and control structures and pronoun resolution in German. The online data come from either self-paced listening experiments or visual-world eye-tracking experiments. The offline data come from a complementary sentence-picture matching task performed at the end of the trial in both types of experiments. The two competing models of retrieval are implemented in the Bayesian framework, following Nicenboim and Vasishth (2018). In addition, we present a modified version of the direct-access model that, we argue, is more suitable for individuals with aphasia.
This dissertation presents a systematic approach to implement and test verbally stated theories of comprehension deficits in aphasia within cognitive models of sentence processing. The conclusions drawn from this work are that (a) the original direct-access model (as implemented here) cannot account for the full pattern of data from individuals with aphasia because it cannot account for slow misinterpretations; and (b) an activation-based model of retrieval can account for sentence comprehension deficits in individuals with aphasia by assuming a delay in syntactic structure building, and noise in the processing system. The overall pattern of results supports an activation-based mechanism of memory retrieval, in which a combination of processing deficits, namely slow syntax and intermittent deficiencies, causes comprehension difficulties in individuals with aphasia.
The prevalence of obesity in the pediatric population has become a major public health issue. Indeed, the dramatic increase of this epidemic causes multiple and harmful consequences. Physical activity, particularly physical exercise, remains the cornerstone of interventions against childhood obesity. Given the conflicting findings in the relevant literature addressing the effects of exercise on adiposity and physical fitness outcomes in obese children and adolescents, the effect of duration-matched concurrent training (CT) [50% resistance training (RT) and 50% high-intensity interval training (HIIT)] on body composition and physical fitness in obese youth remains to be elucidated. Thus, the purpose of this study was to examine the effects of 9 weeks of CT compared to RT or HIIT alone on body composition and selected physical fitness components in healthy sedentary obese youth. Out of 73 participants, only 37 [14 males and 23 females; age 13.4 ± 0.9 years; body mass index (BMI): 31.2 ± 4.8 kg·m-2] were eligible and were randomized into three groups: HIIT (n = 12): 3–4 sets × 12 runs at 80–110% peak velocity, with 10-s passive recovery between bouts; RT (n = 12): 6 exercises; 3–4 sets × 10 repetition maximum (RM); and CT (n = 13): 50% serial completion of RT and HIIT. CT promoted significantly greater gains than HIIT and RT in body composition (p < 0.01, d = large), 6-min walking test distance (6MWT distance) and 6MWT-VO2max (p < 0.03, d = large). In addition, CT showed substantially greater improvements than HIIT in the medicine ball throw test (20.2 vs. 13.6%, p < 0.04, d = large). On the other hand, RT exhibited significantly greater gains in relative handgrip strength (p < 0.03, d = large) and CMJ (p < 0.01, d = large) than HIIT and CT. CT promoted greater benefits for fat and body mass loss and for cardiorespiratory fitness than the HIIT or RT modalities. This study provides important information for practitioners and therapists on the application of effective exercise regimes with obese youth to induce significant and beneficial body composition changes. The applied CT program and the respective programming parameters in terms of exercise intensity and volume can be used by practitioners as an effective exercise treatment to fight the overweight and obesity pandemic in youth.
Localisation of deformation is a ubiquitous feature of continental rift dynamics and is observed across drastically different time and length scales. This thesis comprises one experimental and two numerical modelling studies investigating strain localisation (1) in a ductile shear zone induced by a material heterogeneity and (2) in an active continental rift setting. The studies are related by the fact that weakening mechanisms on the crystallographic and grain-size scale enable bulk rock weakening, which fundamentally enables the formation of shear zones and continental rifts, and hence plate tectonics. Aiming to investigate the mechanisms controlling the initiation and evolution of a shear zone, the torsion experiments of the experimental study were conducted in a Paterson-type apparatus on strong Carrara marble cylinders containing a weak, planar Solnhofen limestone inclusion. Using state-of-the-art numerical modelling software, the torsion experiments were simulated to answer questions regarding the localisation process, such as the stress distribution or the impact of rheological weakening. 2D numerical models were also employed to integrate geophysical and geological data in order to explain the characteristic tectonic evolution of the Southern and Central Kenya Rift. Key elements of the numerical tools are a randomized initial strain distribution and the use of strain softening. During the torsion experiments, deformation begins to localise at the limestone inclusion tips in a process zone, which propagates into the marble matrix with increasing deformation until a ductile shear zone is established. Minor indicators of coexisting brittle deformation are found close to the inclusion tip and are presumed to slightly facilitate strain localisation alongside the dominant ductile deformation processes. The 2D numerical model of the torsion experiment successfully predicts local stress concentration and strain rate amplification ahead of the inclusion, in first-order agreement with the experimental results. A simple linear parametrization of strain weakening enables a highly accurate reproduction of phenomenological aspects of the observed weakening. The torsion experiments suggest that loading conditions do not affect strain localisation during high-temperature deformation of multiphase material with high viscosity contrasts. A numerical simulation can provide a way of analysing the process zone evolution virtually and of extending the examinable frame. Furthermore, the nested structure and anastomosing shape of an ultramylonite band were mimicked with an additional second softening step. Rheological weakening is necessary to establish a shear zone in a strong matrix around a weak inclusion and for ultramylonite formation.
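A common linear strain-weakening parametrization of the kind alluded to above looks as follows; this is a generic textbook form, and the specific quantities weakened in the thesis and their strain thresholds may differ:

```latex
% Linear weakening of a rheological parameter X (e.g. a friction coefficient or a
% viscosity prefactor) from its initial value X_0 to a weakened value X_w as the
% accumulated strain \varepsilon grows between the thresholds \varepsilon_0 and \varepsilon_1:
X(\varepsilon) \;=\; X_0 - \bigl(X_0 - X_w\bigr)\,
  \min\!\Bigl(1,\; \max\!\bigl(0,\; \tfrac{\varepsilon - \varepsilon_0}{\varepsilon_1 - \varepsilon_0}\bigr)\Bigr)
```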
Such strain weakening laws are also incorporated into the numerical models of the Southern and Central Kenya Rift that capture its characteristic tectonic evolution. A three-stage early rift evolution is suggested, which starts with (1) the accommodation of strain by a single border fault and flexure of the hanging-wall crust, after which (2) faulting in the hanging-wall and the basin centre increases, before (3) the early-stage asymmetry is lost and basinward localisation of deformation occurs. Along-strike variability of rifts can be produced by modifying the initial random noise distribution. In summary, the three studies address selected aspects of the broad range of mechanisms and processes that fundamentally enable the deformation of rock and govern the localisation patterns across the scales. In addition to the aforementioned results, the first and second manuscripts combined demonstrate a procedure to find new, or improve on existing, numerical formulations for specific rheologies and their dynamic weakening. These formulations are essential in addressing rock deformation from the grain to the global scale. This is exemplified by the third study of this thesis, where geodynamic controls on the evolution of a rift were examined by integrating geological and geophysical data into a numerical model.
This thesis analyzes multiple coordination challenges that arise with the digital transformation of public administration in federal systems, illustrated by four case studies in Germany. I make various observations within a multi-level system and provide an in-depth analysis. Theoretical explanations from both federalism research and neo-institutionalism are utilized to explain the findings of this empirically driven work. The four articles provide a holistic picture of the German case and elucidate its role as a digital government laggard. Their foci range from the macro via the meso to the micro level of public administration, differentiating between the governance and the tool dimension of digital government.
The first article shows how multi-level negotiations lead to expensive but eventually satisfying solutions for the involved actors, creating a subtle balance between centralization and decentralization. The second article identifies legal, technical, and organizational barriers to cross-organizational service provision, highlighting the importance of inter-organizational and inter-disciplinary exchange and of both a common language and trust. Institutional change and its effects on the micro level, on citizens and the employees in local one-stop shops, mark the focus of the third article, bridging the gap between reforms and the administrative reality on the local level. The fourth article looks at the citizens’ perspective on digital government reforms, their expectations, use and satisfaction. In this vein, the thesis provides a detailed account of the importance of understanding the digital divide and hence of the necessity of reaching out to the different recipients of digital government reforms. Where feasible, I draw conclusions for other federal systems from the factors identified as causes of Germany’s shortcomings and derive reform potential from them. This opens up a new perspective on digital government and its coordination challenges in federal contexts.
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification (2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of up-converting incident photons from the near-infrared (NIR) part of the electromagnetic spectrum into higher-energy photons. These photons are re-emitted in the range of visible (Vis) and even ultraviolet (UV) light. The frequency upconversion (UC) process is realized with nanocrystals doped with trivalent lanthanoid ions (Ln(III)). The Ln(III) ions provide the electronic (excited) states forming a ladder-like electronic structure for the Ln(III) electrons in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote one Ln(III) electron into higher excited electronic states. One high-energy photon is then emitted during the radiative relaxation of this electron back into the electronic ground state of the Ln(III) ion; the excited-state electron thus results from the previous absorption of at least two low-energy photons.
The UC process is very interesting in the biological/medical context. Biological samples (like organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements attached to the UCNPs. Additionally, "theranostics" becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain parameters of the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, as well as the availability of functional groups to introduce bio-recognition elements quickly and easily. The UCNPs used in this work were prepared via a solvothermal decomposition synthesis, yielding particles with NaYF4 or NaGdF4 as the host lattice. They were doped with the Ln(III) ions Yb3+ and Er3+, which is only one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of expectations when compared to the bulk material (particles of at least μm size). The core and shell structures are not clearly separated from each other, which is a recurring topic in the literature. Instead, there is a transition layer between the core and the shell, which relates to the migration of the dopants within the host lattice during the synthesis. This ion migration has been examined by time-resolved laser spectroscopy and the interlanthanoid resonance energy transfer (LRET) in the two host lattices mentioned above. The results are presented in two publications dealing with core-shell-shell structured nanoparticles. The core is doped with the LRET acceptor (either Nd3+ or Pr3+). The intermediate shell serves as an insulation shell of pure host lattice material; its thickness has been varied within one set of samples of the same composition, so that the spatial separation of LRET acceptor and donor changes. The outer shell, with the same host lattice, is doped with the LRET donor (Eu3+). The effect of the increasing insulation shell thickness is significant, although the LRET cannot be suppressed completely.
Besides the Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bio-applications. One result of this research has been published, using a promising ligand that equips the UCNP with bio-modifiable groups and has good potential for bio-medical applications. This particular ligand mimics naturally occurring mechanisms of mussel protein adhesion and of blood coagulation, which is why it encapsulates the UCNPs very effectively. At the same time, bio-functional groups are introduced. In a proof of concept, the encapsulated UCNP was coupled successfully with a dye (representative of a biomarker) and the photoluminescence properties of the system were investigated.
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this part provides a compact summary of all methods and techniques used. Bayesian inference is introduced first. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are discussed in more detail in separate chapters.
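A compact numpy sketch of Gaussian process regression with a squared-exponential covariance illustrates the core of such a toolbox; this is a generic textbook version, not the thesis's implementation, and handling data observed under a linear transformation additionally requires applying that operator to the covariance:

```python
import numpy as np

def sq_exp(x1, x2, sigma=1.0, ell=0.5):
    """Squared-exponential covariance function k(x, x')."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

# noisy observations of an unknown function (placeholder data)
rng = np.random.default_rng(3)
x_obs = np.linspace(0, 1, 15)
y_obs = np.sin(2 * np.pi * x_obs) + 0.1 * rng.standard_normal(x_obs.size)
noise = 0.1

# GP posterior mean and covariance on a prediction grid
x_pred = np.linspace(0, 1, 200)
K = sq_exp(x_obs, x_obs) + noise**2 * np.eye(x_obs.size)
K_s = sq_exp(x_pred, x_obs)
mean = K_s @ np.linalg.solve(K, y_obs)
cov = sq_exp(x_pred, x_pred) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))  # pointwise posterior uncertainty
print(mean[:3], std[:3])
```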
Part II presents two proof-of-concept studies in the field of seismology. The aim is to present the conceptual application of the introduced methods and techniques at moderate complexity. The example on traveltime tomography applies the approximation of non-linear relationships. The derivation of a covariance function using the wave equation is shown in the example of a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties has been developed.
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth’s outer core and has rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the SWARM or the CHAMP mission, support high-resolution reconstructions of the global field. From the 19th century onward, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks and sediment records from lakes and seas. These indirect records come with (in part very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
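The sequentialization relies on a Kalman-filter-like predict/update recursion; the sketch below shows a generic linear-Gaussian version only (the state, dynamics and observation operators of the actual field model are far more involved):

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update cycle of a linear-Gaussian Kalman filter.
    m, P: prior state mean/covariance; y: new observations;
    F, Q: dynamics and process noise; H, R: observation operator and noise."""
    # predict
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update with the new data
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# toy example: scalar random-walk state observed with noise
m, P = np.array([0.0]), np.array([[1.0]])
F, Q = np.eye(1), 0.01 * np.eye(1)
H, R = np.eye(1), 0.25 * np.eye(1)
for y in [0.3, 0.5, 0.4]:
    m, P = kalman_step(m, P, np.array([y]), F, Q, H, R)
print(m, P)
```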
The central result of this thesis, including all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back to 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For more recent times, ArchKalmag14k can be used to analyze features of the field while taking posterior uncertainties into account. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open source python package.
The complex hierarchical structure of bone undergoes a lifelong remodeling process, in which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts have to be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes running through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis through direct interactions with the surrounding matrix. If the LCN is acting as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes actively change their material environment. Characterization methods of materials science are used to achieve the aim of detecting traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, in which a bone of the mouse was loaded during its lifetime; 2) a healing model, in which a bone of the mouse was cut to induce a healing response; and 3) a disease model, in which the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles.
First, tibiae cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed ordered and dense LCN as well as thicker and longer mineral particles. The calcium content was unchanged between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. Here, the specific research question addressed was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clear, recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
Additionally, the question was addressed whether the LCN topology and the bone matrix material properties adapt to loading. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between the local LCN density and the local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, were stabilized with an external fixator and were allowed to heal for 21 days. Thus, the spatial variations in the LCN topology together with the mineral properties within different tissue types and their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. The calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density, between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone of the callus showed thinner mineral particles, but a higher degree of alignment in both the mineral particles and the LCN. The cortex showed the highest values for mineral particle length, thickness and degree of orientation. At the same time, the lacuna number density was 34% lower and the lacunar volume 40% smaller compared to the bony callus. The transition zone between the cortical and callus regions showed a continuous convergence of bone mineral properties and lacuna shape. Although only a few canaliculi connected the callus and the cortical region, this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types suggest that osteocytes may have an active role in the mineralization processes of healing.
A mouse model of Marfan syndrome, a disease caused by a genetic defect in the fibrillin-1 gene, was also investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long-bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Fibrillin-1 therefore appears to play a role in skeletal homeostasis, and the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. Mice with Marfan syndrome showed longer tibiae than their healthy littermates from the age of seven weeks onwards. In contrast, cortical development appeared retarded, which was observed across all measured characteristics, i.e. lower endocortical bone formation, a looser and less organized lacuno-canalicular network, lower collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture were spatially correlated with bone matrix material parameters. Although the exact mechanism is not known, these results indicate that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but should be understood as an interplay between osteocytes and the extracellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore be understood not only as a means of removing defects or adapting to external mechanical stimuli, but also of increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
Cosmic rays (CRs) are a ubiquitous and important component of astrophysical environments such as the interstellar medium (ISM) and the intracluster medium (ICM). Their plasma-physical interactions with electromagnetic fields strongly influence their transport properties. Effective models which incorporate the microphysics of CR transport are needed to study the effects of CRs on the surrounding macrophysical media. Developing such models is challenging because of the conceptual, length-scale, and time-scale separation between the microscales of plasma physics and the macroscales of the environment. Hydrodynamical theories of CR transport achieve this by capturing the evolution of the CR population in terms of statistical moments. In the well-established one-moment hydrodynamical model for CR transport, the dynamics of the entire CR population are described by a single statistical quantity such as the commonly used CR energy density. In this work, I develop a new hydrodynamical two-moment theory for CR transport that expands the well-established hydrodynamical model by including the CR energy flux as a second independent hydrodynamical quantity. I detail how this model accounts for the interaction between CRs and gyroresonant Alfvén waves. The small-scale magnetic fields associated with these Alfvén waves scatter CRs, which fundamentally alters CR transport along large-scale magnetic field lines. This leads to the effects of CR streaming and diffusion, which are both captured within the presented hydrodynamical theory. I use an Eddington-like approximation to close the hydrodynamical equations and investigate the accuracy of this closure relation by comparing it to higher-order approximations of CR transport. In addition, I develop a finite-volume scheme for the new hydrodynamical model and adapt it to the moving-mesh code Arepo. This scheme is applied in a simulation of a CR-driven galactic wind. I investigate how CRs launch the wind and perform a statistical analysis of CR transport properties inside the simulated circumgalactic medium (CGM). I show that the new hydrodynamical model can be used to explain the morphological appearance of a particular type of radio filamentary structure found inside the central molecular zone (CMZ). I argue that these harp-like features are produced by synchrotron-radiating CRs which are injected into braided magnetic field lines by a point-like source such as the stellar wind of a massive star or a pulsar. Lastly, I present the finite-volume code Blinc, which uses adaptive mesh refinement (AMR) techniques to perform simulations of radiation and magnetohydrodynamics (MHD). The mesh of Blinc is block-structured and represented in computer memory using a graph-based approach. I describe the implementation of the mesh graph and how a diffusion process is employed to achieve load balancing in parallel computing environments. Various test problems are used to verify the accuracy and robustness of the employed numerical algorithms.
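The abstract mentions that Blinc balances work across its block-structured mesh graph using a diffusion process. As a rough illustration of that general idea (not Blinc's actual implementation; the function name, coefficient, and iteration count are invented for this sketch), a first-order load diffusion over a block adjacency graph might look like this:

```python
import numpy as np

def diffuse_load(adjacency, load, alpha=0.25, n_iter=50):
    """One possible first-order diffusion scheme for balancing block loads.

    adjacency : (n, n) symmetric 0/1 matrix of neighbouring mesh blocks
    load      : (n,) work units currently assigned to each block
    alpha     : diffusion coefficient (kept small enough for stability)
    """
    load = load.astype(float).copy()
    for _ in range(n_iter):
        # flux along every edge is proportional to the load difference
        diff = load[None, :] - load[:, None]      # diff[i, j] = load_j - load_i
        flux = alpha * adjacency * diff
        load += flux.sum(axis=1)                  # net inflow per block; total load is conserved
    return load

# toy example: a chain of four mesh blocks with very uneven initial work
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(diffuse_load(A, np.array([100.0, 0.0, 0.0, 0.0])))
```

In a parallel setting, each iteration only requires nearest-neighbour exchanges between blocks, which is what makes diffusion-type schemes attractive for distributed load balancing.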
High-throughput proteomics approaches have resulted in large-scale protein–protein interaction (PPI) networks that have been employed for the prediction of protein complexes. However, PPI networks contain false-positive as well as false-negative PPIs that affect protein complex prediction algorithms. To address this issue, we propose an algorithm called CUBCO+ that: (1) employs GO semantic similarity to retain only biologically relevant interactions with a high similarity score, (2) scores false-negative edges based on link prediction approaches, and (3) incorporates the resulting scores to predict protein complexes. Through comprehensive analyses with PPIs from Escherichia coli, Saccharomyces cerevisiae, and Homo sapiens, we show that CUBCO+ performs as well as approaches that predict protein complexes based on recently introduced graph partitions into biclique-spanned subgraphs, and that it outperforms the other state-of-the-art approaches. Moreover, we illustrate that, in combination with GO semantic similarity, CUBCO+ predicts more accurate protein complexes in 36% of the cases compared to its predecessor CUBCO.
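As a loose illustration of the three-step idea described above (filter interactions by GO semantic similarity, score candidate missing edges with a link predictor, cluster the augmented network), the following sketch uses networkx. All function names, thresholds, the Jaccard-coefficient link predictor, and the connected-component "clustering" are stand-ins for illustration, not the published CUBCO+ algorithm:

```python
import networkx as nx

def filter_by_go_similarity(ppi_edges, go_sim, threshold=0.5):
    """Keep only interactions whose GO semantic similarity exceeds a threshold.
    go_sim is assumed to map frozenset({u, v}) -> similarity score."""
    return [(u, v) for u, v in ppi_edges
            if go_sim.get(frozenset((u, v)), 0.0) >= threshold]

def score_missing_edges(graph):
    """Score non-edges with a simple link predictor (Jaccard coefficient here)."""
    return {(u, v): s for u, v, s in nx.jaccard_coefficient(graph)}

def predict_complexes(ppi_edges, go_sim, sim_threshold=0.5, link_threshold=0.3):
    g = nx.Graph(filter_by_go_similarity(ppi_edges, go_sim, sim_threshold))
    # add high-scoring predicted edges to compensate for false negatives
    for (u, v), score in score_missing_edges(g).items():
        if score >= link_threshold:
            g.add_edge(u, v)
    # stand-in clustering step: connected components as candidate complexes
    return [set(c) for c in nx.connected_components(g)]
```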
This thesis argues that Hegel's Science of Logic attempts to take seriously a conception of absoluteness according to which there can be nothing outside the absolute. This already becomes apparent at the beginning of the Logic: if nothing can exist outside the absolute, then the beginning, too, must not lie outside the absolute. Consequently, the beginning can only be made with the absolute. Positing the beginning as absolute, however, is at the same time a test of the beginning's absoluteness. This test the beginning cannot pass, for it lies in the nature of a beginning to be only a beginning and not the whole, and therefore not the absolute. The beginning is furthest removed from being the whole and must consequently be regarded as the least absolute within the Logic. It is thus both: a beginning with the absolute and a beginning with the least absolute. The Logic contradicts itself already in its beginning. From this contradiction it must free itself, and this liberation drives the movement away from the beginning, generating the progression of the Logic. The initial determination sublates itself and passes over into its successor determination. The successor determination is in turn posited as absolute, likewise fails to live up to this positing, and sublates itself into its own successor. Every determination that follows the beginning passes through this movement of being posited as absolute, failing at it, and sublating itself, until, at the very end of the Logic, this very movement is recognized as that which alone is capable of satisfying the claim to absoluteness. For if every determination is subject to this movement, then there is nothing outside this movement, and it must therefore be the absolute that was sought.
On its way toward the true meaning of the absolute, the Logic repeatedly returns to the determination of its beginning in order to recover presuppositions that had to be made in connection with that beginning. For the recovery of these presuppositions, the following passages will be of interest: the transition into the logic of essence, the transition into the logic of the concept, and the final chapter. For at the very last, in its end, the Logic returns once more into its beginning. Accordingly, it can be said with Hegel: the first is also the last, and the last is also the first.
The orientation internship (Orientierungspraktikum, OP) / integrated introductory internship (Integriertes Eingangspraktikum, IEP) is the first school-based practical phase in the Potsdam model of teacher education. This (observation) internship, taken in the first or second semester of the bachelor's programme, is intended to initiate the change of perspective from the pupil's to the teacher's role and to prompt reflection on the teaching profession, the institution of school, and classroom instruction from an observer's perspective. This contribution discusses first results of the quantitative and qualitative analysis of the OP/IEP within the PSI project "Kompetenzerwerb in Schulpraktischen Studien – Spiralcurriculum". Based on the demonstrated effects, including the activation of reflective skills and the promotion of personal development, and on the students' needs identified in the analysis, recommendations are finally formulated along two lines of discussion, "conception and framework for action" and "support and supervision", which provide important impulses for the further development of the internship and its accompanying courses.
The subject-didactic day internships (fachdidaktische Tagespraktika, FTP) form a core element of the Potsdam model of teacher education, as they are assigned a "study-guiding function". But how is this function realized in the individual subjects at the University of Potsdam, and what consequences arise for the training of student teachers? To answer this question, the anchoring of the FTP in all study regulations was analysed with respect to qualitative criteria (contents and goals, forms of examination, enrolment prerequisites) and quantitative criteria (credit points, weekly contact hours). Guideline-based interviews with the responsible subject didacticians served to investigate the concrete implementation and the relevance attributed to the internships. The aim was to combine both approaches, that is, the actually existing curricula, the individualized practices, and the subjective convictions, in order to gain an understanding of this very "study-guiding function" and subsequently to identify fields of discussion and action for the further development of the FTP.
Teaching and learning as well as administrative processes are undergoing intensive changes with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in higher education. Accordingly, scientific interest in the topic in general, as well as in specific focal points, has grown. However, there is no structured overview of AI in teaching and administration processes in higher education institutions that identifies major research topics and trends, specifies peculiarities, and develops recommendations for further action. To close this gap, this study systematizes the current scientific discourse on AI in teaching and administration in higher education institutions. The study identifies (1) an imbalance in research on AI in educational versus administrative contexts, (2) an imbalance across disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, and (4) neglected research topics and paths. In this way, it contributes a comparative analysis of AI usage in administration and in teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.
This dataset comprises tree inventories and damage assessments performed in Namibia's semi-arid Zambezi Region. Data were sampled in savannas and savanna woodlands along steep gradients of elephant population densities to capture the effects of those (and other) disturbances on individual-level and stand-level aboveground woody biomass (AGB). The dataset contains raw data on dendrometric measures and processed data on specific wood density (SWD), woody aboveground biomass, and biomass losses through disturbance impacts. Allometric proxies (height, canopy diameters, and in adult trees also stem circumferences) were recorded for n = 6,179 tree and shrub individuals. Wood samples were taken for each encountered species to measure specific wood density.
These measurements have been used to estimate woody aboveground biomass via established allometric models, advanced through our improved methodologies and workflows that accounted for tree and shrub architecture shaped by disturbance impacts. To this end, we performed a detailed damage assessment on each woody individual in the field. In addition to estimations of standing biomass, our new method also delivered data on biomass losses to different disturbance agents (elephants, fire, and others) on the level of plant individuals and stands.
The data presented here have been used within a study published with Ecological Indicators (Kindermann et al., 2022) to evaluate the benefits of our improved methodology in comparison to a standard reference method of aboveground biomass estimations. Additionally, it has been employed in a study on carbon storage and sequestration in vegetation and soils (Sandhage-Hofmann et al., 2021).
The raw data of the dendrometric measurements can be subjected to other available allometric models for biomass estimation. The processed data can be used to analyze disturbance impacts on woody aboveground biomass, or for regional carbon storage estimates. The data on species-specific wood density can be applied to other dendrometric datasets to (re-)estimate biomass through allometric models that require wood density, and can further be used for plant functional trait analyses.
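To illustrate how the raw dendrometric measures and the specific wood density feed into an allometric biomass model, the sketch below uses the widely cited pantropical form AGB = a(ρD²H)^b with the Chave et al. (2014) coefficients as illustrative defaults; the models and coefficients actually applied to this savanna dataset may differ, and the example values are invented:

```python
def estimate_agb(diameter_cm, height_m, wood_density_g_cm3, a=0.0673, b=0.976):
    """Generic power-law allometric model of the form AGB [kg] = a * (rho * D^2 * H) ** b.

    diameter_cm        : stem diameter in cm
    height_m           : tree height in m
    wood_density_g_cm3 : specific wood density in g/cm^3
    The default coefficients follow a common pantropical parameterization and
    should be replaced by values calibrated for the vegetation type at hand.
    """
    return a * (wood_density_g_cm3 * diameter_cm ** 2 * height_m) ** b

# example: a shrub-sized individual with invented dimensions
print(round(estimate_agb(diameter_cm=12.0, height_m=4.5, wood_density_g_cm3=0.65), 1), "kg")
```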
Cutting-edge hyperscanning methods have led to a paradigm shift in social neuroscience. They allow researchers to measure the dynamic mutual alignment of neural processes between two or more individuals in naturalistic contexts. The ever-growing interest in hyperscanning research calls for the development of transparent and validated data analysis methods to further advance the field. We have developed and tested a dual electroencephalography (EEG) analysis pipeline, DEEP. Following preprocessing of the data, DEEP allows users to calculate Phase Locking Values (PLVs) and cross-frequency PLVs as indices of inter-brain phase alignment of dyads, as well as time-frequency responses and EEG power for each participant. The pipeline also includes scripts to control for spurious correlations. Our goal is to contribute to open and reproducible science practices by making DEEP publicly available together with an example mother-infant EEG hyperscanning dataset.
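For readers unfamiliar with the phase locking value, a minimal stand-alone computation (band-pass filtering, Hilbert phases, and the length of the mean phase-difference vector) might look like the following. This is a conceptual sketch, not code from the DEEP pipeline, and the frequency band is chosen arbitrarily:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(6.0, 9.0), order=4):
    """Phase locking value between two equally long signals in a frequency band.

    x, y : 1-D arrays (e.g. one EEG channel per participant of a dyad)
    fs   : sampling rate in Hz
    band : pass band in Hz
    """
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# two noisy signals sharing a 7 Hz component should yield a PLV close to 1
fs = 250
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 7 * t + 0.8) + 0.5 * np.random.randn(t.size)
print(plv(x, y, fs))
```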
Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
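The core of differentially private training with a Gaussian mechanism is per-example gradient clipping followed by calibrated noise. The sketch below shows that aggregation step in isolation, as a conceptual illustration; it is not the paper's actual training setup, its federated orchestration, or its Rényi privacy accounting:

```python
import numpy as np

def dp_average_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Differentially private gradient aggregation in the DP-SGD style.

    per_example_grads : (batch, dim) array of raw per-example gradients
    clip_norm         : L2 bound C applied to every example's gradient
    noise_multiplier  : sigma; Gaussian noise with standard deviation sigma * C is added
    """
    rng = rng if rng is not None else np.random.default_rng()
    # clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # add Gaussian noise to the sum, then average over the batch
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=per_example_grads.shape[1]
    )
    return noisy_sum / per_example_grads.shape[0]
```

The privacy budget ε is then obtained separately from the noise multiplier, batch sampling rate, and number of steps via a privacy accountant; that bookkeeping is omitted here.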
In recent years, the amicus curiae, rooted in the Anglo-American legal tradition, has found its way, albeit in differing forms, into the administrative jurisdictions of Germany and France. From a comparative perspective, the French code of administrative justice proves to be progressive in this respect, since, in contrast to the German legal situation, the procedural instrument is already codified in positive law there. So far, however, this progressiveness has not noticeably translated into the practice of third-party interventions: amicus curiae briefs remain a rarity in both countries and across all levels of administrative jurisdiction.
Since no generalizations about this legal practice are therefore permissible, an analysis of the possible functional role of such amicus curiae briefs can rely only on theoretical considerations. On this basis, an informational function vis-à-vis the court with regard to questions of fact and law can clearly be affirmed. The procedural mechanism is also likely to hold an additional, albeit not democratic, potential for legitimizing judicial decisions: by enabling societal participation, and thereby the embedding of administrative court proceedings in their respective social context, it can contribute to increasing the social acceptance of a judicial power that is coming under growing pressure to justify itself.
Wheat alpha-amylase/trypsin inhibitors (ATIs) remain a subject of interest in light of recent findings implicating them in non-celiac wheat sensitivity (NCWS). Their role in this disorder is still unclear, and the limited availability of pure ATI molecules is one of the obstacles to further study. In this work, a simplified approach based on the successive fractionation of ATI extracts by reverse-phase and ion-exchange chromatography was developed. ATIs were first extracted from wheat flour using a combination of Tris buffer and chloroform/methanol methods. The separation of the extracts on a C18 column generated two main fractions of interest, F1 and F2. Response surface methodology with a Doehlert design allowed the operating parameters of the strong anion-exchange chromatography to be optimized. Finally, the seven major wheat ATIs, namely P01083, P17314, P16850, P01085, P16851, P16159, and P83207, were recovered with purity levels (according to targeted LC-MS/MS analysis) of 98.2 ± 0.7, 98.1 ± 0.8, 97.9 ± 0.5, 95.1 ± 0.8, 98.3 ± 0.4, 96.9 ± 0.5, and 96.2 ± 0.4%, respectively. MALDI-TOF-MS analysis revealed single peaks for each of the pure fractions, and the mass analysis yielded deviations of 0.4, 1.9, 0.1, 0.2, 0.2, 0.9, and 0.1% between the theoretical and the determined masses of P01083, P17314, P16850, P01085, P16851, P16159, and P83207, respectively. Overall, the study established an efficient purification process for the most important wheat ATIs. This paves the way for further in-depth investigation of the ATIs to gain more knowledge about their involvement in NCWS and to allow their absolute quantification in wheat samples.
These days design thinking is no longer a "new approach". Among practitioners as well as academics, interest in the topic has gathered pace over the last two decades. However, opinions are divided over the longevity of the phenomenon: whether design thinking is merely "old wine in new bottles", a passing trend, or still evolving as it spreads to an increasing number of organizations and industries. Despite the growing relevance and diffusion of design thinking, knowledge of its actual status quo in organizations remains scarce. With a new study, the research team of Prof. Uebernickel and Stefanie Gerken investigates temporal developments and changes in design thinking practices in organizations over the past six years, comparing the results of the 2015 "Parts without a whole" study with current practices and future developments. Companies of all sizes and from different parts of the world participated in the survey. The findings from qualitative interviews with experts, i.e., people with years of experience in design thinking, were cross-checked against the results of an exploratory analysis of the survey data. This analysis uncovers significant variances and similarities in how design thinking is interpreted and applied in businesses.
Stellar interferometry is the only method in observational astronomy for obtaining the highest-resolution images of astronomical targets. The method is based on combining light from two or more separate telescopes to obtain the complex visibility, which contains information about the brightness distribution of an astronomical source. The applications of stellar interferometry have made significant contributions to exciting research areas of astronomy and astrophysics, including the precise measurement of stellar diameters, imaging of stellar surfaces, observations of circumstellar disks around young stellar objects, tests of the predictions of Einstein's general relativity at the Galactic Center, and the direct search for exoplanets, to name a few. One important related technique is aperture masking interferometry, pioneered in the 1960s, which uses a mask with holes at the re-imaged pupil of the telescope, where the light from the holes is combined using the principles of stellar interferometry. While this can increase the resolution, it comes with a disadvantage: due to the finite size of the holes, the majority of the starlight (typically > 80%) is lost at the mask, limiting the signal-to-noise ratio (SNR) of the output images. This restriction of aperture masking to bright targets can be avoided using pupil remapping interferometry, a technique combining aperture masking interferometry with advances in photonic technologies using single-mode fibers. Owing to their inherent spatial filtering properties, the single-mode fibers can be placed at the focal plane of the re-imaged pupil, allowing the whole pupil of the telescope to be utilized to produce high-dynamic-range as well as high-resolution images. Thus, pupil remapping interferometry is one of the most promising application areas in the emerging field of astrophotonics.
At the heart of an interferometric facility is a beam combiner whose primary function is to combine light to obtain high-contrast fringes. A beam combiner can be as simple as a beam splitter or an anamorphic lens combining light from two apertures (or telescopes), or as complex as a cascade of beam splitters and lenses combining light from more than two apertures. With the rise of astrophotonics, however, interferometric facilities across the globe are increasingly employing photonic technologies, using single-mode fibers or integrated optics (IO) chips as an efficient way to combine light from several apertures. The state-of-the-art instrument GRAVITY at the Very Large Telescope Interferometer (VLTI) uses an IO-based beam combiner reaching a visibility accuracy of better than 0.25%, roughly 50 times more precise than what was achieved a few decades ago.
Therefore, in the context of IO-based components for applications in stellar interferometry, this Thesis describes the work towards the development of a three-dimensional (3-D) IO device: a monolithic astrophotonic component containing both the pupil remappers and a discrete beam combiner (DBC). In this work, the pupil remappers are 3-D single-mode waveguides in a glass substrate that collect light from the re-imaged pupil of the telescope and feed it to a DBC, where the combination takes place. The DBC is a lattice of 3-D single-mode waveguides which interact through evanescent coupling. By observing the output powers of the single-mode waveguides of the DBC, the visibilities are retrieved using a calibrated transfer matrix U of the device.
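Conceptually, retrieving visibilities from a calibrated matrix can be posed as a linear inversion of the measured output powers. The sketch below assumes a visibility-to-power style matrix whose columns are ordered as photometric fluxes followed by the real and imaginary parts of the mutual coherences; this ordering, the function name, and the plain least-squares solver are illustrative assumptions, not the calibration procedure used in the Thesis:

```python
import numpy as np

def retrieve_visibilities(output_powers, U, n_inputs=4):
    """Recover fringe visibilities from DBC output powers via a calibrated matrix.

    output_powers : (n_outputs,) measured powers of the single-mode outputs
    U             : (n_outputs, n_terms) calibrated matrix; columns assumed ordered as
                    [flux_1..flux_N, Re(Gamma_ij)..., Im(Gamma_ij)...]
    """
    x, *_ = np.linalg.lstsq(U, output_powers, rcond=None)
    fluxes = x[:n_inputs]
    n_base = n_inputs * (n_inputs - 1) // 2
    re = x[n_inputs:n_inputs + n_base]
    im = x[n_inputs + n_base:n_inputs + 2 * n_base]
    gammas = re + 1j * im
    # normalize each mutual coherence by the geometric mean of the two fluxes
    vis, k = [], 0
    for i in range(n_inputs):
        for j in range(i + 1, n_inputs):
            vis.append(np.abs(gammas[k]) / np.sqrt(fluxes[i] * fluxes[j]))
            k += 1
    return np.array(vis)
```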
The feasibility of retrieving visibilities with a DBC had already been studied theoretically and experimentally in the literature, but only in laboratory tests with monochromatic light sources. A part of this work therefore extends these studies by investigating the response of a 4-input DBC to a broad-band light source. Hence, the objectives of this Thesis are the following: 1) to design an IO device for broad-band light operation such that accurate and precise visibilities can be retrieved experimentally in the astronomical H-band (1.5-1.65 μm), and 2) to validate the DBC as a possible beam combination scheme for future interferometric facilities through on-sky testing at the William Herschel Telescope (WHT).
This work consisted of designing three different 3-D IO devices. One of the popular methods for fabricating 3-D photonic components in a glass substrate is ultra-fast laser inscription (ULI). The manufacturing of the designed devices was therefore outsourced to Politecnico di Milano as part of an iterative fabrication process using their state-of-the-art ULI facility. The devices were then characterized using a 2-beam Michelson interferometric setup, obtaining both monochromatic and polychromatic visibilities. The retrieved visibilities for all devices were in good agreement with the predictions of DBC simulations, which confirms both the repeatability of the ULI process and the stability of the Michelson setup, thus fulfilling the first objective.
The best-performing device was then selected for pupil remapping at the WHT, using a different optical setup consisting of a deformable mirror and a microlens array. The device successfully collected stellar photons from Vega and Altair. The visibilities were retrieved using a previously calibrated transfer matrix U but showed significant deviations from the expected results. Based on the analysis of comparable simulations, it was found that these deviations were primarily caused by the limited SNR of the stellar observations, thus constituting a first step towards the fulfillment of the second objective.
Fiber-based microfluidics has undergone many innovative developments in recent years, with exciting examples of portable, cost-effective and easy-to-use detection systems already being used in diagnostic and analytical applications. Legionella in water samples pose a serious risk as human pathogens. Infection occurs through the inhalation of aerosols containing Legionella cells and can cause severe pneumonia that may even be fatal. In the case of Legionella contamination of water-bearing systems or of Legionella infection, it is essential to find the source of the contamination as quickly as possible to prevent further infections. In drinking-water, industrial-water and wastewater monitoring, the culture-based method is still the most commonly used technique to detect Legionella contamination. New, innovative approaches are needed to improve on this laboratory-dependent determination, its long analysis times of 10-14 days, and the inaccuracy of values measured in colony-forming units (CFU). In all areas of application, for example in public, commercial or private facilities, rapid and precise analysis is required, ideally on site.
In this PhD thesis, all individual steps required for a rapid DNA-based detection of Legionella were developed and characterized on a fiber-based miniaturized platform. In a first step, a fast, simple and device-independent chemical lysis of the bacteria and extraction of genomic DNA was established. Subsequently, different materials were investigated with respect to their non-specific DNA retention. Glass fiber filters proved to be particularly suitable, as they allow recovery of the DNA sample from the fiber material in combination with dedicated buffers and exhibit low autofluorescence, which is important for fluorescence-based readout.
A fiber-based electrophoresis unit was developed to migrate different oligonucleotides within a fiber matrix by application of an electric field. A particular advantage over lateral flow assays is the targeted movement, even after the fiber is saturated with liquid. For this purpose, the entire process of fiber selection, fiber chip patterning, combination with printed electrodes, and testing of retention and migration of different DNA samples (single-stranded, double-stranded and genomic DNA) was performed. DNA could be pulled across the fiber chip in an electric field of 24 V/cm within 5 minutes, remained intact and could be used for subsequent detection assays e.g., polymerase chain reaction (PCR) or fluorescence in situ hybridization (FISH). Fiber electrophoresis could also be used to separate DNA from other components e.g., proteins or cell lysates or to pull DNA through multiple layers of the glass microfiber. In this way, different fragments experienced a moderate, size-dependent separation. Furthermore, this arrangement offers the possibility that different detection reactions could take place in different layers at a later time. Electric current and potential measurements were collected to investigate the local distribution of the sample during migration. While an increase in current signal at high concentrations indicated the presence of DNA samples, initial experiments with methylene blue stained DNA showed a temporal sequence of signals, indicating sample migration along the chip.
For the specific detection of Legionella DNA, FISH-based detection with a molecular beacon probe was tested on the glass microfiber. A specific region within the 16S rRNA gene of Legionella spp. served as the target. For this detection, suitable reaction conditions and a readout unit first had to be established. Subsequently, the sensitivity of the probe was tested with the reverse-complementary target sequence, and its specificity with several DNA fragments that differed from the target sequence. Compared to other DNA sequences of similar length that also occur in Legionella pneumophila, only the target DNA was specifically detected on the glass microfiber. If a single base is exchanged, or if two bases are changed, the probe can no longer distinguish between targets and non-targets. An analysis with this specificity can be achieved with other methods such as melting point determination, as briefly indicated here. The molecular beacon probe could be dried on the glass microfiber and stored at room temperature for more than three months, after which it was still capable of detecting the target sequence. Finally, the feasibility of fiber-based FISH detection of genomic Legionella DNA was tested. Without further processing, the probe was unable to detect its target sequence in the complex genomic DNA. However, after the selection and application of appropriate restriction enzymes, specific detection of Legionella DNA against other aquatic pathogens with similar fragment patterns, such as Acinetobacter haemolyticus, was possible.
Developmental Gains in Physical Fitness Components of Keyage and Older-than-Keyage Third-Graders
(2022)
Children who were enrolled according to legal enrollment dates (i.e., keyage third-graders aged eight to nine years) exhibit a positive linear physical fitness development (Fühner et al., 2021). However, children who were enrolled with a delay of one year or who repeated a grade (i.e., older-than-keyage children [OTK] aged nine to ten years in third grade) appear to exhibit a poorer physical fitness relative to what could be expected given their chronological age (Fühner et al., 2022). However, because Fühner et al. (2022) compared the performance of OTK children to predicted test scores that were extrapolated based on the data of keyage children, the observed physical fitness of these children could either indicate a delayed physical-fitness development or some physiological or psychological changes occurring during the tenth year of life. We investigate four hypotheses about this effect. (H1) OTK children are biologically younger than keyage children. A formula transforming OTK’s chronological age into a proxy for their biological age brings some of the observed cross-sectional age-related development in line with the predicted age-related development based on the data of keyage children, but large negative group differences remain. Hypotheses 2 to 4 were tested with a longitudinal assessment. (H2) Physiological changes due to biological maturation or psychological factors cause a stagnation of physical fitness development in the tenth year of life. H2 predicts a decline of performance from third to fourth grade also for keyage children. (H3) OTK children exhibit an age-related (temporary) developmental delay in the tenth year of life, but later catch up to the performance of age-matched keyage children. H3 predicts a larger developmental gain for OTK than for keyage children from third to fourth grade. (H4) OTK children exhibit a sustained physical fitness deficit and do not catch up over time. H4 predicts a positive development for keyage and OTK children, with no greater development for OTK compared to keyage children. The longitudinal study was based on a subset of children from the EMOTIKON project (www.uni-potsdam.de/emotikon). The physical fitness (cardiorespiratory endurance [6-minute-run test], coordination [star-run test], speed [20-m sprint test], lower [standing long jump test] and upper [ball push test] limbs muscle power, and balance [one-legged stance test]) of 1,274 children (1,030 keyage and 244 OTK children) from 32 different schools was tested in third grade and retested one year later in fourth grade. Results: (a) Both keyage and OTK children exhibit a positive longitudinal development from third to fourth grade in all six physical fitness components. (b) There is no evidence for a different longitudinal development of keyage and OTK children. (c) Keyage children (approximately 9.5 years in fourth grade) outperform age-matched OTK children (approximately 9.5 years in third grade) in all six physical fitness components. The results show that the physical fitness of OTK children is indeed impaired and are in support of a sustained difference in physical fitness between the groups of keyage and OTK children (H4).
This dissertation pursues the goal of expanding and specifying the diagnostic options for the disorder of acquired dyslexia in German-speaking persons with dyslexia (PwD).
The literature discusses various language-processing models that attempt to explain the cognitive process of written-language processing. All considerations, assessments, and analyses in this dissertation are based on the theoretical assumptions of the cognitive dual-route model of reading, which distinguishes between lexical-semantic and segmental, sub-lexical processing during reading and can thus represent the mutually independent abilities to read known and unknown words. The cognitively oriented diagnostic instrument DYMO (Dyslexie Modellorientiert), developed as part of this dissertation, assesses the reading abilities of PwD in order to locate the reading impairment as precisely as possible within the model and to provide a basis for planning reading-related therapy. It also takes into account components of the dual-route model of reading that have not yet been established in German-language diagnostics. These include subcomponents of visual analysis, which are responsible for letter identification and the encoding of letter positions, and subcomponents of the segmental reading route, which represent the step-by-step single-item reading process along this route. The item material in DYMO is controlled for several psycholinguistic variables, including variables that previously could not be systematically assessed in dyslexia diagnostics for German-speaking PwD, such as word length and the graphemic complexity of pseudowords.
The first publication underlying this dissertation (original work I) deals with the parameters and model components that are decisive for a comprehensive, model-based diagnostic assessment of acquired dyslexia. Considerations on the categorization of error types are also presented.
The second publication (original work II) presents the test instrument DYMO. The accompanying manual provides detailed information on the design and construction of the test, on the administration and scoring of the individual subtests, and on the classification of a performance into a performance range. Using extensively described case examples of two PwD, the administration, scoring, interpretation, and derivation of therapy goals are illustrated. The results of these case descriptions highlight the diagnostic contribution of DYMO and show that explicitly examining the subcomponents of visual analysis and of the segmental reading route, as well as including the variables word length and graphemic complexity, can specify the reading profile and make the entry into therapy more concrete.
The third publication (original work III) presents a systematic comparative study, based on a case series of twelve PwD, of the differences between DYMO and another cognitively based diagnostic instrument. It discusses to what extent DYMO can be a useful addition to the diagnostic process for acquired dyslexias. Furthermore, mildly and severely impaired PwD are compared in group analyses to examine whether DYMO offers an addition particularly for mildly impaired PwD. Because of DYMO's more complex item material (for example, due to the control of word length), it was assumed that mildly impaired PwD would show more conspicuous reading performance in DYMO subtests than in tasks of the other diagnostic instrument. This hypothesis was partially confirmed: mildly impaired PwD showed length effects more frequently than severely impaired PwD. Overall, however, the group difference was not as pronounced as expected.
Seventeen PwD were tested with the criterion-referenced, normed, and finalized DYMO material. Detailed findings for each individual PwD, followed by therapy implications, show that specifying a segmental reading deficit in the case of severely impaired pseudoword reading in particular can contribute to an extended statement about the model-based locus of impairment. This underlines the high informative value of the DYMO subtests and the relevance of a specific and detailed model-based assessment for explicit, individual therapy planning in acquired dyslexias.
Beliefs about teaching and learning, as part of teachers' professional competence, are already relevant during initial teacher education and have particular potential to develop during longer practical phases. Which factors matter for the development of beliefs during practical phases has, however, been insufficiently researched so far. In interviews, we asked N = 16 student teachers which learning opportunities played a role in the development of their beliefs during the practical semester. Using content analysis, we identified four overarching factors: the university-based learning support, the mentors, the pupils, and the reflection on one's own teaching experiences. Subordinate factors (e.g., classroom observations by university lecturers) were assigned to these overarching factors, and we describe why and under which circumstances these learning opportunities are relevant for the development of beliefs from the students' perspective.
Digital media have become an integral part of everyday life. One of the most central areas for our society, school education, must not fall behind here. Whenever the use of digitally supported tools makes pedagogical sense, it must be possible within a secure framework. The HPI Schul-Cloud has followed this vision, which was initiated by the National IT Summit in 2016 and precedes this report. Over the past five years, it has developed from a pilot project into an indispensable IT infrastructure for numerous schools. During the COVID-19 pandemic, it provided important support for many thousands of schools in fulfilling their educational mandate. It has thus more than achieved its goal of providing a future-proof and data-protection-compliant infrastructure for the digital support of teaching. Currently, around 1.4 million teachers and pupils across Germany and at German schools abroad use the HPI Schul-Cloud.
"European education begins at school." Especially in times of a renaissance of nationalisms and a noticeable shift to the right in Europe, this maxim seems more important than ever. The most comprehensive way to anchor a European dimension in the schools of the EU member states in the medium and long term is a binational or even international teacher-training programme. Establishing such programmes, however, faces high hurdles. Their number is small, and they exist only in the Franco-German context. The reasons for this are, first, the obstacles, difficult to overcome, that arise from the strongly diverging systems of study, recruitment, and training. Second, the field of teacher training is particularly strongly shaped by reforms. A cost-benefit analysis of the frequently required and resource-intensive adaptation of programmes on the one hand, and the small number of graduates on the other, therefore turns out negative at many universities. A look back at the Mainz-Dijon cooperation, which has existed since 2000, leaves a mixed record. The restructuring of French teacher training currently under way offers the opportunity to make the binational teacher-training component of this cooperation more integrated. The Loi Blanquer of 26 July 2019 brings the two systems closer together and, thanks in part to already existing legal instruments, makes it possible to shorten the duration of training and to improve recognition practice.
The reform of the Common European Asylum System (CEAS) is one of the greatest challenges and most pressing tasks of the EU and its member states. The question of "fair burden-sharing" in asylum and migration policy is putting the cohesion of the EU to the test. Since the failed negotiations on the CEAS reform in 2016/2017, the member states have been trying to strike a balance between the principles of solidarity and responsibility, as Article 80 TFEU stipulates for the CEAS. Depending on the interests involved, however, very different understandings lie behind these principles.
This thesis examines the reform efforts on the CEAS after the Commission presented its proposals in September 2020 and sheds light on the diverging interests of the member states with regard to the reception and distribution of refugees. The aim of the thesis is to assess the prospects of an agreement on the principles of solidarity and responsibility. To this end, the obligations under asylum law based on international agreements such as the Geneva Refugee Convention are first presented. The CEAS and the Dublin system, which assigns responsibility for asylum procedures to the state of first entry, as well as the causes of its failure, are then analysed.
This division of responsibility, which leads to a disproportionate burden on the member states in the south, is a crystallization point for conflicts, mutual accusations, and distrust between the member states. As a result of actual overburdening and a partly self-inflicted inability to fulfil their CEAS obligations, the southern states call for support from the north and in some cases even pursue a policy of laissez-passer. Partly catastrophic conditions in procedures, accommodation, and care for refugees create obstacles to returns and put pressure on the destination states to show more solidarity.
Based on this finding, the meaning of the principle of solidarity in Article 80 TFEU is examined in normative and descriptive terms. Normatively, it is an abstract legal obligation of mutual support, the concrete form of which lies within the political discretion of the member states. Descriptively, "solidarity" can be understood as the purpose that the realization of individual interests requires a collective effort, which in turn promotes the common good and is thus in the interest of all. Accordingly, all member states should have an interest in meeting the challenges of migration to Europe.
The interests of the member states, however, point in a different direction. The Mediterranean states, such as Greece and Italy, which are heavily burdened by the arrival of people seeking protection from the south, demand a departure from the Dublin system. The migration-sceptical Visegrád states essentially refuse any support in reception and argue that they fulfil their legal obligations. States that long pursued a liberal migration policy and were popular destination countries, such as Sweden, have been struggling since the migration crisis of 2015/2016 to find a migration-policy course that does not further strengthen right-wing populist forces. The main destination countries, Germany and France, also try, in line with their respective domestic debates, to prevent secondary migration and want to support the external-border states in different ways, with Germany supporting the relocation of all those seeking protection.
The Commission's proposals presented in September 2020 attempt to take these different interests into account. A border procedure is intended to reduce the number of refugees entering the EU and requiring relocation. Changes to the Dublin criteria are intended to extend the responsibility of the potential destination countries in order to relieve the southern states and to counteract secondary migration. With the same aim, a new solidarity mechanism is to provide for the relocation of unaccompanied minors and of persons rescued at sea. In times of crisis, this is to develop into a general relocation of all those seeking protection, while solidarity is still to be allowed to take various forms.
In view of the negotiations during the German Council Presidency and the interim results achieved, there is scepticism that the member states will soon agree on a CEAS reform. The interests of the member states, also with regard to solidarity, lie too far apart. Moreover, with regard to European integration and the future of the EU, the worrying question arises of what the common good in asylum policy is supposed to be, a good that lies in the interest of all and would turn the joint effort into an individual interest of each member state. For, unlike the creation of the Schengen area as a space without internal borders, gains in prosperity from the reception of refugees are not to be expected for the time being.
The transformation of public responsibility in the field of social welfare has in recent years led to increased research interest in employees located at the interface between public administration and direct contact with clients. Using school social work at Potsdam primary schools as an example, this thesis examines the extent to which trust in clients influences school social workers' use of discretion. Michael Lipsky's street-level bureaucracy framework provides the theoretical frame, while qualitative interviews with school social workers form the basis for answering the research question. The results show that a lower degree of trust in clients leads school social workers to try to reduce their workload through coping strategies such as rationing resources and mentally withdrawing from clients. A higher degree of trust in clients, by contrast, leads them to use their discretion in favour of these clients, for example by circumventing data-protection rules to process cases more effectively.
Diet analysis of bats killed at wind turbines suggests large-scale losses of trophic interactions
(2022)
Agricultural practice has led to landscape simplification and biodiversity decline, yet recently, energy-producing infrastructures, such as wind turbines, have been added to these simplified agroecosystems, turning them into multi-functional energy agroecosystems. Here, we studied the trophic interactions of bats killed at wind turbines using a DNA metabarcoding approach to shed light on how turbine-related bat fatalities may affect local habitats. Specifically, we identified insect DNA in the stomachs of common noctule bats (Nyctalus noctula) killed by wind turbines in Germany to infer in which habitats these bats had hunted. Common noctule bats consumed a wide variety of insects from different habitats, ranging from aquatic to terrestrial ecosystems (e.g., wetlands, farmland, forests, and grasslands). Agricultural and silvicultural pest insects made up about 20% of the insect species consumed by the studied bats. Our study suggests that the potential damage of wind energy production goes beyond the loss of bats and the decline of bat populations. Bat fatalities at wind turbines may lead to the loss of trophic interactions and ecosystem services provided by bats, which may contribute to functional simplification and impaired crop production in multi-functional ecosystems.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. Combined with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created from a simple sketch at the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design tools and 3D printing, could, with the help of biomimicry, lead to new analysis methods in science and new medical devices in medicine.
The Electrical Discharge Machining (EDM) process is commonly applied to deform or mold hard metals that are difficult to work with normal machinery. A workpiece submerged in an electrolyte is deformed while in close vicinity to an electrode. When a high voltage is applied between the workpiece and the electrode, sparks create cavitations on the substrate, which remove material that is then flushed away by the electrolyte. Such surfaces are usually analysed in terms of roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created in which craters and ridges form on an initially flat substrate. These modelled substrates were then analysed with the curvature analysis method at different processing times. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for use in the design of new cell culture substrates for stem cells.
The Venus flytrap can shut its jaws at an amazing speed. The shutting mechanism may be of interest for science and is an example of a so-called mechanically bi-stable system: it has two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model called the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is bi-stable. Developing this idea further using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again. This could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers located on its side. The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal wall using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to achieve better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how a device that mimics the tapeworm could be created. In the end, a prototype was created which was able to attach to a pork loin at an underpressure of 20 kPa and to eject its hooks at an underpressure of 50 kPa or above.
These three projects demonstrate how digital tools and biomimicry can be used together to arrive at applicable solutions in science and in medicine.
Digitale Musikmedien und -technologien in der Musiklehrer*innenausbildung an der Universität Potsdam
(2022)
It is estimated that data scientists spend up to 80% of their time exploring, cleaning, and transforming their data. A major reason for this expenditure is the lack of knowledge about the data being used, which often come from different sources and have heterogeneous structures. As a means of describing various properties of data, metadata can help data scientists understand and prepare their data, saving time for innovative and valuable data analytics. However, metadata do not always exist: some data file formats are not capable of storing them; metadata may have been deleted for privacy concerns; and legacy data may have been produced by systems that were not designed to store and handle metadata. As data are being produced at an unprecedentedly fast pace and stored in diverse formats, manually creating metadata is not only impractical but also error-prone, demanding automatic approaches for metadata detection.
In this thesis, we focus on detecting metadata in CSV files, a type of plain-text file that, similar to spreadsheets, may contain different types of content at arbitrary positions. We propose a taxonomy of metadata in CSV files and specifically address the discovery of three types of metadata: line and cell types, aggregations, and primary keys and foreign keys.
Data are organized in an ad hoc manner in CSV files and do not follow the fixed structure that is assumed by common data processing tools. Detecting the structure of such files is a prerequisite for extracting information from them, which can be addressed by detecting the semantic type, such as header, data, derived, or footnote, of each line or each cell. We propose the supervised-learning approach Strudel to detect the type of lines and cells. CSV files may also include aggregations. An aggregation represents the arithmetic relationship between a numeric cell and a set of other numeric cells. Our proposed AggreCol algorithm is capable of detecting aggregations of five arithmetic functions in CSV files. Note that stylistic features, such as font style and cell background color, do not exist in CSV files; our proposed algorithms therefore address the respective problems using only content, contextual, and computational features.
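To make the notion of an aggregation concrete, the toy sketch below searches a numeric table for columns that equal the row-wise sum of other columns. This brute-force search only illustrates the kind of relationship AggreCol detects (for one of the five arithmetic functions); it is not the AggreCol algorithm itself, and the function name and tolerance are invented:

```python
import itertools
import numpy as np

def detect_sum_aggregations(table, tol=1e-6):
    """Find columns that equal the sum of a set of other numeric columns.

    table : 2-D numpy array of numeric cell values (NaN for non-numeric cells)
    Returns a list of (target_column, component_columns) candidates.
    """
    n_cols = table.shape[1]
    numeric = [c for c in range(n_cols) if not np.isnan(table[:, c]).any()]
    found = []
    for target in numeric:
        others = [c for c in numeric if c != target]
        for size in range(2, len(others) + 1):
            for combo in itertools.combinations(others, size):
                # check whether the target column is the row-wise sum of the combination
                if np.allclose(table[:, target], table[:, list(combo)].sum(axis=1), atol=tol):
                    found.append((target, combo))
    return found

# toy example: column 2 is the row-wise sum of columns 0 and 1
t = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 9.0]])
print(detect_sum_aggregations(t))  # [(2, (0, 1))]
```

The exhaustive search over column combinations is exponential and only suitable for tiny tables; a practical detector needs pruning strategies beyond this illustration.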
Storing a relational table is another common use of CSV files. Primary keys and foreign keys are important metadata for relational databases, but they are usually not present when database instances are dumped as plain-text files. We propose the HoPF algorithm to holistically detect both constraints in relational databases. Our approach is capable of distinguishing true primary and foreign keys from the large number of spurious unique column combinations and inclusion dependencies that can be detected by state-of-the-art data profiling algorithms.
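As a hedged illustration of the candidate structures involved (not of HoPF itself), the sketch below enumerates single-column unique column combinations and inclusion dependencies between two pandas tables; real key discovery additionally has to score and prune the many spurious candidates. Table and column names are made up.

import pandas as pd

def unique_columns(df: pd.DataFrame):
    """Single-column unique column combinations: primary-key candidates."""
    return [c for c in df.columns if df[c].notna().all() and df[c].is_unique]

def inclusion_dependencies(child: pd.DataFrame, parent: pd.DataFrame):
    """Column pairs (child_col, parent_col) where every child value also
    appears in the parent column: foreign-key candidates."""
    inds = []
    for c in child.columns:
        child_values = set(child[c].dropna())
        for p in parent.columns:
            if child_values and child_values <= set(parent[p].dropna()):
                inds.append((c, p))
    return inds

# Illustrative tables with hypothetical names and data.
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "city": ["Berlin", "Potsdam", "Berlin"]})
orders = pd.DataFrame({"order_id": [10, 11], "customer_id": [1, 3]})

print(unique_columns(customers))                  # ['customer_id'] -> PK candidate
print(inclusion_dependencies(orders, customers))  # [('customer_id', 'customer_id')] -> FK candidate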
Botulinum neurotoxin (BoNT) is used for the treatment of a number of ailments. The activity of toxin isolated from bacterial cultures is frequently tested in the mouse lethality assay. Apart from the ethical concerns inherent to this assay, species-specific differences in the affinity for different BoNT serotypes yield activity results that differ from the activity in humans; BoNT/B, for example, is more active in mice than in humans. The current study shows that the stimulus-dependent release of a luciferase from a differentiated human neuroblastoma-based reporter cell line (SIMA-hPOMC1-26-Gluc) was inhibited to the same extent by clostridial and recombinant BoNT/A, whereas both clostridial and recombinant BoNT/B inhibited the release to a lesser extent and only at much higher concentrations, reflecting the low activity of BoNT/B in humans. By contrast, the genetically modified BoNT/B-MY, which has an increased affinity for human synaptotagmin, the BoNT/B protein receptor, inhibited luciferase release effectively and with an EC50 comparable to that of recombinant BoNT/A. This was due to an enhanced uptake of BoNT/B-MY into the reporter cells compared with the recombinant wild-type toxin. Thus, the SIMA-hPOMC1-26-Gluc cell assay is a versatile tool for determining the activity of different BoNT serotypes, providing human-relevant dose-response data.
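Where dose-response data such as those from a cell-based release assay are available, an EC50 is typically obtained by fitting a four-parameter logistic (Hill) curve. The sketch below is a generic illustration of such a fit with invented, synthetic data; it is not the analysis pipeline used in the study, and all concentrations and response values are made up.

import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, bottom, top, ec50, hillslope):
    """Four-parameter logistic curve for an inhibitory dose-response."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hillslope)

# Synthetic data: toxin concentration (pM) vs. luciferase release (% of control).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
response = np.array([98, 95, 85, 60, 30, 12, 6])

params, _ = curve_fit(dose_response, conc, response, p0=[5, 100, 0.5, 1.0])
bottom, top, ec50, hillslope = params
print(f"estimated EC50 = {ec50:.2f} pM")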
Infectious diseases are an increasing threat to biodiversity and human health. Therefore, developing a general understanding of the drivers shaping host-pathogen dynamics is of key importance in both ecological and epidemiological research. Disease dynamics are driven by a variety of interacting processes such as individual host behaviour, spatiotemporal resource availability or pathogen traits like virulence and transmission. External drivers such as global change may modify the system conditions and, thus, the disease dynamics. Despite their importance, many of these drivers are often simplified and aggregated in epidemiological models and the interactions among multiple drivers are neglected.
In my thesis, I investigate disease dynamics using a mechanistic approach that includes both bottom-up effects - from landscape dynamics to individual movement behaviour - and top-down effects - from pathogen virulence to host density and contact rates. To this end, I extended an established spatially explicit individual-based model that stochastically simulates epidemiological and ecological processes, incorporating a dynamic resource landscape whose timing can be shifted away from the timing of host population dynamics (chapter 2). I also added the evolution of pathogen virulence along a theoretical virulence-transmission trade-off (chapter 3). In chapter 2, I focus on bottom-up effects, specifically how a temporal shift of resource availability away from the timing of biological events of the host species - as expected under global change - scales up to host-pathogen interactions and disease dynamics. My results show that the formation of temporary disease hotspots, in combination with directed individual movement, acted as a key driver of pathogen persistence even under highly unfavourable conditions for the host. Even as drivers such as global change further increase the likelihood of unfavourable interactions between host species and their environment, pathogens can thus continue to persist with their hosts. In chapter 3, I demonstrate that the top-down effect of pathogen-associated mortality on the host population can be mitigated by selection for less virulent pathogen strains when host densities are reduced through mismatches between seasonal resource availability and host life-history events. In chapter 4, I combined parts of both theoretical models into a new model that includes individual host movement decisions and the evolution of pathogen virulence to simulate pathogen outbreaks in realistic landscapes. I was able to match simulated patterns of pathogen spread to observed patterns from long-term outbreak data of classical swine fever in wild boar in Northern Germany. The observed disease course was best explained by a simulated highly virulent strain, whereas sampling schemes and vaccination campaigns could explain differences in the age distribution of infected hosts. My model helps to understand and disentangle how the combination of individual decision-making and the evolution of virulence can act as important drivers of pathogen spread and persistence.
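To make the class of model concrete, the following is a deliberately minimal, hypothetical sketch of a stochastic, spatially explicit individual-based SIR-type simulation with host movement and virulence-dependent mortality. It only illustrates the general modelling approach described above; it is not the thesis model, and all parameter values, the random-walk movement rule, and the grid size are invented for illustration.

import random

GRID = 20            # grid side length (cells)
BETA = 0.4           # within-cell transmission probability per step
VIRULENCE = 0.05     # per-step disease-induced mortality of infected hosts
RECOVERY = 0.02      # per-step recovery probability
STEPS = 200

class Host:
    def __init__(self, x, y, state="S"):
        self.x, self.y, self.state = x, y, state

    def move(self):
        # simple random walk on a torus; an informed-movement rule would go here
        self.x = (self.x + random.choice([-1, 0, 1])) % GRID
        self.y = (self.y + random.choice([-1, 0, 1])) % GRID

def step(hosts):
    for h in hosts:
        h.move()
    # transmission: susceptible hosts sharing a cell with an infected host
    infected_cells = {(h.x, h.y) for h in hosts if h.state == "I"}
    for h in hosts:
        if h.state == "S" and (h.x, h.y) in infected_cells and random.random() < BETA:
            h.state = "I"
    # recovery and virulence-dependent mortality
    survivors = []
    for h in hosts:
        if h.state == "I":
            if random.random() < VIRULENCE:
                continue               # host dies and is removed from the population
            if random.random() < RECOVERY:
                h.state = "R"
        survivors.append(h)
    return survivors

random.seed(1)
hosts = [Host(random.randrange(GRID), random.randrange(GRID)) for _ in range(500)]
for h in hosts[:5]:
    h.state = "I"                      # seed the outbreak
for t in range(STEPS):
    hosts = step(hosts)
print({s: sum(h.state == s for h in hosts) for s in "SIR"})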
As I show across the chapters of this thesis, the interplay of both bottom-up and top-down processes is a key driver of disease dynamics in spatially structured host populations, as they ultimately shape host densities and contact rates among moving individuals. My findings are an important step towards a paradigm shift in disease ecology away from simplified assumptions towards the inclusion of mechanisms, such as complex multi-trophic interactions, and their feedbacks on pathogen spread and disease persistence. The mechanisms presented here should be at the core of realistic predictive and preventive epidemiological models.
In this paper we examine the effect of uncertainty on readers’ predictions about meaning. In particular, we were interested in how uncertainty might influence the likelihood of committing to a specific sentence meaning. We conducted two event-related potential (ERP) experiments using particle verbs such as turn down and manipulated uncertainty by constraining the context such that readers could either be highly certain about the identity of a distant verb particle, such as turn the bed […] down, or less certain due to competing particles, such as turn the music […] up/down. The study was conducted in German, where verb particles appear clause-finally and may be separated from the verb by a large amount of intervening material. We hypothesised that this separation would encourage readers to predict the particle, and that high certainty would make prediction of a specific particle more likely than low certainty. If a specific particle was predicted, this would reflect a strong commitment to sentence meaning that should incur a higher processing cost if the prediction turned out to be wrong. If a specific particle was less likely to be predicted, commitment should be weaker and the processing cost of a wrong prediction lower. If true, this could suggest that uncertainty discourages predictions via an unacceptable cost-benefit ratio. However, despite the clear predictions made by the literature, it remained surprisingly unclear whether the uncertainty manipulation affected the two ERP components studied, the N400 and the PNP. Bayes factor analyses showed that the evidence for our a priori hypothesised effect sizes was inconclusive, although there was decisive evidence against a priori hypothesised effect sizes larger than 1 μV for the N400 and larger than 3 μV for the PNP. We attribute the inconclusive finding to properties of verb-particle dependencies that differ from the verb-noun dependencies in which the N400 and PNP are typically studied.
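As a hedged illustration of the kind of Bayes factor reasoning referred to here (not the models actually fitted in the paper), the sketch below uses the Savage-Dickey density ratio for a single ERP effect: given an observed effect estimate and its standard error, it compares a point null of no effect against normal priors whose scale encodes an a priori hypothesised effect size in μV. All numbers are invented.

from scipy.stats import norm

def savage_dickey_bf01(estimate_uv, se_uv, prior_sd_uv):
    """Bayes factor in favour of the null (effect = 0 μV) over a
    Normal(0, prior_sd_uv**2) alternative, via the Savage-Dickey ratio."""
    post_var = 1.0 / (1.0 / se_uv**2 + 1.0 / prior_sd_uv**2)
    post_mean = post_var * (estimate_uv / se_uv**2)
    posterior_at_zero = norm.pdf(0.0, loc=post_mean, scale=post_var**0.5)
    prior_at_zero = norm.pdf(0.0, loc=0.0, scale=prior_sd_uv)
    return posterior_at_zero / prior_at_zero

# Hypothetical N400 effect: 0.4 μV with a standard error of 0.5 μV.
for prior_sd in (0.5, 1.0, 3.0):   # priors centred on 0 with increasing scale
    bf01 = savage_dickey_bf01(0.4, 0.5, prior_sd)
    print(f"prior SD = {prior_sd} μV -> BF01 = {bf01:.2f}")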