This work analyzed functional and regulatory aspects of the hitherto little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related, intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP localized specifically to non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, it was clearly shown that loss of EPSINOID2 leads to an increase in root hair density through analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full deletion mutant. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p. Interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming EPSINOID2 as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation. Treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN box degron motif, associated with degradation through a ubiquitin/proteasome-dependent pathway, was identified in the EPSINOID2 sequence. In line with tight dosage regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in non-hair (N) cells only, hinting at yet another layer of regulation.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis advances our understanding of permafrost dynamics and refines their representation within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component of simulating permafrost physics.
The first research question reviews fundamental model prerequisites for the representation of permafrost soils within land surface modeling. It includes a first-of-its-kind comparison between LSMs in CMIP6 to reveal their differences and shortcomings in key permafrost physics parameters. Overall, each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system. Choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis assesses its influence on simulated soil temperatures. In most regions, the Sturm run exhibits a substantial decrease in the cold bias present in the control run, though with a distinctive warm-bias overshoot in mountainous areas. The Sturm experiment also effectively addressed the overestimation of permafrost extent in the control run, albeit with a substantial reduction in permafrost extent over mountainous areas. ALT results remain relatively consistent compared to the control run. These outcomes align with our initial hypothesis, which anticipated that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
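For illustration, the snow scheme at issue can be reduced to a density-conductivity regression. The following minimal Python sketch uses the coefficients of the published Sturm et al. (1997) fit (density in g/cm³); the exact form implemented in CLM5 should be checked against the model documentation before reuse:

```python
# Minimal sketch of the Sturm et al. (1997) snow thermal conductivity
# regression (density rho in g/cm^3); coefficients follow the published fit.
def sturm_conductivity(rho_gcm3: float) -> float:
    """Effective snow thermal conductivity in W m^-1 K^-1."""
    if rho_gcm3 < 0.156:
        return 0.023 + 0.234 * rho_gcm3
    return 0.138 - 1.01 * rho_gcm3 + 3.233 * rho_gcm3 ** 2

# Denser snow conducts heat better, i.e. it insulates the soil less:
for rho in (0.1, 0.2, 0.3, 0.4):
    print(f"rho = {rho:.1f} g/cm^3 -> k_eff = {sturm_conductivity(rho):.3f} W/m/K")
```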
In summary, this thesis advances the understanding of permafrost dynamics and their integration into LSMs, disentangling the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
A comprehensive study of seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrence. Understanding the underlying mechanisms and patterns is vital, given the potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties in CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly through the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep neural network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. The results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability, as well as for larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
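To make this evaluation concrete, the following Python sketch (entirely synthetic data; the effective friction coefficient, grid size, and triggering probability are illustrative assumptions, not the thesis's values) computes ΔCFS = Δτ + μ′Δσn on a set of grid cells and scores it with a ROC AUC against synthetic aftershock occurrence:

```python
# Illustrative sketch (synthetic data): Coulomb failure stress change and a
# ROC-AUC score for how well it separates grid cells with/without aftershocks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000                         # grid cells around a hypothetical mainshock
d_tau = rng.normal(0.0, 0.5, n)    # shear stress change on receivers [MPa]
d_sigma = rng.normal(0.0, 0.3, n)  # normal stress change (unclamping > 0) [MPa]
mu_eff = 0.4                       # effective friction coefficient (assumed)

dcfs = d_tau + mu_eff * d_sigma    # Delta_CFS = d_tau + mu' * d_sigma_n

# Synthetic "aftershocks": more likely where Delta_CFS is positive.
p = 1.0 / (1.0 + np.exp(-3.0 * dcfs))
has_aftershock = rng.random(n) < p

print("ROC AUC of Delta_CFS:", round(roc_auc_score(has_aftershock, dcfs), 3))
```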
To corroborate Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data and then comprehensively analyzed the Coulomb failure stress changes on the aftershock planes. All large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global dataset of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations in seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
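The two seismicity laws referred to here can be sketched briefly: the maximum-likelihood b-value estimator of Aki (1965) and the Omori-Utsu rate law, here applied to synthetic magnitudes (the completeness magnitude Mc and all parameter values are illustrative, not from the thesis's catalogs):

```python
# Sketch: maximum-likelihood b-value (Aki 1965) and the Omori-Utsu rate law
# n(t) = K * (t + c)**(-p), on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
Mc = 3.0                                   # completeness magnitude (assumed)
b_true = 1.0
# Gutenberg-Richter magnitudes above Mc are exponential with scale log10(e)/b.
mags = Mc + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

# Aki (1965) MLE; for binned catalog magnitudes, Mc is usually shifted by
# half a bin width (omitted here for continuous synthetic magnitudes).
b_hat = np.log10(np.e) / (mags.mean() - Mc)
print(f"estimated b-value: {b_hat:.2f}")

def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Omori-Utsu aftershock rate in events per day."""
    return K * (t_days + c) ** (-p)

print("rate on day 1:", round(omori_rate(1.0), 1), "events/day")
```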
The findings in this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid in the development of more accurate probabilistic seismic hazard forecasts and risk-reduction strategies.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP is not scalable, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
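A toy Python sketch of this ILP structure (using PuLP; the fragments, sizes, queries, and load-balancing bound are invented for illustration and are far simpler than the thesis's models) could look as follows:

```python
# Toy sketch of the allocation ILP: assign queries to nodes so that each query
# is served once, a node may serve a query only if it stores every fragment
# that query touches, and total stored data is minimized.
import pulp

fragments = {"F1": 4, "F2": 2, "F3": 3}                 # fragment -> size
queries = {"Q1": ["F1", "F2"], "Q2": ["F2", "F3"], "Q3": ["F3"]}
nodes = ["N1", "N2"]

store = pulp.LpVariable.dicts("store", (nodes, list(fragments)), cat="Binary")
serve = pulp.LpVariable.dicts("serve", (nodes, list(queries)), cat="Binary")

prob = pulp.LpProblem("partial_replication", pulp.LpMinimize)
# Objective: minimize total allocated data across all nodes.
prob += pulp.lpSum(store[n][f] * size for n in nodes
                   for f, size in fragments.items())
# Every query is served by exactly one node.
for q in queries:
    prob += pulp.lpSum(serve[n][q] for n in nodes) == 1
# Serving a query requires storing all fragments it accesses.
for n in nodes:
    for q, frs in queries.items():
        for f in frs:
            prob += serve[n][q] <= store[n][f]
# Crude load balancing: no node serves more than two queries.
for n in nodes:
    prob += pulp.lpSum(serve[n][q] for q in queries) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nodes:
    print(n, "stores", [f for f in fragments if store[n][f].value() == 1])
```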
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we are flexible to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. Manufacturers must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analyses reveal that deep reinforcement learning excels in various applications, especially in dynamic production control tasks; its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep reinforcement learning based approaches is limited to specific job shop scenarios and often overlooks potential synergies from combined resources. Furthermore, the analysis highlights the rare implementation of multi-agent systems and semi-heterarchical systems in practical settings. A notable gap remains in the integration of deep reinforcement learning into hyper-heuristics.
To bridge these research gaps, this thesis introduces a deep reinforcement learning based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimisation complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
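As a rough illustration of the hyper-heuristic idea (not the thesis's implementation), the following sketch uses tabular Q-learning as a stand-in for deep RL: the agent learns which dispatching rule to apply rather than choosing jobs directly. The rules, state encoding, and reward are invented for this toy example:

```python
# Toy hyper-heuristic sketch: a Q-learning agent selects among dispatching
# rules (SPT/FIFO/EDD) to minimize tardiness on randomly generated jobs.
import random
from collections import defaultdict

RULES = ["SPT", "FIFO", "EDD"]          # candidate low-level heuristics

def dispatch(queue, rule):
    if rule == "SPT":
        return min(queue, key=lambda j: j["proc"])   # shortest processing time
    if rule == "EDD":
        return min(queue, key=lambda j: j["due"])    # earliest due date
    return queue[0]                                  # FIFO

def episode(Q, eps, alpha=0.1, gamma=0.95):
    clock, total_tardiness = 0, 0
    queue = [{"proc": random.randint(1, 9), "due": random.randint(5, 40)}
             for _ in range(12)]
    while queue:
        state = min(len(queue), 5)                   # coarse state: queue size
        if random.random() < eps:
            a = random.randrange(len(RULES))         # explore
        else:
            a = max(range(len(RULES)), key=lambda i: Q[(state, i)])
        job = dispatch(queue, RULES[a])
        queue.remove(job)
        clock += job["proc"]
        reward = -max(0, clock - job["due"])         # penalize tardiness
        total_tardiness -= reward
        nxt = min(len(queue), 5)
        best_next = (max(Q[(nxt, i)] for i in range(len(RULES)))
                     if queue else 0.0)
        Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
    return total_tardiness

random.seed(0)
Q = defaultdict(float)
for ep in range(2000):
    episode(Q, eps=max(0.05, 1.0 - ep / 1000))       # decaying exploration
print("preferred rule for a full queue:",
      RULES[max(range(len(RULES)), key=lambda i: Q[(5, i)])])
```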
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, of which many appear to be endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the view that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to archaeal communities, whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes, independent of geography and peatland type.
This thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet the role of visual information in thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
(1a) Das ist der Koch, der die Braut verfolgt.
     this is the.NOM cook who.NOM the.ACC bride follows
     'This is the cook who is following the bride.'

(1b) Das ist der Koch, den die Braut verfolgt.
     this is the.NOM cook whom.ACC the.NOM bride follows
     'This is the cook whom the bride is following.'
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent about 300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Diglossic translanguaging
(2024)
This book examines how German-speaking Jews living in Berlin make sense and make use of their multilingual repertoire. With a focus on lexical variation, the book demonstrates how speakers integrate Yiddish and Hebrew elements into German to index belonging and to position themselves within the Jewish community. Linguistic choices are shaped by language ideologies (e.g., authenticity, prescriptivism, nostalgia). Speakers translanguage when using their multilingual repertoire, but do so in a diglossic way, using elements from different languages for specific domains.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. For one, this is due to the scarcity of measurement time series long enough to, for example, infer trends. For another, it is difficult – if not impossible – to develop process-based models, owing to the complexity and multitude of processes involved in high alpine sediment dynamics. Knowledge has therefore so far been confined to conceptual models (which do not allow deriving concrete timings or magnitudes for individual catchments) or qualitative estimates ('higher export in warmer years') that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained popularity for modeling sediment dynamics, since their black-box nature suits the problem at hand: relatively well-understood input and output data, linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.), were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
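As a rough illustration of the QRF idea (not the thesis's implementation), the following Python sketch approximates quantile predictions from the per-tree spread of a scikit-learn random forest on synthetic data (a simplification of a true quantile regression forest) and flags out-of-observation-range (OOOR) inputs; all predictor names and values are invented:

```python
# Sketch: quantile estimates for a sediment proxy from per-tree predictions
# of a random forest, plus a simple out-of-observation-range (OOOR) flag.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(0, 30, n),      # discharge [m^3/s]
    rng.uniform(0, 40, n),      # precipitation [mm/d]
    rng.uniform(-10, 20, n),    # air temperature [degC]
])
y = 5 * X[:, 0] ** 1.5 + 20 * X[:, 1] + rng.normal(0, 50, n)  # toy target

rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(X, y)

X_new = np.array([[25.0, 10.0, 15.0], [60.0, 5.0, 12.0]])  # 2nd row is OOOR
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])
q10, q50, q90 = np.percentile(per_tree, [10, 50, 90], axis=0)

# Flag predictors outside the training range (the forest cannot extrapolate).
ooor = (X_new < X.min(axis=0)) | (X_new > X.max(axis=0))
for i in range(len(X_new)):
    print(f"day {i}: median={q50[i]:.0f}, 80% interval=[{q10[i]:.0f},"
          f" {q90[i]:.0f}], OOOR={ooor[i].any()}")
```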
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce CO2 emissions, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based, a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) cost: still relatively high, owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode material, which hinders widespread availability, especially in low-income countries; 2) efficiency: theoretically, silicon should deliver around 29%, yet most commercially available silicon solar cells reach 18–22%; 3) temperature sensitivity: efficiency decreases as temperature increases, affecting output; 4) resource constraints: silicon as a raw material is not available in all countries, creating supply chain challenges.
Perovskite solar cells arose in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, the technology attracted the attention of the solar cell community and raised hopes for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block preventing perovskite solar cells from reaching the market. Lead is a heavy, bioavailable element that makes perovskite solar cells an environmentally unfriendly technology. As a result, scientists have tried to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable owing to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, which makes them candidates for the highest possible single-junction solar cell efficiency, around 30.1% according to the Shockley-Queisser limit. However, tin perovskite efficiencies still lag below 15% and are poorly reproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) oxidation of tin(II) to tin(IV), driven by oxygen, water, or even the solvent, as was discovered recently; 2) fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and accelerates crystallization; 3) energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination that devastates photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for perovskite processing. Finding a stable solvent may prove decisive for the stability of tin-based perovskites. We started with a database of over 2,000 solvents and narrowed it down to a series of 12 new solvents suitable for processing FASnI3 experimentally, by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that solar cells can be manufactured using a novel solvent system that outperforms cells produced with DMSO. Our results offer guidance for the search for novel solvents, or solvent mixtures, for processing stable tin-based perovskites.
The precise control of perovskite precursor crystallization within a thin film is of utmost importance for optimizing solar cell efficiency and manufacturing. Because of the fast crystallization of tin, depositing tin-based perovskite films from solution is more difficult than manufacturing the more commonly used lead-based films. The most effective route to high efficiencies is deposition from dimethyl sulfoxide (DMSO), which slows the rapid formation of the tin-iodine network responsible for perovskite formation; its drawback is that DMSO oxidizes tin during processing. This thesis presents a potentially fruitful alternative in which 4-(tert-butyl)pyridine substitutes for DMSO, regulating crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a much-reduced defect density, resulting in higher charge mobility and better photovoltaic performance, making pyridine an attractive choice for the deposition of tin perovskite films.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published in the current body of research are contradictory, resulting in a dearth of unanimity, and comprehensive information about the dynamics of charge extraction is lacking. This thesis ascertains the energy band positions of tin perovskites using Kelvin probe (KP) and photoelectron yield spectroscopy, constructs a precise band diagram for the commonly utilized device stack, and comprehensively analyzes the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was utilized to investigate the charge extraction kinetics of frequently studied charge transport layers: NiOx and PEDOT as hole transport layers, and C60, ICBA, and PCBM as electron transport layers. Hall effect, KP, and TRPL measurements consistently yield a p-doping concentration in FASnI3 of 1.5 × 10^17 cm^-3. Our findings highlight the need to design charge extraction layers specifically for tin halide perovskites, rather than adopting those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on two solvents. The first dissolves the perovskite powder to form the precursor solution and is usually called the solvent. The second precipitates the perovskite precursor, forming the wet film: a supersaturated solution of perovskite precursor in the remaining solvent and antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our research, we proposed new solvents to dissolve FASnI3, but when we tried to form films, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which we tested around 73 selected antisolvents against 15 solvents that can form a 1 M FASnI3 solution. For the first time for tin perovskites, we used a machine learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor governing the solvent-antisolvent interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest for a DMSO-free tin perovskite device so far.
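Purely as a hypothetical illustration of such a screening model (the features, labels, and decision rule below are invented; the thesis's actual descriptor set and measurements differ), one could train a classifier on polarity-based features of solvent-antisolvent pairs:

```python
# Hypothetical sketch: predict whether a solvent-antisolvent pair yields a
# dark (well-crystallized) film from relative-polarity features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400                                   # hypothetical solvent/antisolvent pairs
solvent_pol = rng.uniform(0.2, 0.8, n)    # relative polarity of the solvent
anti_pol = rng.uniform(0.0, 0.6, n)       # relative polarity of the antisolvent
X = np.column_stack([solvent_pol, anti_pol, np.abs(solvent_pol - anti_pol)])

# Toy labeling rule: a large polarity difference -> dark film (label 1).
y = (np.abs(solvent_pol - anti_pol) + 0.05 * rng.normal(size=n) > 0.3).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
clf.fit(X, y)
print("feature importances (solvent, antisolvent, |difference|):",
      clf.feature_importances_.round(2))
```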
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes including low-cost, light-weight, aesthetically pleasing appearance, and flexibility. Different from traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels together.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulation of the energetic offsets between CT states and excitons, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. Notably, there is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT states.
The second focus of the thesis is technical: the influence of optical artifacts in differential absorption spectroscopy upon changes of sample configuration and active layer thickness. Optical simulations and experiments demonstrate, thoroughly and systematically, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. The study concludes with a generalized methodology, based on an inverse optical transfer matrix formalism, to correct spectra and decay dynamics distorted by optical artifacts.
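The forward model underlying such an inverse transfer-matrix approach can be sketched generically. The following normal-incidence characteristic-matrix example (layer indices and thicknesses are illustrative, not a real device stack, and the thesis's inverse formalism goes well beyond this) computes the reflectance of a thin-film stack:

```python
# Generic forward transfer-matrix sketch (normal incidence) for the
# reflectance of a thin-film stack between ambient and substrate.
import numpy as np

def stack_reflectance(n_layers, d_layers_nm, wavelength_nm, n_in=1.0, n_sub=1.5):
    """Reflectance of a multilayer between ambient (n_in) and substrate (n_sub)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers_nm):
        delta = 2 * np.pi * n * d / wavelength_nm   # phase thickness
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer                               # characteristic matrix
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)             # amplitude reflectance
    return abs(r) ** 2

# Example: a 100 nm organic layer (n ~ 1.8) on glass, across the visible range.
for wl in (450, 550, 650):
    print(wl, "nm -> R =", round(stack_reflectance([1.8], [100.0], wl), 3))
```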
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming two to four times faster than the global average, resulting in strong feedback on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline is migrating northwards into tundra areas. The consequences of these ecosystem shifts are complex: on the one hand, boreal forests store large amounts of global terrestrial carbon and act as a carbon sink, drawing carbon dioxide out of the global carbon cycle, suggesting enhanced carbon uptake with increased tree cover. On the other hand, the establishment of trees lowers the albedo of tundra, leading to enhanced soil warming. Meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Most land plants live in close symbiosis with bacterial and fungal communities, which sustain their growth in nutrient-poor habitats. However, the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore indispensable when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the effects of warming on boreal ecosystems; it is also the basis for agricultural strategies to provide society with sufficient food in a future warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). Research findings across these projects enable comprehensive new insights into the relationships of soil microorganisms to the surrounding vegetation. First, this was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S), amplifying the ITS1 region, was applied to samples from five boreal and arctic lakes. The data showed that the establishment of fungal communities is affected by warming, as the functional ecological groups shift: yeast and saprotroph dominance during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. Overall species richness also changed over time. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), yielding results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed to a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
The second objective was to trace fungus-plant interaction changes over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II). Further, shotgun sequencing data were compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, though a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation. Yeast and lichen were mainly dominant during the Late Glacial with tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhizae and parasite abundance. In addition, we highlighted that Pinaceae establishment depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species, also on long-term scales.
The third objective of the thesis was to assess soil community development on a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and the soil microbial community dynamics were compared to ecosystem turnover. In parallel, podzolization processes from basaltic bedrock were reconstructed (Manuscript III). Additionally, the recovered soil microbiome was compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon, and is as such comparable between geographic locations, or whether community establishment is driven by abiotic soil properties and thus by the bedrock. We showed that the development of soil communities is driven to a great extent by vegetation changes and temperature variation, while time plays only a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the species-level microbiome was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences were revealed between herbaceous taxa of the Late Glacial and taxa of the Holocene.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial timescales, as this furthers the understanding of long-term ecosystem and soil development processes, and thus of plant establishment. Further, I trace the long-term processes leading to podzolization, which supports the development of applied carbon capture strategies under future global warming.
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency. This required consistent age calculations across all lakes and proxies using one age-depth modeling software. Recognizing the broader implications and potential pitfalls of this, we developed the LANDO approach ("Linked Age and Depth Modelling"). LANDO is an innovative integration of multiple age-depth modeling software into a singular, cohesive platform (Jupyter Notebook). Beyond its ability to aggregate data from five renowned age-depth modeling software, LANDO uniquely empowers users to filter out implausible model outcomes using robust geoscientific data. Our method is not only novel but also significantly enhances the accuracy and reliability of lake analyses.
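The depth-to-age conversion underlying this common frame of reference can be sketched simply; the age-depth model values below are hypothetical, and LANDO itself orchestrates several modeling tools before this final lookup step:

```python
# Minimal sketch: interpolate a (hypothetical) age-depth model at the depths
# of proxy samples, converting a depth-dependent series into an age-dependent one.
import numpy as np

# hypothetical age-depth model: depth [cm] -> calibrated age [cal yr BP]
model_depth = np.array([0, 50, 120, 200, 310])
model_age = np.array([-60, 950, 3200, 7800, 14500])

proxy_depth = np.array([10.0, 75.0, 150.0, 250.0])   # sample depths [cm]
proxy_toc = np.array([2.1, 3.4, 1.8, 0.9])           # e.g. TOC [%]

proxy_age = np.interp(proxy_depth, model_depth, model_age)
for d, a, v in zip(proxy_depth, proxy_age, proxy_toc):
    print(f"{d:5.0f} cm -> {a:7.0f} cal yr BP, TOC = {v:.1f} %")
```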
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, even if the highest temperatures throughout our observation period do not correlate with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Considering the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding our dataset might offer even greater assurance in our conclusions.
The biosecurity individual
(2024)
Discoveries in biomedicine and biotechnology, especially in diagnostics, have made prevention and (self)surveillance increasingly important in the context of health practices. Frederike Offizier offers a cultural critique of the intersection between health, security and identity, and explores how the focus on risk and security changes our understanding of health and transforms our relationship to our bodies. Analyzing a wide variety of texts, from life writing to fiction, she offers a critical intervention on how this shift in the medical gaze produces new paradigms of difference and new biomedically facilitated identities: biosecurity individuals.
How do the rights of same-sex couples have to be ensured by states, and which kind of environmental obligations are induced by the right to life and to personal integrity? Questions as diverse and far-reaching as these are regularly dealt with by the Inter-American Court of Human Rights in its advisory function. This book is the first comprehensive, non-Spanish-written treatise on the advisory function of this Court. It analyzes the scope of the Court's advisory jurisdiction and its procedural practice in comparison with that of other international courts. Moreover, the legal effects of the Court’s advisory opinions and the question when the Court should better reject a request for an advisory opinion are examined.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, an antenna emits short electromagnetic pulses into the subsurface, where they are reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Owing to the high resolution of the GPR method and continuous technological advancements, GPR data are increasingly acquired in three-dimensional (3D) fashion today.
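For orientation, the standard textbook relation behind such depth estimates (not specific to this thesis) converts the two-way travel time t of a reflection into depth z, assuming a horizontal reflector and a constant wave velocity v:

\[
  z = \frac{v\,t}{2}, \qquad v \approx \frac{c}{\sqrt{\varepsilon_r}},
\]

where c is the speed of light in vacuum and ε_r the relative permittivity of the subsurface material.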
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism that drives species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and across ontogeny. I first produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared gene expression patterns between electric organs and skeletal muscle in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. Several candidate genes, e.g. KCNJ2, KLF5, KCNK6 and KCNQ5, may contribute to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids whose EOD changes over development.
I next explored allele-specific expression in intragenus hybrids obtained by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in the regulation of EOD duration. Our results help us understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to the Afrotheria, one of the four major clades of placental mammals. Sengis currently comprise twenty extant species, all of which are endemic to the African continent. They can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are found exclusively in forest habitats, the soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering the answering of basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh-tissue samples for DNA extraction. The broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library protocols nowadays offer the opportunity to access DNA from museum specimens, collected over the past centuries and stored in natural history museums throughout the world. Thus, the difficulties in fresh-sample acquisition for molecular biological studies can be overcome by the application of museomics, the research field which emerged from those laboratory developments.
This thesis uses fresh-tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of macroscelidean evolutionary history. Chapter 4 focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh-tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore focuses on the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis. Soft-furred sengis, especially, are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, for example, speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative, Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive, multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus, some of which may correspond to previously described subspecies, while at least one was previously unknown. It underscores the necessity of revising the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions driven by climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh-tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, two of which were assembled de novo. An extensive dataset of more than 8000 protein-coding one-to-one orthologs allows us to further refine and confirm the time frame of sengi evolution established in Chapter 4. The study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of potentially outstanding evolutionary importance and links them to the phenotypic traits they may affect. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis with that of other small African mammals does not reveal a sengi-specific pattern.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose models that address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budget, and data characteristics (a toy sketch of this idea follows the list). To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we used the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
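To make the LP idea from contribution (1) concrete, the following toy sketch shows the flavor of such a formulation. It is deliberately limited to a single tuning dimension (compression per chunk), uses invented costs and frequencies, and relies on the open-source PuLP solver; the thesis's actual models jointly cover compression, ordering, indexing, and tiering.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # Invented per-chunk options: (name, memory cost, relative scan cost).
    options = [("uncompressed", 100, 1.0),
               ("dictionary",    40, 1.4),
               ("run-length",    25, 2.0)]
    freq   = [5, 1, 1, 9, 2, 1, 1, 3]   # accesses per chunk (workload)
    chunks = range(len(freq))
    budget = 450                        # total memory budget

    prob = LpProblem("chunk_tuning", LpMinimize)
    x = {(c, o): LpVariable(f"x_{c}_{o}", cat=LpBinary)
         for c in chunks for o in range(len(options))}

    # Minimise workload-weighted access cost ...
    prob += lpSum(freq[c] * options[o][2] * x[c, o]
                  for c in chunks for o in range(len(options)))
    # ... picking exactly one configuration per chunk ...
    for c in chunks:
        prob += lpSum(x[c, o] for o in range(len(options))) == 1
    # ... while staying within the memory budget.
    prob += lpSum(options[o][1] * x[c, o]
                  for c in chunks for o in range(len(options))) <= budget
    prob.solve()

The solver then keeps hot chunks uncompressed and compresses cold ones, which is exactly the kind of fine-grained, workload-driven trade-off the thesis automates.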
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely extraction to functional languages using the Coq extraction plugin and extraction to Clight code using the CertiCoq plugin. It is noted that the implementation of CertiCoq is verified, whereas this is not the case for the Coq extraction plugin. Consequently, there is a correctness guarantee for the generated Clight code which does not hold for code generated by the Coq extraction plugin. Furthermore, the differences between user-space and kernel-space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user-space drivers both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type safety of user-space drivers. Furthermore, it is shown that the code synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, turning the runtimes into compatible components of a Linux kernel module. In addition, justifications for the transformations are provided, and possible further extensions to both plugins as well as solutions to failing garbage collection calls in kernel space are discussed.
The third part presents a proof-of-concept device driver for the Linux Kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq. Furthermore, some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the guarantee of correctness to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC Speaker functionality with the original PC Speaker driver, pointing out weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions they raise.
The last part lists all sources used, separated into scientific literature, documentation and reference manuals, and artifacts, i.e. source code.
The African weakly electric fish genus Campylomormyrus includes 15 described species, mostly native to the Congo River and its tributaries. They are considered sympatric species because their distribution areas overlap. These species generate species-specific electric organ discharges (EODs) varying in waveform characteristics, including duration, polarity, and phase number. They also exhibit pronounced divergence in their snouts, i.e. in length, thickness, and curvature. Diversification in these two phenotypic traits (EOD and snout) has been proposed as a key factor promoting adaptive radiation in Campylomormyrus. The role of EODs as a pre-zygotic isolation mechanism driving sympatric speciation by promoting assortative mating has been examined using behavioral, genetic, and histological approaches. However, the evolutionary effects of snout morphology and its link to species divergence have not been closely examined. Hence, the main objective of this study is to investigate the diversification of snout morphology and its correlation with the EOD, in order to better understand sympatric speciation and its evolutionary drivers in this genus. Moreover, I aim to utilize intragenus and intergenus hybrids of Campylomormyrus to better understand trait divergence as well as the underlying molecular/genetic mechanisms involved in the radiation scenario. To this end, I utilized three different approaches: feeding behavior analysis, diet assessment, and geometric morphometrics. I performed feeding behavior experiments to evaluate the concept of phenotype-environment correlation by testing whether Campylomormyrus species show substrate preferences. The behavioral experiments showed that the short-snouted species prefers a sandy substrate, the long-snouted species prefers a stone substrate, and the species with intermediate snout size exhibits no substrate preference. The experiments suggest that the diverse feeding apparatus in the genus Campylomormyrus may have evolved in adaptation to their microhabitats. I also performed diet assessments of sympatric Campylomormyrus species and a species of the sister genus (Gnathonemus petersii) with markedly different snout morphologies and EODs, using NGS-based DNA metabarcoding of their stomach contents. The diet of each species was documented, showing that aquatic insects such as dipterans, coleopterans and trichopterans represent the major diet component. The results also showed that all species are able to exploit diverse food niches in their habitats. However, comparing diet overlap indices showed that different snout morphologies and the associated divergence in the EOD translate into different prey spectra. These results further support the idea that the EOD could be a ‘magic trait’ triggering both adaptation and reproductive isolation. Geometric morphometrics was also used to compare the phenotypic shape traits of F1 intragenus (Campylomormyrus) and intergenus (Campylomormyrus and Gnathonemus petersii) hybrids relative to their parents. The hybrids of these species were well separated based on morphological traits; however, the hybrid phenotypes were closer to the short-snouted species. In addition, the likelihood that the short snout is expressed in hybrids increases with the genetic distance of the parental species. The results confirmed that additive effects produce intermediate phenotypes in F1 hybrids.
It seems, therefore, that morphological shape traits in hybrids, unlike physiological traits, are not expressed in a straightforward manner.
Ribosomes decode mRNA to synthesize proteins. Once considered static executing machines, ribosomes are now viewed as dynamic modulators of translation. Increasingly detailed analyses of structural ribosome heterogeneity have led to a paradigm shift toward ribosome specialization for selective translation. As sessile organisms, plants cannot escape harmful environments and have evolved strategies to withstand them. Plant cytosolic ribosomes are in some respects more diverse than those of metazoans. This diversity may contribute to plant stress acclimation. The goal of this thesis was to determine whether plants use ribosome heterogeneity to regulate protein synthesis through specialized translation. I focused on temperature acclimation, specifically on shifts to low temperatures. During cold acclimation, Arabidopsis ceases growth for seven days while establishing the responses required to resume growth. Earlier results indicate that ribosome biogenesis is essential for cold acclimation. REIL mutants (reil-dkos) lacking a 60S maturation factor do not acclimate successfully and do not resume growth. Using these genotypes, I ascribed cold-induced defects of ribosome biogenesis to the assembly of the polypeptide exit tunnel (PET) by performing spatial statistics of rProtein changes mapped onto the plant 80S structure. I discovered that growth cessation and PET remodeling also occur in barley, suggesting a general cold response in plants. Cold-triggered PET remodeling is consistent with the function of Rei-1, a REIL homolog in yeast, which performs PET quality control. Using seminal data on ribosome specialization, I show that yeast remodels the tRNA entry site of ribosomes upon a change of carbon source and demonstrate that spatially constrained remodeling of ribosomes in metazoans may modulate protein synthesis. I argue that regional remodeling may be a form of ribosome specialization and show that heterogeneous cytosolic polysomes accumulate after cold acclimation, leading to shifts in translational output that differ between wild type and reil-dkos. I found that heterogeneous complexes consist of newly synthesized and reused proteins. I propose that tailored ribosome complexes enable free 60S subunits to select specific 48S initiation complexes for translation. Through ribosome remodeling, cold-acclimated ribosomes synthesize a novel proteome consistent with known mechanisms of cold acclimation. The main hypothesis arising from my thesis is that heterogeneous/specialized ribosomes alter translation preferences, adjust the proteome and thereby activate plant programs for successful cold acclimation.
The role of biogenic carbonate producers in the evolution of the geometries of carbonate systems has been the subject of numerous research projects. Attempts to classify modern and ancient carbonate systems by their biotic components have led to a broad discrimination of biogenic carbonate producers into Photozoans, which are characterised by an affinity for warm tropical waters and a high dependence on light penetration, and Heterozoans, which are generally associated with both cool-water environments and nutrient-rich settings with little to no light penetration. These broad categories of carbonate sediment producers have also been recognised to dominate in specific carbonate systems. Photozoans are commonly dominant in flat-topped platforms with steep margins, while Heterozoans generally dominate carbonate ramps. However, comparatively little is known about how these two main groups of carbonate producers interact in the same system and impact depositional geometries in response to changes in environmental conditions such as sea-level fluctuation, antecedent slope, and sediment transport processes. This thesis presents numerical models to investigate the evolution of Miocene carbonate systems in the Mediterranean from two shallow-marine domains: 1) a Miocene flat-topped platform dominated by Photozoans, with a significant component of Heterozoans on the slope, and 2) a Heterozoan distally steepened ramp with a seagrass-influenced (Photozoan) inner ramp. The overarching aim of the three articles comprising this cumulative thesis is to provide a numerical study of the role of Photozoans and Heterozoans in the evolution of carbonate system geometries and of how these biotas respond to changes in environmental conditions. This aim was achieved using stratigraphic forward modelling, which provides an approach to quantitatively integrate multi-scale datasets to reconstruct sedimentary processes and products during the evolution of a sedimentary system.
In a Photozoan-dominated carbonate system, such as the Miocene Llucmajor platform in the Western Mediterranean, stratigraphic forward modelling, dovetailed with a robust set of sensitivity tests, reveals how the geometry of the carbonate system is determined by the complex interaction of Heterozoan and Photozoan biotas in response to variable conditions of sea-level fluctuation, substrate configuration, sediment transport processes, and the dominance of Photozoan over Heterozoan production. This study provides an enhanced understanding of the different carbonate systems that are possible under different ecological and hydrodynamic conditions. The research also gives insight into the roles of different biotic associations in the evolution of carbonate geometries through time and space. The results further show that the main driver of platform progradation in a Llucmajor-type system is the lowstand production of Heterozoan sediments, which forms the necessary substratum for Photozoan production.
In Heterozoan systems, sediment production is mainly characterised by high-transport deposits that are prone to redistribution by waves and gravity, thereby precluding the development of steep margins. However, in the Menorca ramp, sediment trapping by seagrass led to the evolution of distal slope steepening. We investigated, through numerical modelling, how such a seagrass-influenced ramp responds to the frequency and amplitude of sea-level changes, variable carbonate production between the euphotic and oligophotic zones, and changes in the configuration of the paleoslope. The study reinforces some previous hypotheses and presents alternative scenarios to established concepts of high-transport ramp evolution. The results of sensitivity experiments show that steep slopes are favoured in ramps that develop under high-frequency sea-level fluctuations with amplitudes between 20 m and 40 m. We also show that ramp profiles are significantly impacted by paleoslope inclination, such that an optimal antecedent slope of about 0.15 degrees is required for the Menorca distally steepened ramp to develop.
The third part presents an experimental case arguing for the existence of a Photozoan sediment threshold required for the development of steep margins in carbonate platforms. This was carried out by developing sensitivity tests on the forward models of the flat-topped (Llucmajor) platform and the distally steepened (Menorca) ramp. The results show that models with a Photozoan sediment proportion below a threshold of about 40% are incapable of forming steep slopes. The study also demonstrates that although it is possible to develop steep margins by seagrass sediment trapping, such slopes can only be stabilized by the appropriate sediment fabric and/or microbial binding. In the Photozoan-dominated system, the magnitude of slope steepness depends on the proportion of Photozoan sediments in the system. Therefore, this study presents a novel tool for characterizing carbonate systems based on their biogenic components.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, which powers an active galactic nucleus jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters in which cooling is balanced by Alfvénic wave heating. To verify modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Recent observations of this kind provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing the observations with our simulations, we confirm the validity of these modeling assumptions and, as such, confirm the important finding of low-(momentum) density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link and thereby constrain their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density, and feedback efficiency, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
Mitochondria and plastids are organelles of endosymbiotic origin. During evolution, many genes were lost from the organellar genomes and became integrated into the nuclear genome, in what is known as intracellular/endosymbiotic gene transfer (IGT/EGT). IGT has been reproduced experimentally in Nicotiana tabacum at a gene transfer rate (GTR) of 1 event in 5 million cells, but, despite its centrality to eukaryotic evolution, no genetic factors are known to influence the frequency of IGT in higher eukaryotes. The focus of this work was to determine the role of different DNA double-strand break repair (DSBR) pathways in the integration step of organellar DNA into the nuclear genome during IGT. Here, a CRISPR/Cas9 mutagenesis strategy was implemented in N. tabacum with the aim of generating mutants in nuclear genes without expected visible phenotypes. This strategy led to a collection of independent mutants in the LIG4 gene (necessary for non-homologous end joining, NHEJ) and the POLQ gene (necessary for microhomology-mediated end joining, MMEJ). Targeting of other DSBR genes (KU70, KU80, RPA1C) generated mutants with unexpectedly strong developmental phenotypes. These factors have telomeric roles, hinting at a possible relationship between telomere length and the strength of developmental disruption upon loss of telomere structure in plants. The mutants were made in a genetic background encoding a plastid-encoded IGT reporter that confers kanamycin resistance upon transfer to the nucleus. Through large-scale independent experiments, increased IGT from the chloroplast to the nucleus was observed in lig4 mutants, as well as in lines encoding a POLQ gene with a defective polymerase domain (polqΔPol). This shows that NHEJ and MMEJ have a double-sided relationship with IGT: while transferred genes may integrate using either pathway, the presence of both pathways suppresses IGT in wild-type somatic cells, demonstrating for the first time the extent to which nuclear genes control IGT frequency in plants. The increased IGT frequency in the mutants is likely mediated by an increased availability of double-strand breaks for integration. Additionally, kinetic analysis reveals that gene transfer (GT) events accumulate linearly as a function of time spent under antibiotic selection, demonstrating that, contrary to what was previously thought, there is no single GTR in somatic IGT experiments. Furthermore, IGT in tissue culture experiments appears to be the result of a "race against the clock" for integration into the nuclear genome, which starts when the organellar DNA arrives in the nucleus and grants transient antibiotic resistance. GT events and escapes from kanamycin selection are the two possible outcomes of this race: instances where the organellar DNA integrates are recovered as GT events, whereas in cases where timely integration fails, antibiotic resistance cannot be sustained and the cells end up counted as escapes. In the mutants, increased opportunities for integration into the nuclear genome change the overall ratio between IGT and escape events.
The resources generated here are promising starting points for future research: (1) the mutant collection, for the further study of processes that depend on DNA repair in plants; (2) the collection of GT lines obtained from these experiments, for studying the effect of DSBR pathways on the integration patterns and stability of transferred genes; and (3) the CRISPR/Cas9 workflow developed for mutant generation, to help N. tabacum meet its potential as an attractive model for answering complex biological questions.
The G protein-coupled estrogen receptor (GPER1) is acknowledged as an important mediator of estrogen signaling. Given the ubiquitous expression of GPER1, it is likely that the receptor plays a role in a variety of malignancies, not only in the classic hormonally regulated tissues (e.g., breast, ovary, and prostate) but also in the colon. As colorectal cancer (CRC) is the third most common cancer in both men and women worldwide, and environmental factors and dietary habits are important risk factors, it is increasingly recognized that natural and synthetic hormones and their associated receptors might play a role in CRC. Through oral consumption, environmental contaminants with endocrine activity come into contact with the gastrointestinal mucosa, where they might exert their toxic effects. Although GPER1 has been shown to be engaged in physiological and pathophysiological processes, its role in CRC remains poorly understood; both pro- and anti-tumorigenic effects are described in the literature. This thesis has uncovered novel roles of GPER1 in mediating major CRC-associated phenotypes in transformed and non-transformed colon cell lines. Exposure to the estrogens 17β-estradiol (E2), bisphenol-A (BPA) and diethylstilbestrol (DES), but also the androgen dihydrotestosterone (DHT), resulted in GPER1-dependent induction of supernumerary centrosomes, whole-chromosomal instability (w-CIN) and aneuploidy. Indeed, both knockdown and inhibition of GPER1 attenuated the generation of (xeno)hormone-driven supernumerary centrosomes and karyotype instability. Mechanistically, (xeno)hormone-induced centrosome amplification was associated with transient multipolar mitosis and the generation of so-called anaphase “lagging” chromosomes. The results of this thesis suggest a GPER1/PKA/AKAP9 pathway regulating centrosome numbers in colorectal cancer cells, with involvement of the centriolar protein centrin. Remarkably, exposure to (xeno)hormones resulted in atypical enlargement and unexpected phosphorylation of the centriole marker centrin in interphase. These findings reveal a novel role for GPER1 in key CRC-prone lesions and shed light on underlying mechanisms that involve GPER1 function in the colon. Elucidating to what extent centrosomal proteins are involved in the GPER1-mediated aneugenic effect will be an important task for future studies. The present study was intended to lay a first foundation for understanding the molecular basis and potential risk factors of CRC, which might help to reduce the use of laboratory animals. Since numerous animal experiments are conducted in biomedical research, the development of alternative methods is indispensable. The Federal Institute for Risk Assessment (BfR), as the German Center for the Protection of Laboratory Animals (Bf3R), addresses this issue by uncovering underlying mechanisms leading to colorectal cancer as a necessary prerequisite for developing alternative methods.
Photosynthesis converts light into metabolic energy, which fuels plant growth. In nature, many factors influence light availability for photosynthesis on different time scales, from shading by leaves within seconds up to seasonal changes over months. Variability of the light energy supply for photosynthesis can limit a plant's biomass accumulation. Plants have evolved multiple strategies to cope with strongly fluctuating light (FL). These range from long-term optimization of leaf morphology and physiology and of levels of pigments and proteins, in a process called light acclimation, to rapid changes in protein activity within seconds. Uncovering how plants deal with FL on different time scales may therefore provide key ideas for improving crop yield. Photosynthesis is not an isolated process but tightly integrates with metabolism through mutual regulatory interactions. We thus require a mechanistic understanding of how long-term light acclimation shapes both dynamic photosynthesis and its interactions with downstream metabolism. To approach this, we analyzed the influence of growth light on i) the function of the known rapid photosynthesis regulators KEA3 and VCCN1 in dynamic photosynthesis (Chapters 2-3) and ii) the interconnection of photosynthesis with photorespiration (PR; Chapter 4).
We approached topic (i) by quantifying the effect of different growth light regimes on photosynthesis and photoprotection using kea3 and vccn1 mutants. Firstly, we found that, besides photosynthetic capacity, the activities of VCCN1 and KEA3 during a sudden high-light phase also correlated with growth light intensity. This finding suggests regulation of both proteins by the capacity of downstream metabolism. Secondly, we showed that KEA3 accelerates photoprotective non-photochemical quenching (NPQ) kinetics in two ways: directly, by downregulating the lumen proton concentration and thereby de-activating pH-dependent NPQ, and indirectly, by suppressing accumulation of the photoprotective pigment zeaxanthin.
For topic (ii), we analyzed the role of PR, a process which recycles 2-phosphoglycolate (2PG), a toxic byproduct of the carbon fixation reactions, in metabolic flexibility in a dynamically changing light environment. For this we employed the mutants hpr1 and ggt1, each with a partial block in PR. We characterized the function of PR during light acclimation by tracking molecular and physiological changes in the two mutants. Our data, in contrast to previous reports, disprove a generally stronger physiological relevance of PR under dynamic light conditions. Additionally, the two mutants showed pronounced and distinct metabolic changes during acclimation to a condition inducing higher photosynthetic activity. This underlines that PR cannot be regarded purely as a cyclic detoxification pathway for 2PG. Instead, PR is highly interconnected with plant metabolism, with GGT1 and HPR1 representing distinct metabolic modulators.
In summary, the presented work provides further insight into how energetic and metabolic flexibility is ensured by short-term regulators and PR during long-term light acclimation.
The light reactions of photosynthesis are carried out by a series of multiprotein complexes embedded in thylakoid membranes. Among them, photosystem I (PSI), acting as plastocyanin-ferredoxin oxidoreductase, catalyzes the final reaction. Together with light-harvesting antenna I, PSI forms a high-molecular-weight supercomplex of ~600 kDa, consisting of eighteen subunits and nearly two hundred co-factors. Assembly of the various components into a functional thylakoid membrane complex requires precise coordination, which is provided by the assembly machinery. Although this machinery includes a small number of proteins (PSI assembly factors) that have been shown to play a role in the formation of PSI, the process as a whole, as well as the interplay of its members, remains largely unexplored.
In the present work, two approaches were used to find candidate PSI assembly factors. First, EnsembleNet was used to select proteins thought to be functionally related to known PSI assembly factors in Arabidopsis thaliana (approach I), and second, co-immunoprecipitation (Co-IP) of tagged PSI assembly factors in Nicotiana tabacum was performed (approach II).
Here, the novel PSI assembly factors designated CO-EXPRESSED WITH PSI ASSEMBLY 1 (CEPA1) and Ycf4-INTERACTING PROTEIN 1 (Y4IP1) were identified. A. thaliana null mutants for CEPA1 and Y4IP1 showed a growth phenotype and pale leaves compared with the wild type. Biophysical experiments using pulse amplitude modulation (PAM) revealed insufficient electron transport on the PSII acceptor side. Biochemical analyses revealed that both CEPA1 and Y4IP1 are specifically involved in PSI accumulation in A. thaliana at the post-translational level but are not essential. Consistent with their roles as factors in the assembly of a thylakoid membrane protein complex, the two proteins localize to thylakoid membranes. Remarkably, cepa1 y4ip1 double mutants exhibited lethal phenotypes in early developmental stages under photoautotrophic growth. Finally, co-IP and native gel experiments supported a possible role for CEPA1 and Y4IP1 in mediating PSI assembly in conjunction with other PSI assembly factors (e.g., PPD1- and PSA3-CEPA1 and Ycf4-Y4IP1). The fact that CEPA1 and Y4IP1 are found exclusively in green algae and higher plants suggests eukaryote-specific functions. Although the specific mechanisms need further investigation, CEPA1 and Y4IP1 are two novel assembly factors that contribute to PSI formation.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy in which we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, much like fossil records, can unveil the history of the genesis of the Galaxy. The processes that lead to the formation of large structures, such as the Milky Way, are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, though no less important, breakthrough is the APOGEE survey, which has observed spectra in the near-infrared, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way's structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy based on stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
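Schematically, Bayesian isochrone fitting of this kind evaluates a posterior over intrinsic stellar parameters, for instance distance d, age τ, and line-of-sight extinction A_V, given the observed data x (parallax, magnitudes, spectroscopic parameters); this is the generic textbook form, and StarHorse's exact parameterisation may differ:

\[
  p(d, \tau, A_V \mid \mathbf{x}) \;\propto\; p(\mathbf{x} \mid d, \tau, A_V)\, p(d, \tau, A_V),
\]

with the likelihood evaluated against stellar evolutionary models (isochrones) and the prior encoding, for example, the Galaxy's stellar density and extinction structure.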
We show that by combining Gaia, APOGEE and photometric surveys, and using StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner Milky Way, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits showing both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, up to three times solar metallicity, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and subgiant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age as a reflection of the different enrichment timescales of these elements. For a further study of the disks in the solar neighbourhood, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
Built on groundbreaking data, this thesis provides a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. Outside our local environment within the Milky Way, however, we cannot measure CRs directly. Instead, there are many ways to observe CRs indirectly via the radiation they emit as they interact with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
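The assumed steady state can be written schematically as a balance between injection and cooling. Neglecting spatial transport and escape (a simplification relative to the full framework developed in the thesis), a CR spectrum f(p) with injection rate Q(p) and momentum loss rate b(p) = -dp/dt obeys

\[
  -\frac{\partial}{\partial p}\bigl[\,b(p)\,f(p)\,\bigr] = Q(p)
  \;\;\Longrightarrow\;\;
  f(p) = \frac{1}{b(p)} \int_{p}^{\infty} Q(p')\,\mathrm{d}p' ,
\]

so the spectral shape at each momentum follows from the injected particles at higher momenta that cool down through it.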
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies that include CRs. This allows me to study their CR spectra and compare them to the CR proton and electron spectra measured by the Voyager 1 probe and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations between the radio and gamma-ray luminosities and the far-infrared (FIR) emission of star-forming (SF) galaxies, where the latter is a good tracer of the star-formation rate. I find that highly star-forming galaxies are close to the limit where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape more quickly from galaxies with low star formation. On top of that, I investigate the properties of CR transport that are needed to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectrally and spatially.
Creativity-intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, and products must be sensibly adapted to changing times. To analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically, and analyze them for adaptation possibilities. For creative work, this has so far been possible only to a very limited extent. An accurate understanding of creative work faces the challenge that such work is, on the one hand, usually very complex and iterative, and on the other hand at least partially unpredictable, as new things emerge. How can the complexity of creative business processes be adequately addressed and at the same time kept manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. A model is developed with which four elementary process components can be analyzed: Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details. The extension was developed using the ethnographic data, then applied to other organizational process contexts and evaluated by external parties in two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. Transformed into a modeling method, it can be comprehensively integrated into process models, thus expanding the understanding of existing creative work in as-is process analyses.
The musculoskeletal system supports the body and enables movement, and its deterioration is a crucial aspect of age-related functional decline. Mesenchymal stromal cells (MSCs) play an important role in musculoskeletal homeostasis due to their broad differentiation potential and their ability to support osteogenic and myogenic tissue maintenance and regeneration. In bone, MSCs differentiate either into osteochondrogenic progenitors, which form osteocytes and chondrocytes, or, increasingly with age, into adipogenic progenitors, which give rise to bone-resident adipocytes. In skeletal muscle, during healthy regeneration, MSCs provide regulatory signals that activate local, tissue-specific stem cells, known as satellite cells, which regenerate contractile myofibres. This process involves significant cross-talk with immune cells of both lymphoid and myeloid lineages. During ageing, muscle-resident MSCs undergo increased adipogenic lineage commitment, causing niche changes that contribute to fatty infiltration of muscles. In bone, these shifts in cell populations lead to the loss of osteogenic cells and subsequently to osteoporosis; in muscle, they lead to impaired regeneration and the development of sarcopenia. However, the signals that drive the transition of MSCs into their respective cellular fates remain elusive.
This thesis aims to elucidate the transcriptional shifts modulating cell states and cell types in musculoskeletal MSC fate determination. Single-cell RNA-sequencing (scRNA-seq) was used to characterise cell type-specific transcript regulation. State-of-the-art bioinformatics tools were combined with different analytical platforms that include both droplet-based scRNA-seq for large heterogeneous populations, and microfluidics-based scRNA-seq to assess small, rare subpopulations. For each platform, distinct computational pipelines were established including filtering steps to exclude low-quality cells, and data visualisation was performed by dimensionality reduction. Downstream analysis included clustering, cell type annotation, and differential gene expression to investigate transcriptional states in defined cell types during ageing and injury in the muscle and bone. Finally, a novel tool to assess publication activities in defined areas of research for the identified marker genes was developed.
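The thesis names the platforms but not specific software; as an illustration of the steps listed above (quality filtering, dimensionality reduction, clustering, and marker-based annotation), a minimal pipeline using the widely used scanpy toolkit might look as follows, with the input file name purely hypothetical.

    import scanpy as sc

    # Hypothetical droplet-based scRNA-seq count matrix in AnnData format.
    adata = sc.read_h5ad("muscle_mscs.h5ad")

    # Quality filtering: drop low-quality cells and rarely detected genes.
    sc.pp.filter_cells(adata, min_genes=200)
    sc.pp.filter_genes(adata, min_cells=3)

    # Normalise, log-transform, and reduce dimensionality.
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=2000)
    sc.pp.pca(adata, n_comps=50)

    # Neighbour graph, clustering, and 2D embedding for visualisation.
    sc.pp.neighbors(adata)
    sc.tl.leiden(adata)
    sc.tl.umap(adata)

    # Rank differentially expressed genes per cluster for annotation.
    sc.tl.rank_genes_groups(adata, groupby="leiden")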
The results in the bone indicate that ageing MSCs increasingly commit towards an adipogenic fate at the expense of osteogenic specialisation. The data also suggests that significant cell population shifts of MSC-type fibro-adipogenic progenitors during muscle ageing underlie the pathologies observed in homeostatic and post-injury regenerative conditions. High-throughput visualisation of publication activity for candidate genes enabled more effective biological evaluation of scRNA-seq data. These results expose critical age-related changes in the stem cell niches of skeletal muscle and bone, highlight their respective sensitivity to nutrition and pathology, and elucidate novel factors that modulate stem cell-based regeneration. Targeting these processes might improve musculoskeletal health in the context of ageing and prevent the negative effects of pathological lineage determination.
Insight by de—sign
(2023)
The calculus of design is a diagrammatic approach towards the relationship between design and insight. The thesis I am evolving is that insights are not discovered, gained, explored, revealed, or mined, but are operatively de—signed. The de in design neglects the contingency of the space towards the sign. The — is the drawing of a distinction within the operation. Space collapses through the negativity of the sign; the command draws a distinction that neglects the space for the form's sake. The operation to de—sign is counterintuitively not the creation of signs, but their removal, the exclusion of possible sign propositions of space. De—sign is thus an act of exclusion; the possibilities of space are crossed into form.
Many complex systems that we encounter in the world can be formalized as networks. Consequently, they have been a focus of computer science for decades, with algorithms developed to understand and utilize these systems.
Surprisingly, our theoretical understanding of these algorithms and their behavior in practice often diverge significantly. In fact, they tend to perform much better on real-world networks than one would expect when considering the theoretical worst-case bounds. One way of capturing this discrepancy is the average-case analysis, where the idea is to acknowledge the differences between practical and worst-case instances by focusing on networks whose properties match those of real graphs. Recent observations indicate that good representations of real-world networks are obtained by assuming that a network has an underlying hyperbolic geometry.
In this thesis, we demonstrate that the connection between networks and hyperbolic space can be utilized as a powerful tool for average-case analysis. To this end, we first introduce strongly hyperbolic unit disk graphs and identify the famous hyperbolic random graph model as a special case of them. We then consider four problems where recent empirical results highlight a gap between theory and practice and use hyperbolic graph models to explain these phenomena theoretically. First, we develop a routing scheme, used to forward information in a network, and analyze its efficiency on strongly hyperbolic unit disk graphs. For the special case of hyperbolic random graphs, our algorithm beats existing performance lower bounds. Afterwards, we use the hyperbolic random graph model to theoretically explain empirical observations about the performance of the bidirectional breadth-first search. Finally, we develop algorithms for computing optimal and nearly optimal vertex covers (problems known to be NP-hard) and show that, on hyperbolic random graphs, they run in polynomial and quasi-linear time, respectively.
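For readers unfamiliar with the model, the threshold variant of the hyperbolic random graph can be sketched in a few lines: sample n points in a hyperbolic disk of radius R = 2 ln n + C and connect every pair at hyperbolic distance at most R. The sketch below follows this standard construction; the parameter defaults are illustrative, not taken from the thesis.

    import math, random

    def hyperbolic_random_graph(n, alpha=0.75, C=0.0):
        """Sample a threshold hyperbolic random graph: n points in a
        hyperbolic disk of radius R = 2 ln n + C, with edges between all
        pairs at hyperbolic distance <= R. Runs in O(n^2) for simplicity."""
        R = 2 * math.log(n) + C
        pts = []
        for _ in range(n):
            theta = random.uniform(0.0, 2.0 * math.pi)
            # Inverse-CDF sampling of the radial coordinate.
            u = random.random()
            r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
            pts.append((r, theta))
        edges = []
        for i in range(n):
            for j in range(i + 1, n):
                r1, t1 = pts[i]
                r2, t2 = pts[j]
                dt = math.pi - abs(math.pi - abs(t1 - t2))  # angular gap
                cosh_d = (math.cosh(r1) * math.cosh(r2)
                          - math.sinh(r1) * math.sinh(r2) * math.cos(dt))
                if math.acosh(max(cosh_d, 1.0)) <= R:
                    edges.append((i, j))
        return pts, edges

The parameter alpha controls the power-law exponent of the resulting degree distribution, which is what makes the model a good fit for many real-world networks.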
Our theoretical analyses reveal interesting properties of hyperbolic random graphs, and our empirical studies present evidence that these properties, as well as our algorithmic improvements, translate back into practice.
Laser cutting is a fast and precise fabrication process. This makes laser cutting a powerful process in custom industrial production. Since the patents on the original technology started to expire, a growing community of tech-enthusiasts embraced the technology and started sharing the models they fabricate online. Surprisingly, the shared models appear to largely be one-offs (e.g., they proudly showcase what a single person can make in one afternoon). For laser cutting to become a relevant mainstream phenomenon (as opposed to the current tech enthusiasts and industry users), it is crucial to enable users to reproduce models made by more experienced modelers, and to build on the work of others instead of creating one-offs.
We create a technological basis that allows users to build on the work of others—a progression that is currently held back by the use of exchange formats that disregard mechanical differences between machines and therefore overlook implications with respect to how well parts fit together mechanically (aka engineering fit).
For the field to progress, we need a machine-independent sharing infrastructure.
In this thesis, we outline three approaches that together get us closer to this:
(1) 2D cutting plans that are tolerant to machine variations. Our initial take is a minimally invasive approach: replacing machine-specific elements in cutting plans with more tolerant elements using mechanical hacks like springs and wedges. The resulting models fabricate on any consumer laser cutter and in a range of materials.
(2) sharing models in 3D. To allow building on the work of others, we build a 3D modeling environment for laser cutting (kyub). After users design a model, they export it as 2D cutting plans optimized for the machine and material at hand. We extend this volumetric environment with tools to edit individual plates, allowing users to leverage the efficiency of volumetric editing while retaining control over the most detailed elements in laser cutting (plates).
(3) converting legacy 2D cutting plans to 3D models. To handle legacy models, we build software to interactively reconstruct 3D models from 2D cutting plans. This allows users to reuse the models in more productive ways. We revisit this by automating the assembly process for a large subset of models.
Together, the above-mentioned software components form a larger system (kyub, 140,000 lines of code). This system integration enables the push towards actual use, which we demonstrate through a range of workshops where users build complex models such as fully functional guitars. By simplifying sharing and re-use, and through the resulting increase in model complexity, this line of work takes a small step towards enabling personal fabrication to scale past the maker phenomenon, towards a mainstream phenomenon, in the same way that other fields, such as print (PostScript) and ultimately computing itself (portable programming languages, etc.), reached mass adoption.
In this work, binding interactions between biomolecules were analyzed using a technique based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible with the binding sites of the viral surface proteins.
Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on the binding interactions of a peptide from the hACE2 receptor with the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in a complete loss of binding. Interactions of the peptide with inactivated virus material as well as pseudo-virus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies by their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
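For context, the binding rate constants discussed here are typically those of the standard 1:1 Langmuir binding model, a common assumption in surface-based biosensing (the abstract does not specify the exact model used in the thesis):

```latex
\frac{d\Theta}{dt} = k_{\mathrm{on}}\,c\,(1-\Theta) - k_{\mathrm{off}}\,\Theta,
\qquad
K_D = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}},
```

where \(\Theta\) is the fractional occupancy of the surface receptors and \(c\) the analyte concentration. Reaction-limited conditions mean that analyte transport to the sensor surface is fast compared with the binding step, so the fitted rates reflect intrinsic binding kinetics rather than diffusion.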
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions was demonstrated on multiple examples. While challenges remain, the setup enables the determination of affinities between viruses and receptors in their native environment. In particular, the possibilities for quantifying oligo- and multivalent binding interactions were demonstrated.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. The essential functions of Se in the human body are manifested through a wide range of proteins containing selenocysteine as their active center. Such proteins are called selenoproteins and are involved in multiple physiological processes, such as antioxidative defense and the regulation of thyroid hormone functions. Se deficiency is therefore known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. The range between deficiency and overexposure thus represents the optimal Se supply; however, this range was found to be narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this poses a considerable difficulty for the assessment of Se epidemiological status. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic low-molecular-mass species. Thus, Se exposure depends not only on daily intake but also on the respective chemical form in which it is present.
The essential functions of selenium have long been known, and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and potentially strong antioxidant. Studies in populations whose diet largely relies on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. Initial studies conducted with enriched fish extracts demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited by the lack of ways to obtain the pure compound. A prerequisite for this work was a successful approach to SeN synthesis at the University of Graz utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects up to a concentration of 100 μM in hepatocytes. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data for SeN showed slow but substantial transfer: a statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier, although an increase in Se content was clearly visible already after 6 hours of incubation with 1 μM SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer, suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies, in which ET accumulation was observed in the rat brain even though the rat BBB does not possess the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
As the third part of this work, the assessment of SeN antioxidant properties was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine. However, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can serve as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which arise when the sulfur in ET is replaced by Se. Therefore, SeN is of particular interest for research not as part of Se metabolism, but as an important endemic dietary antioxidant.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
In this thesis, we follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems remain well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from the Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. Proceeding, we use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of the Lorentzian boundary conditions, we then find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself will then be proven by using classical tools including energy estimates and approximation by solutions of the regularized problems.
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we discuss two examples of boundary conditions, namely the analogues of the Atiyah-Patodi-Singer and the chirality conditions in our setting. To do this, we take a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
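For orientation, the classical APS condition can be written (up to sign conventions) as a spectral projection condition on the boundary restriction of a spinor:

```latex
P_{[0,\infty)}(A)\bigl(u|_{\partial M}\bigr) = 0,
```

where \(A\) is the induced elliptic operator on the boundary and \(P_{[0,\infty)}(A)\) the spectral projection onto its non-negative eigenspaces. The boundary conditions studied in this thesis replace such spectral projections by more general families of non-local conditions, one for each timeslice of the foliation.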
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organizations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organization's information systems and are used by process experts to retrieve deep insights into the organization's running processes. From the events contained in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and thus not utilized to their full potential.
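To make the event log notion concrete, the following minimal Python sketch builds traces from a toy log and counts directly-follows pairs, the basic relation from which many discovery algorithms (e.g., the alpha miner or heuristic miner) start. The log contents and field names are invented for illustration.

```python
from collections import defaultdict

# A minimal, hypothetical event log: each event carries a case id,
# an activity name, and a timestamp.
event_log = [
    {"case": "order-1", "activity": "register", "ts": "2023-01-02T09:00"},
    {"case": "order-1", "activity": "check stock", "ts": "2023-01-02T09:05"},
    {"case": "order-1", "activity": "ship", "ts": "2023-01-03T10:00"},
    {"case": "order-2", "activity": "register", "ts": "2023-01-02T11:00"},
    {"case": "order-2", "activity": "cancel", "ts": "2023-01-02T11:30"},
]

# Group events into traces: one time-ordered activity sequence per case.
traces = defaultdict(list)
for ev in sorted(event_log, key=lambda e: (e["case"], e["ts"])):
    traces[ev["case"]].append(ev["activity"])

# Count directly-follows pairs across all traces.
dfg = defaultdict(int)
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

print(dict(dfg))
# {('register', 'check stock'): 1, ('check stock', 'ship'): 1, ('register', 'cancel'): 1}
```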
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by existing process mining techniques. Within this gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem relates to the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of historical data manipulation at the process model level of abstraction. Last but not least, each process model is presumed to be independent of other process models when discovered from an event log, thus ignoring possible data dependencies between processes within an organization.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log only from the transactions performed on the database that are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus, complementing the discovered process model with important domain knowledge information. The third method captures, on the process model level, how the data affects the running process instances. Lastly, the fourth method is about the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly representing such complex interdependencies in a business process architecture.
All the methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
Hantaviruses (HVs) are a group of zoonotic viruses that infect human beings primarily through aerosol transmission of rodent excreta and urine. HVs are classified geographically into Old World HVs (OWHVs), found in Europe and Asia, and New World HVs (NWHVs), observed in the Americas. These different strains can cause severe hantavirus diseases with pronounced renal syndrome or severe cardiopulmonary distress. HVs can be extremely lethal, with NWHV infections reaching up to a 40 % mortality rate. HVs are known to generate epidemic outbreaks in many parts of the world, including Germany, which has seen periodic HV infections over the past decade. HV has a trisegmented genome: the small segment (S) encodes the nucleocapsid protein (NP); the middle segment (M) encodes the glycoproteins (GPs) Gn and Gc, which upon independent expression form up to tetramers and primarily monomers and dimers, respectively; and the large segment (L) encodes the RNA-dependent RNA polymerase (RdRp). Interactions between these viral proteins are crucial for providing mechanistic insights into HV virion development. Despite best efforts, there is still a lack of quantification of these associations in living cells, which is required for developing mechanistic models of HV viral assembly. This dissertation focuses on three key questions pertaining to the initial steps of virion formation, which primarily involve the GPs and NP.
The research investigations in this work were conducted using Fluorescence Correlation Spectroscopy (FCS) approaches. FCS is frequently used to assess the biophysical features of biomolecules, including protein concentration and diffusion dynamics, and circumvents the requirement of protein overexpression. In this thesis, FCS was primarily applied to evaluate protein multimerization at single-cell resolution.
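For reference, the standard FCS autocorrelation model for free 3D diffusion (a textbook form; the thesis may use calibrated variants) reads:

```latex
G(\tau) = \frac{1}{\langle N \rangle}
          \left(1 + \frac{\tau}{\tau_D}\right)^{-1}
          \left(1 + \frac{\tau}{\kappa^{2}\tau_D}\right)^{-1/2},
\qquad
D = \frac{w_0^{2}}{4\,\tau_D},
```

where \(\langle N \rangle\) is the mean number of fluorescent molecules in the detection volume, \(\tau_D\) the diffusion time, \(\kappa\) the axial-to-lateral aspect ratio of the focus, and \(w_0\) its lateral radius. The amplitude \(G(0) = 1/\langle N \rangle\) yields the concentration, while the molecular brightness (count rate per molecule) reports on multimerization.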
The first question addressed which GP spike formation model proposed by Hepojoki et al. (2010) appropriately describes the evidence in living cells. A novel in cellulo assay was developed to evaluate the amounts of fluorescently labeled and unlabeled GPs upon co-expression. The results clearly showed that Gn and Gc initially form a heterodimeric Gn:Gc subunit. This subunit then multimerizes with congruent Gn:Gc subunits to generate the final GP spike. Based on these interactions, models describing the formation of the GP complex (with multiple GP spike subunits) were additionally developed.
HV GP assembly primarily takes place in the Golgi apparatus (GA) of infected cells. Interestingly, NWHV GPs are hypothesized to assemble at the plasma membrane (PM). This led to the second research question of this thesis, for which a systematic comparison between OWHV and NWHV GPs was conducted to test this hypothesis. Surprisingly, GP localization at the PM was observed for both OWHV and NWHV GPs. Similar results were also observed for OWHV and NWHV GP localization in the absence of the cytoskeletal factors that regulate HV trafficking in cells.
The final question focused on quantifying the NP-GP interactions and understanding their influence on NP and GP multimerization. Gc multimers were detected in the presence of NP, complemented by localized regions of high NP-Gc interaction in the perinuclear region of living cells. The Gc-CT domain was shown to influence NP-Gc associations. Gn, on the other hand, formed up to tetrameric complexes independently of the presence of NP.
The results in this dissertation shed light on the initial steps of HV virion formation by quantifying homo- and heterotypic interactions involving NP and GPs, measurements that are otherwise very difficult to perform. Finally, the in cellulo methodologies implemented in this work can potentially be extended to study other key interactions involved in HV assembly.
Pichia pastoris (syn. Komagataella phaffii) is a distinguished expression system widely used in industrial production processes. Recent molecular research has focused on numerous approaches to increase recombinant protein yield in P. pastoris, for example, the design of expression vectors and synthetic genetic elements, gene copy number optimization, or the co-expression of helper proteins (transcription factors, chaperones, etc.). However, high clonal variability of transformants and low screening throughput have hampered significant success.
To enhance screening capacities, display-based methodologies harbor the potential for efficient isolation of producer clones via fluorescence-activated cell sorting (FACS). This study therefore focused on developing a novel clone selection method, based on the non-covalent attachment of Fab fragments to the P. pastoris cell surface, that is applicable for FACS.
Initially, a P. pastoris display system was developed, which is a prerequisite for the surface capture of secreted Fabs. A Design of Experiments approach was applied to analyze the influence of various genetic elements on antibody fragment display. The combination of the P. pastoris formaldehyde dehydrogenase promoter (PFLD1), the Saccharomyces cerevisiae invertase 2 signal peptide (ScSUC2), the α-agglutinin (ScSAG1) anchor protein, and the ARS of Kluyveromyces lactis (panARS) conferred the highest display levels.
Subsequently, eight single-chain variable fragments (scFv) specific for the constant part of the Fab heavy or light chain were individually displayed in P. pastoris. Among the tested scFvs, the anti-human CH1 IgG domain scFv allowed the most efficient Fab capture detected by flow cytometry.
Irrespective of the Fab sequence, exogenously added as well as simultaneously secreted Fabs were successfully captured on the cell surface. Furthermore, Fab secretion capacities were shown to correlate to the level of surface-bound Fabs as demonstrated for characterized producer clones.
Flow-sorted clones presenting high amounts of Fabs showed an increase in median Fab titers (by a factor of 21 to 49) compared to unsorted clones when screened in deep-well plates. For selected candidates, improved functional Fab yields of sorted vs. unsorted cells were confirmed in upscaled shake-flask production. Since the scFv capture matrix was encoded on an episomal plasmid with inherently unstable autonomously replicating sequences (ARS), efficient plasmid curing was observed after removing the selective pressure. Hence, sorted clones could be used immediately for production without the need to modify the expression host or vector. The resulting switchable display/secretion system provides a streamlined approach for the isolation of Fab producers and subsequent Fab production.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of its recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests considerable long-term health effects of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of applications, including the development of higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS against reference instrumentation using various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard calibration procedures still do not exist, and most proprietary calibration algorithms are black boxes, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; and 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing the raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating the associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advanced the effort towards standardizing calibration methodologies. In addition, the open-source publication of code and data for the seven-step methodology helped reform the largely black-box nature of LCS calibrations.
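A minimal sketch of steps 4 through 7 is shown below, assuming co-location data with a reference instrument. The file name, column names, and the choice of a random forest are illustrative assumptions, not the published pipeline.

```python
# Hypothetical sketch: calibrate a low-cost NO2 sensor against a
# reference instrument (steps 4-7 of the seven-step methodology).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("colocation.csv", parse_dates=["time"])  # cleaned, flagged data
features = ["raw_no2", "temperature", "rel_humidity"]     # raw signal + covariates

# Time-based split: random splits would leak information because
# consecutive measurements are strongly autocorrelated.
train = df[df["time"] < "2019-06-01"]
test = df[df["time"] >= "2019-06-01"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["ref_no2"])

pred = model.predict(test[features])
rmse = mean_squared_error(test["ref_no2"], pred) ** 0.5   # validation metrics
print(f"RMSE = {rmse:.2f} ug/m3, R2 = {r2_score(test['ref_no2'], pred):.2f}")
```

Reporting the held-out RMSE and R2 alongside the model choice is exactly the kind of per-step disclosure the methodology calls for.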
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second concerned the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess the changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, producing policy-relevant findings valuable for decision-makers. It serves as an example of the potential of LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
The East African Rift System (EARS) is a significant example of active tectonics, providing opportunities to examine the stages of continental faulting and landscape evolution. Its southwestern extension is among the most prominent expressions of this activity today; however, seismotectonic research in the area has been scarce despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of Congo. Lakes Mweru and Mweru Wantipa are part of the southwest extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciling the observed burial patterns with morphotectonic and stratigraphic analysis reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl). High denudation rates (up to ~40 mm ka-1) along the eastern Kundelungu Plateau suggest that footwall uplift resulting from normal faulting caused river incision, possibly controlling paleo-lake drainage. The lake level then fell gradually, reaching its current level at ~350 ka.
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls, the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the Stream Power Law equation, which allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a-1. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we present extrapolation models and estimate the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
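The Stream Power Law referenced here is conventionally written as (standard symbols, not quoted from the thesis):

```latex
E = K\,A^{m}S^{n},
```

where \(E\) is the fluvial incision rate, \(A\) the upstream drainage area, \(S\) the local channel slope, and \(K\) the erodibility constant that the cosmogenic nuclide data constrain. In the detachment-limited case with \(n = 1\), a knickpoint migrates upstream with celerity \(C = K A^{m}\), which is how a calibrated \(K\) translates into the ~0.096 m a-1 retreat rate reported for the Kiubo Waterfalls.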
Soil is today considered a non-renewable resource on societal time scales, as the rate of soil loss exceeds that of soil formation.
Soil formation is complex, can take several thousands of years and is influenced by a variety of factors, one of which is time. Oftentimes, there is the assumption of constant and progressive conditions for soil and/or profile development (i.e., steady-state). In reality, for most soils, their (co-)evolution leads to a complex and irregular soil development in time and space characterised by "progressive" and "regressive" phases.
Lateral transport of soil material (i.e., soil erosion) is one of the principal processes shaping the land surface and soil profile during “regressive” phases and one of the major environmental problems the world faces.
Anthropogenic activities such as agriculture can exacerbate soil erosion. It is therefore of vital importance to distinguish short-term soil redistribution rates (i.e., within decades) influenced by human activities from long-term natural rates. To do so, soil erosion (and denudation) rates can be determined using a set of isotope methods that cover different time scales at the landscape level.
With the aim of unraveling the co-evolution of weathering, soil profile development and lateral redistribution at the landscape level, we used Plutonium-239+240 (239+240Pu), Beryllium-10 (10Be, in situ and meteoric) and radiocarbon (14C) to calculate short- and long-term erosion rates in two settings, i.e., a natural and an anthropogenic environment in the hummocky ground moraine landscape of the Uckermark, north-eastern Germany. The main research questions were:
1. How do long-term and short-term rates of soil redistributing processes differ?
2. Are rates calculated from in situ 10Be comparable to those calculated using meteoric 10Be?
3. How do soil redistribution rates (short- and long-term) in an agricultural and in a natural landscape compare to each other?
4. Are the soil patterns observed in northern Germany purely a result of past events (natural and/or anthropogenic), or are they embedded in ongoing processes?
Erosion and deposition are reflected in a catena of soil profiles, with little or no erosion at flat hilltop positions, strong erosion on the mid-slope, and accumulation of soil material at the toeslope position. These three characteristic process domains were chosen within the CarboZALF-D experimental site, which is characterised by intense anthropogenic activity. Likewise, a hydrosequence in an ancient forest was chosen for this study, regarded as a catena strongly influenced by natural soil transport.
The following main results were obtained using the above-mentioned range of isotope methods available to measure soil redistribution rates depending on the time scale needed (e.g., 239+240Pu, 10Be, 14C):
1. Short-term erosion rates are one order of magnitude higher than long-term rates in agricultural settings.
2. Both meteoric and in situ 10Be are suitable soil tracers for measuring long-term soil redistribution rates, giving similar results in an anthropogenic environment for different landscape positions (e.g., hilltop, mid-slope, toeslope).
3. Short-term rates were extremely low or negligible in the natural landscape and very high in the agricultural landscape, with average values of -0.01 t ha-1 yr-1 and -25 t ha-1 yr-1, respectively. In contrast, long-term rates in the forested landscape are comparable to those calculated in the agricultural area investigated, with average values of -1.00 t ha-1 yr-1 and -0.79 t ha-1 yr-1.
4. Soil patterns observed in the forest might be due to human impact and activities started after the first settlements in the region, earlier than previously postulated, between 4.5 and 6.8 kyr BP, and not a result of recent soil erosion.
5. Furthermore, long-term soil redistribution rates are similar regardless of the setting, meaning that past natural soil mass redistribution processes still overshadow the present anthropogenic erosion processes.
Overall, this study makes important contributions to deciphering the co-evolution of weathering, soil profile development and lateral redistribution in north-eastern Germany. The multi-methodological approach used can be further tested by applying it to a wider range of landscapes and geographic regions.
Unveiling the Local Universe
(2023)
Development of electrochemical antibody-based and enzymatic assays for mycotoxin analysis in food
(2023)
Electrochemical methods are promising for meeting the demand for easy-to-use devices monitoring key parameters in the food industry. Many companies run their own lab procedures for mycotoxin analysis, but simplifying the analysis is a major goal. The enzyme-linked immunosorbent assay using horseradish peroxidase as the enzymatic label, together with 3,3',5,5'-tetramethylbenzidine (TMB)/H2O2 as substrates, allows sensitive mycotoxin detection with optical readout. To miniaturize the detection step, an electrochemical system for mycotoxin analysis was developed. To this end, the electrochemical detection of TMB was studied by cyclic voltammetry on different screen-printed electrodes (carbon and gold) and at different pH values (pH 1 and pH 4). A stable electrode reaction, the basis for the further construction of the electrochemical detection system, could be achieved at pH 1 on gold electrodes. An amperometric detection method for oxidized TMB, using a custom-made flow cell for screen-printed electrodes, was established and applied to a competitive magnetic bead-based immunoassay for the mycotoxin ochratoxin A. A limit of detection of 150 pM (60 ng/L) was obtained, and the results were verified by optical detection. The applicability of the magnetic bead-based immunoassay was tested in spiked beer using a handheld potentiostat connected via Bluetooth to a smartphone for amperometric detection, allowing quantification of ochratoxin A down to 1.2 nM (0.5 µg/L).
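As a quick plausibility check of the reported units (using a literature molar mass of ~403.8 g/mol for ochratoxin A, a value not stated in the text):

```latex
150\,\mathrm{pM} \times 403.8\,\mathrm{g\,mol^{-1}}
  = 1.5\times10^{-10}\,\mathrm{mol\,L^{-1}} \times 403.8\,\mathrm{g\,mol^{-1}}
  \approx 6.1\times10^{-8}\,\mathrm{g\,L^{-1}}
  \approx 60\,\mathrm{ng\,L^{-1}},
```

and likewise \(1.2\,\mathrm{nM} \approx 0.5\,\mu\mathrm{g\,L^{-1}}\), consistent with the values given above.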
Based on the developed electrochemical detection system for TMB, the applicability of the approach was demonstrated with a magnetic bead-based immunoassay for the ergot alkaloid ergometrine. Under optimized assay conditions, a limit of detection of 3 nM (1 µg/L) was achieved, and ergometrine levels in a range from 25 to 250 µg/kg could be quantified in spiked rye flour samples. All results were verified by optical detection. The developed electrochemical detection method for TMB holds great promise for the detection of TMB in many other HRP-based assays.
A new sensing approach, based on an enzymatic electrochemical detection system for the mycotoxin fumonisin B1, was established using an Aspergillus niger fumonisin amine oxidase (AnFAO). AnFAO was produced recombinantly in E. coli as a maltose-binding protein fusion and catalyzes the oxidative deamination of fumonisins, producing hydrogen peroxide. AnFAO was found to have high storage and temperature stability. The enzyme was coupled covalently to magnetic particles, and the H2O2 produced enzymatically in the reaction with fumonisin B1 was detected amperometrically in a flow injection system using Prussian blue/carbon electrodes and the custom-made wall-jet flow cell. Fumonisin B1 could be quantified down to 1.5 µM (≈ 1 mg/L). The developed system represents a new approach to detecting mycotoxins using enzymes and electrochemical methods.
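The detection chemistry follows the generic amine oxidase reaction, shown here schematically (for fumonisin B1 the amine sits on a secondary carbon, so the product is the corresponding 2-keto derivative):

```latex
\mathrm{R{-}CH(NH_2){-}R'} + \mathrm{O_2} + \mathrm{H_2O}
\;\longrightarrow\;
\mathrm{R{-}CO{-}R'} + \mathrm{NH_3} + \mathrm{H_2O_2},
```

with the released H2O2 then reduced amperometrically at the Prussian blue-modified electrode, which catalyzes H2O2 reduction at a low operating potential.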
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation and transport are important processes for achieving environmental and ecological health, as well as for implementing water management plans. Nitrogen is one of the most closely watched elements due to its role in the severe consequences of eutrophication in aquatic systems. Among all nitrogen components, research on nitrate is flourishing because of the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can become a paradigm for other reactive substances that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and its transport within a catchment is valuable for the management of agricultural activities and for municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data at three gauging stations in the Selke river were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment comprises three nested subcatchments with heterogeneous physiographic conditions and land use. With quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships are explored. For example, arable areas carry a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., under strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify target locations and periods for agricultural management actions within the catchment to decrease nitrate export into downstream aquatic systems such as the ocean.
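To illustrate what an event-scale hysteresis index quantifies, the sketch below computes a simple normalized index in the spirit of commonly used definitions (e.g., Lloyd et al. 2016). It is an illustrative implementation, not the dissertation's code.

```python
import numpy as np

def hysteresis_index(q, c, at=0.5):
    """Normalized C-Q hysteresis index for one storm event.

    q, c : discharge and nitrate concentration, ordered in time and
           covering both the rising and the falling limb.
    at   : fraction of the normalized discharge range at which the
           two limbs are compared.
    """
    q, c = np.asarray(q, float), np.asarray(c, float)
    qn = (q - q.min()) / (q.max() - q.min())      # normalize both to [0, 1]
    cn = (c - c.min()) / (c.max() - c.min())
    peak = int(np.argmax(q))
    # interpolate each limb at the same normalized discharge level
    rise = np.interp(at, qn[: peak + 1], cn[: peak + 1])
    fall = np.interp(at, qn[peak:][::-1], cn[peak:][::-1])
    return rise - fall  # > 0: clockwise loop (flushing), < 0: anticlockwise

# toy event: concentration peaks before discharge -> clockwise hysteresis
q = [1, 3, 6, 9, 7, 4, 2, 1]
c = [0.2, 0.8, 1.0, 0.7, 0.5, 0.4, 0.3, 0.2]
print(round(hysteresis_index(q, c), 2))  # positive value
```

A positive index at mid-flow indicates higher concentrations on the rising limb, the "first flush" pattern often linked to near-channel or legacy nitrate sources.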
The capacity of streams to remove nitrate is of both scientific and social interest, which motivates its quantification. Although measurements of nitrate dynamics are advanced compared to those of other substances, methods to directly quantify nitrate uptake pathways remain spatiotemporally limited. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually limits in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain; hence, a better understanding of in-stream nitrate dynamics in large rivers is still necessary. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, we applied this approach in large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its success in disentangling nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation and heterotrophic uptake were disentangled, along with their diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first studies to use a data-model fusion approach to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its responses to drought disturbances from the seasonal to the sub-daily scale. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, winter and early spring exhibited extensive net release; general net uptake then occurred after the annual high-flow season, dominated by autotrophic processes in late spring and early summer and by heterotrophic processes during the late summer-autumn low-flow periods. Net nitrate release occurred from late autumn until the next early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on nitrate dynamics at the stream scale improve the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrology and water quality parameters.
Extreme weather and climate events are among the greatest dangers for present-day society. Therefore, it is important to provide reliable statements about what changes in extreme events can be expected with future global climate change. However, the projected overall response to future climate change generally results from a complex interplay of individual physical mechanisms originating within the different climate subsystems. Hence, a profound understanding of these individual contributions is required in order to provide meaningful assessments of future changes in extreme events. One aspect of climate change is the recently observed phenomenon of Arctic Amplification and the related dramatic Arctic sea ice decline, which is expected to continue over the coming decades. The question of to what extent Arctic sea ice loss can affect atmospheric dynamics and extreme events over the mid-latitudes has received much attention in recent years and remains a highly debated topic.
In this respect, the objective of this thesis is to contribute to a better understanding on the impact of future Arctic sea ice retreat on European temperature extremes and large-scale atmospheric dynamics.
The outcomes are based on model data from the atmospheric general circulation model ECHAM6. Two different sea ice sensitivity simulations from the Polar Amplification Intercomparison Project are employed and contrasted with a present-day reference experiment: one with prescribed future sea ice loss over the entire Arctic, and another with sea ice reductions prescribed only locally over the Barents-Kara Sea.
The first part of the thesis focuses on how future Arctic sea ice reductions affect large-scale atmospheric dynamics over the Northern Hemisphere in terms of changes in the occurrence frequencies of five preferred Euro-Atlantic circulation regimes. A comparison with circulation regimes computed from ERA5 shows that ECHAM6 realistically simulates the regime structures. Both ECHAM6 sea ice sensitivity experiments exhibit similar regime frequency changes. Consistent with tendencies found in ERA5, a more frequent occurrence of a Scandinavian blocking pattern in midwinter is, for instance, detected under future sea ice conditions in the sensitivity experiments. Changes in the occurrence frequencies of circulation regimes in the summer season are, however, barely detectable.
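Circulation regimes of this kind are commonly obtained by clustering daily geopotential height anomaly fields. The sketch below assumes k-means on flattened Z500 anomalies with five clusters, a standard choice in the literature but an assumption with respect to this particular thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: daily winter Z500 anomaly maps over the
# Euro-Atlantic sector, flattened to (n_days, n_gridpoints).
# Real studies usually reduce dimensionality with EOFs/PCA first.
rng = np.random.default_rng(0)
z500_anom = rng.standard_normal((3000, 500))  # placeholder data

km = KMeans(n_clusters=5, n_init=20, random_state=0).fit(z500_anom)
labels = km.labels_                           # regime assignment per day

# Occurrence frequency of each regime: the quantity compared between
# the present-day and reduced sea ice experiments in the text.
freq = np.bincount(labels, minlength=5) / labels.size
print(freq)
```

Comparing these per-regime frequencies between the reference and the sensitivity experiments is what yields statements such as the more frequent midwinter Scandinavian blocking under future sea ice conditions.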
After identifying suitable regime storylines for the occurrence of European temperature extremes in winter, the previously detected regime frequency changes are used to quantify dynamically and thermodynamically driven contributions to sea ice-induced changes in European winter temperature extremes.
It is shown, for instance, how the preferred occurrence of a Scandinavian blocking regime under low sea ice conditions dynamically contributes to more frequent midwinter cold extremes over Central Europe. In addition, a reduced occurrence frequency of an Atlantic trough regime is linked to fewer winter warm extremes over Mid-Europe. Furthermore, it is demonstrated how the overall thermodynamic warming effect due to sea ice loss can result in less (more) frequent winter cold (warm) extremes, and consequently counteracts the dynamically induced changes.
Compared to the winter season, circulation regimes in summer are less suitable as storylines for the occurrence of summer heat extremes.
Therefore, an approach based on circulation analogues is employed to quantify thermodynamically and dynamically driven contributions to sea ice-induced changes in summer heat extremes over three different European sectors. Reduced occurrences of blocking over Western Russia are detected in the ECHAM6 sea ice sensitivity experiments; however, attributing dynamically and thermodynamically induced contributions to changes in summer heat extremes remains rather challenging.
This cumulative dissertation consists of three full empirical investigations based on three separate collections of data dealing with the phenomenon of negotiations in audit processes, combined in two research articles. In the first study, I examine internal auditors' views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process elements (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs in a variety of audit types and audit issues. Internal auditors choose negotiation to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents and an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and to claim value (e.g., positional commitment and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions.
The second study consists of two experiments that examine the effects of tax auditors' emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors' emotions strategically rather than responding affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers' post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that when auditors avoid emotion expressions, this does not result in fewer concessions from taxpayers than when auditors express anger; however, it leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers' post-audit noncompliance.
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and which factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is co-nonsolvency, in which a polymer is soluble in two individual solvents but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators and carrier structures, this field has not been extensively studied beyond the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition-fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropylmethacrylamide (NIPMAM) and N-vinylisobutyramide (NVIBAM), as well as a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to that of the well-known PNIPAM in aqueous solutions with 9 different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the 3 homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents and the polymer concentration. The results shed light on the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to changes in temperature and solvent mixture.
The urge to utilize light in the fabrication of materials is as encouraging as it is challenging. Steadily increasing energy consumption, in step with rapid population growth, demands a solution on a comparable timescale. Therefore, creating, designing and manufacturing materials that can interact with light and be applied as well as deployed in photo-based applications are very much the focus of researchers. In the era of sustainability and renewable energy systems, semiconductor-based photoactive materials have received great attention, not only for the generation of solar and/or hydrocarbon fuels from solar energy, but also for the successful stimulation of photocatalytic reactions such as water splitting, pollutant degradation and organic molecule synthesis. A turning point for water splitting was reached in 1972, when Fujishima and Honda achieved water photolysis with an electrochemical cell consisting of a TiO2-Pt electrode pair illuminated by UV light as the energy source rather than driven by an external voltage. Ever since, there has been a great deal of interest in research on semiconductors (e.g., metal oxide, metal-free organic, noble-metal complex) exhibiting an effective band gap for photochemical reactions. With regard to environmental friendliness, the toxicity of metal-based semiconductors imposes some restrictions on possible applications. In this respect, the very robust and 'earth-abundant' organic semiconductor graphitic carbon nitride has been synthesized and successfully applied as a novel photocatalyst in photoinduced applications. Properties such as a suitable band gap, low charge carrier recombination and feasibility for scaling up pave the way for advanced combinations with other catalysts to achieve higher photoactivity based on compatible heterojunctions.
This dissertation aims to demonstrate a series of combinations between the organic semiconductor g-CN and polymer materials that are forged through photochemistry, either in synthesis or in application. The fabrication and design processes, as well as the applications performed within the scope of the thesis, are elucidated in detail. In addition to UV light, particular attention is placed on visible light as an energy source, with a vision of greater sustainability and better scalability in the creation of novel materials and solar-energy-based applications.
Lithium-ion capacitors (LICs) are promising energy storage devices that asymmetrically combine an anode with a high energy density, close to that of lithium-ion batteries, and a cathode with a high power density and long-term stability, close to those of supercapacitors. Further improvement of LICs requires electrode materials with hierarchical porosity, nitrogen-rich lithiophilic sites, and good electrical conductivity. Nitrogen-rich all-carbon composite hybrids meet these requirements while offering high stability and tunability, opening a path to high-performance LICs. In this thesis, two different all-carbon composites are proposed to unveil how the pore structure of lithiophilic composites influences the properties of LICs. First, a composite of 0-dimensional zinc-templated carbon (ZTC) and hexaazatriphenylene-hexacarbonitrile (HAT) is examined to determine how its pore structure relates to Li-ion storage properties as an LIC electrode. As the pore structure of the HAT/ZTC composite is easily tunable via the synthesis conditions and the ratio of the two components, the results allow deeper insights into Li-ion dynamics across different porosities and enable low-cost synthesis by optimization of the HAT:ZTC ratio. Second, a composite of 1-dimensional nanoporous carbon fiber (ACF) and cost-effective melamine is proposed as a promising all-carbon hybrid for large-scale application. Since ACF contains ultra-micropores, numerical structure-property relationships are derived not only from the total pore volume but, more specifically, from the ultra-micropore volume. Together, these results make it possible to understand how hybrid all-carbon composites interact with lithium ions at the nanoscale and how structural properties affect energy storage performance. This understanding, derived from simple materials modeling, provides a guide for designing practical hybrid materials for efficient LIC electrodes.
The collaboration-based professional development approach Lesson Study (LS), which has its roots in the Japanese education system, has gained international recognition over the past three decades and spread quickly throughout the world. LS is a collaborative approach to professional development (PD) that incorporates multiple characteristics identified in the research literature as key to effective PD. Specifically, LS is a long-term process consisting of successive inquiry cycles; it is site-based and integrated into teachers' practice; it encourages collaboration and reflection; it places a strong emphasis on student learning; and it typically involves external experts who support the process or offer additional insights.
As LS integrates all these characteristics, it has rapidly gained international popularity since the turn of the 21st century and is currently practiced in over 40 countries around the world. This international borrowing of the idea of LS into new national contexts has given rise to a research field that aims to investigate the effectiveness of LS for teacher learning as well as the circumstances and mechanisms that make LS effective in various settings around the world. Such research is important, as borrowing educational innovations and adapting them to new contexts can be a challenging process. Educational innovations that fail to deliver the expected outcomes tend to be abandoned prematurely, before they have been completely understood or a substantial research base has been established.
In order to prevent LS from early abandonment, Lewis and colleagues outlined three critical research needs in 2006, not long after LS was initially introduced to the United States. These research needs included (1) developing a descriptive knowledge base on LS, (2) examining the mechanisms by which teachers learn through LS, and (3) using design-based research cycles to analyze and improve LS.
This dissertation set out to take stock of the progress that has been made on these research needs over the past 20 years. The scoping review conducted for the framework of this dissertation indicates that, while a large and international knowledge base has been developed, the field has not yet produced reliable evidence of the effectiveness of LS. Based on the scoping review, this dissertation makes the case that Lewis et al.’s (2006) critical research needs should be updated. In order to do so, a number of limitations to the current knowledge base on LS need to be addressed. These limitations include (1) the frequent lack of comparable and replicable descriptions of the LS intervention in publications, (2) the incoherent use or lack of use of theoretical frameworks to explain teacher learning through LS, (3) the inconsistent use of terminology and concepts, and (4) the lack of scientific rigor in research studies and of established ways or tools to measure the effectiveness of LS.
This dissertation aims to advance the critical research needs in the field by examining the extent and nature of these limitations in three research studies. The focus of these studies lies on the LS stages of observation and reflection, as these stages have a high potential to facilitate teacher learning. The first study uses a mixed-method design to examine how teachers at German primary schools reflect critically together. The study derives a theory-based definition of critical and collaborative reflection in order to re-frame the reflection element in LS.
The second study, a systematic review of 129 articles on LS, assesses how transparently research articles report on how teachers observed and reflected together. In addition, it investigates whether these articles provide any kind of theorization for the stages of observation and reflection.
The third study proposes a conceptual model for the field of LS that is based on existing models of continuous professional development and on research findings on team effectiveness and collaboration. The model describes the dimensions of input, mediating mechanisms, and outcomes in order to provide a conceptual grid for teachers' continuous professional development through LS.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
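To make the second model class tangible, here is a toy rendition of a self-avoiding random walk in a quadratic confining potential: the walker steps to the neighboring lattice site with the lowest activation-plus-potential value, raises the activation at the site it just left, and all activations slowly decay. Grid size, decay rate, and potential strength are illustrative choices, not the thesis's calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 51                                    # lattice size (odd, so 0 is centered)
h = rng.random((L, L))                    # activation field: visited sites rise
ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
U = 0.01 * ((ii - L // 2) ** 2 + (jj - L // 2) ** 2)   # quadratic confinement

x, y = L // 2, L // 2                     # start in the center
path = [(x, y)]
for _ in range(5000):
    nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < L and 0 <= y + dy < L]
    # self-avoidance: step to the neighbor with the lowest activation + potential
    x, y = min(nbrs, key=lambda p: h[p] + U[p])
    h[path[-1]] += 1.0                    # raise activation at the site just left
    h *= 1.0 - 1e-3                       # global decay lets old traces fade
    path.append((x, y))

msd = np.mean([(px - L // 2) ** 2 + (py - L // 2) ** 2 for px, py in path])
print("mean squared displacement from center:", msd)
```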
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or "buy time", when they are expected to convey information that is not yet available to them (e.g., a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans' perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e., which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
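As a schematic illustration of such a varied time-buying strategy, the sketch below samples a filler category from a weighted distribution (avoiding immediate repetition) and then an utterance within it; the categories, example utterances, and weights are invented placeholders, not the classification scheme or corpus statistics of the thesis.

```python
import random

# Placeholder categories and weights standing in for an empirical distribution.
CATEGORIES = {
    "request_to_wait": (0.2, ["One moment, please.", "Bear with me a second."]),
    "status_update":   (0.5, ["I'm checking the morning flights now.",
                              "Still scrolling through the list."]),
    "filler":          (0.3, ["Hmm, let's see...", "Right, so..."]),
}

def next_time_buying_utterance(previous_category=None):
    """Sample a category (avoiding immediate repetition), then an utterance."""
    names = [c for c in CATEGORIES if c != previous_category]
    weights = [CATEGORIES[c][0] for c in names]
    category = random.choices(names, weights=weights, k=1)[0]
    return category, random.choice(CATEGORIES[category][1])

prev = None
for _ in range(4):                        # bridge a wait lasting four turns
    prev, utterance = next_time_buying_utterance(prev)
    print(f"[{prev}] {utterance}")
```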
Advances in hydrogravimetry
(2023)
The interest of the hydrological community in the gravimetric method has steadily increased over the last decade. This is reflected by numerous studies from many different groups with a broad range of approaches and foci. Many of these are traditionally hydrology-oriented groups that recognized gravimetry as a potential added value for their hydrological investigations. While this has produced a variety of interesting and useful findings, extending the respective knowledge and confirming the method's potential, it has also raised many interesting, unresolved questions.
This thesis presents efforts, analyses, and solutions carried out in this regard. By addressing and evaluating many of these unresolved questions, the research contributes to advancing hydrogravimetry, the combination of gravimetric and hydrological methods, and shows that gravimeters are a highly useful tool for applied hydrological field research.
In the first part of the thesis, traditional setups of stationary terrestrial superconducting gravimeters are addressed. They are commonly installed within a dedicated building, whose impermeable structure shields the underlying soil from the natural exchange of water masses (infiltration, evapotranspiration, groundwater recharge). As gravimeters are most sensitive to mass changes directly beneath the meter, this could impede their suitability for local hydrological process investigations, especially for near-surface water storage changes (WSC). By studying temporal local hydrological dynamics at a dedicated site equipped with traditional hydrological measurement devices, both below and next to the building, the impact of these absent natural dynamics on the gravity observations was quantified. A comprehensive analysis with both a data-based and a model-based approach led to the development of an alternative method for dealing with this limitation. Based on determinable parameters, this approach can be transferred to a broad range of measurement sites where gravimeters are deployed in similar structures. Furthermore, the extensive considerations on this topic enabled a more profound understanding of this so-called umbrella effect.
The second part of the thesis is a pilot study on the field deployment of a superconducting gravimeter. A newly developed field enclosure for this gravimeter was tested in an outdoor installation adjacent to the building used to investigate the umbrella effect. Analyzing and comparing the gravity observations from the indoor and outdoor gravimeters showed that their performance with respect to noise and stable environmental conditions was equivalent, while the sensitivity to near-surface WSC was greatly increased for the field-deployed instrument. Furthermore, it was demonstrated that the field setup registered gravity changes independent of the depth at which mass changes occurred, given a sufficiently wide horizontal extent. Consequently, the field setup is much better suited to monitoring WSC over both short and long time periods. Based on a coupled data-modeling approach, its gravity time series was successfully used to infer and quantify local water budget components (evapotranspiration, lateral subsurface discharge) on daily to annual time scales.
The third part of the thesis applies data from a gravimeter field deployment to applied hydrological process investigations. To this end, again at the same site, a sprinkling experiment was conducted in a 15 x 15 m area around the gravimeter. A simple hydro-gravimetric model was developed to calculate the gravity response resulting from water redistribution in the subsurface. It was found that, from a theoretical point of view, different subsurface water distribution processes (macropore flow, preferential flow, wetting front advancement, bypass flow, and perched water table rise) lead to characteristic shapes of their resulting gravity response curves. Although this approach made it possible to identify a dominant subsurface water distribution process for this site, some clear limitations stood out. Despite the advantage for field installations that gravimetry is a non-invasive and integral method, the problem of non-uniqueness could only be overcome by additional measurements (soil moisture, electrical resistivity tomography) within a joint evaluation. Furthermore, the simple hydrological model was efficient for theoretical considerations but lacked the capability to resolve some heterogeneous spatial structures of the water distribution at the required scale. Nevertheless, this unique setup for plot- to small-scale hydrological process research underlines the high potential of gravimetry and the benefit of a field deployment.
The fourth and last part is dedicated to the evaluation of potential uncertainties arising from the processing of gravity observations. A gravimeter senses all mass variations in an integral way, with the gravitational attraction being directly proportional to the magnitude of the mass change and inversely proportional to the square of its distance. Consequently, all gravity effects (for example, tides, atmosphere, non-tidal ocean loading, polar motion, global hydrology, and local hydrology) are included in an aggregated manner. To isolate the signal components of interest for a particular investigation, all undesired effects have to be removed from the observations, a process called reduction. The large-scale effects (tides, atmosphere, non-tidal ocean loading, and global hydrology) cannot be measured directly, and global model data are used to describe and quantify each effect. Within the reduction process, model errors and uncertainties propagate into the residual, the result of the reduction. The focus of this part of the thesis is to quantify the resulting propagated uncertainty for each individual correction. Different superconducting gravimeter installations were evaluated with respect to their topography, distance to the ocean, and climate regime. Furthermore, different aggregation periods of the gravity observation data were assessed, ranging from 1 hour up to 12 months. It was found that uncertainties were highest for an aggregation period of 6 months and smallest for hourly aggregation. Distance to the ocean influences the uncertainty of the non-tidal ocean loading component, while geographical latitude affects the uncertainty of the global hydrological component. Importantly, the correction-induced uncertainties in the residual have the potential to mask the signal of interest, depending on the signal's magnitude and frequency. These findings can be used to assess the value of gravity data across a range of applications and geographic settings.
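Schematically, the point-mass sensitivity and the reduction step described above can be written as (a simplified rendering of the processing chain, not its full detail)

\[
\delta g = G\,\frac{\Delta m}{r^{2}},
\qquad
g_{\mathrm{res}} = g_{\mathrm{obs}} - g_{\mathrm{tides}} - g_{\mathrm{atm}} - g_{\mathrm{ntol}} - g_{\mathrm{pol}} - g_{\mathrm{hyd,global}},
\]

where G is the gravitational constant, Δm is a mass change at distance r from the sensor, and each correction term stems from a global model whose errors propagate into the residual g_res.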
In an overarching synthesis, all results and findings are discussed with a general focus on their added value for bringing hydrogravimetric field research to a new level. The conceptual and applied methodological benefits for hydrological studies are highlighted. An outlook on future setups and study designs once again demonstrates the enormous potential of gravimeters as hydrological field tools.
This cumulative doctoral thesis consists of five empirical studies examining various aspects of crisis and change from a management-accounting perspective. Within the first study, a bibliometric analysis is conducted. More precisely, based on publications between the financial crisis (since 2007) and the COVID-19 crisis (starting in 2020), the crisis literature in management accounting is investigated to uncover the most influential aspects of the field and to analyze its theoretical foundations. This investigation also serves to identify future research streams and starting points for future research. Based on a survey, the second study investigates the impact of several management-accounting tools on organizational resilience and its effect on a company's competitive advantage during a crisis. The results show that their target-oriented use positively influences organizational resilience and contributes to the company's competitive advantage during the crisis. The third study provides a more detailed view on the relationship between budgeting and risk management and their benefit for companies in times of crisis. For this purpose, the relationship between the relevance of budgeting functions and risk management in the company and the corresponding impact on company performance is investigated. The results show a positive relationship. However, a crisis can also affect the relationship between the company and its shareholders: Thus, the fourth study – based on publicly available data and a survey – examines the consequences of virtual annual general meetings for shareholder rights. The results show that, temporarily, particularly the right to information was severely restricted. In the following year, this problem was fixed, and ultimately, the virtual option was introduced permanently. The crisis has thus brought about a lasting change. But not only crises cause changes: The fifth study, also based on survey data, investigates the changes in the role of management accountants caused by digitalization. More precisely, it investigates how management accountants deal with tasks that are considered outdated and unattractive. The results of the study show that different personality types differ in their willingness to take on such unattractive tasks, and that career ambitions also influence this willingness. In addition, the results provide insights into the motivation of management accountants to carry out tasks and thus counteract existing assumptions based on stereotypes and clichés circulating within the research community.
Potato FLC-like and SVP-like proteins jointly control growth and distinct developmental processes
(2023)
Based on worldwide consumption, Solanum tuberosum L. (potato) is the most important non-grain food crop. Potato has two ways of stable propagation: sexually via flowering and vegetatively via tuberization. Remarkably, these two developmental processes are controlled by similar molecular regulators and mechanisms. Given that FLC and SVP genes act as key flowering regulators in the model species Arabidopsis and in various other crop species, this study aimed at identifying FLC and SVP homologs in potato and investigating their roles in the regulation of plant development, with a particular focus on flowering and tuberization. Our analysis demonstrated that five FLC-like and three SVP-like proteins are encoded in the potato genome. The expression profiles of StFLCs and StSVPs throughout potato development and the detected interactions between their proteins indicate tissue specificity of the individual genes and distinct roles of a variety of putative protein complexes. In particular, we discovered that StFLC-D, as well as StFLC-B, StSVP-A, and StSVP-B, play a complex role in the regulation of flowering time, as not only increased but also decreased levels of their transcripts promote earlier flowering. Most importantly, StFLC-D has a marked impact on tuberization under non-inductive conditions and on susceptibility to temperature-induced tuber malformation, also known as second growth. Plants with decreased levels of StFLC-D demonstrated a strong ability to produce tubers under long days and appeared to be insensitive to temperature-induced second growth. Lastly, our data also suggest that StFLCs and StSVPs may be involved in the nitrogen-dependent regulation of potato development. Taken together, this study highlights the functional importance of StFLC and StSVP genes in the regulation of distinct developmental processes in potato.
This thesis bridges two areas of mathematics: algebra, on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem; and analysis, on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products, allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noting that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
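In symbols, the basic construction can be sketched as follows (a schematic rendering; the precise definitions in the thesis may differ in detail). A pre-locality vector space is a pair (V, ⊤) with ⊤ ⊆ V × V a symmetric relation, and the locality tensor product admits only tensors of locally independent elements:

\[
V \otimes_{\top} V := \operatorname{span}\{\, u \otimes v : (u,v) \in \top \,\} \subseteq V \otimes V .
\]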
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations describing the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
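For orientation, one common normalisation of the Shintani zeta function attached to a k × N matrix A = (a_{ij}) with non-negative real entries reads (conventions for the summation range and for possible shift parameters vary)

\[
\zeta(s_{1},\dots,s_{N};A) = \sum_{n_{1},\dots,n_{k}\,\ge\,1}\;\prod_{j=1}^{N}\Big(\sum_{i=1}^{k} a_{ij}\,n_{i}\Big)^{-s_{j}},
\]

so that the Riemann zeta function is recovered for k = N = 1 with a_{11} = 1, and the linear forms are induced by the columns of A, matching the role these columns play in the Newton-polytope description of the poles.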
Movement is a mechanism that shapes biodiversity patterns across spatiotemporal scales. Thereby, the movement process affects species interactions, population dynamics, and community composition. In this thesis, I disentangled the effects of movement on the biodiversity of zooplankton, ranging from the individual to the community level. At the level of individual movement, I used video-based analysis to explore the implications of movement behavior for prey-predator interactions. My results showed that swimming behavior was of great importance, as it determined survival in the face of predation. The findings additionally highlighted the relevance of the defense status/morphology of prey, which affected the prey-predator relationship not only through the defense itself but also through plastic movement behavior. At the level of community movement, I used a field mesocosm experiment to explore the role of dispersal (in time, i.e., from the egg bank into the water body, and in space, i.e., between water bodies) in shaping zooplankton metacommunities. My results revealed that priority effects and taxon-specific dispersal limitation influenced community composition. Different modes of dispersal also generated distinct community structures. The egg bank and biotic vectors (i.e., mobile links) played significant roles in the colonization of newly available habitat patches. One crucial aspect that influences zooplankton species after arrival in new habitats is the local environmental conditions. Using common garden experiments, I assessed the performance of zooplankton communities in their home vs. away environments in a group of ponds embedded within an agricultural landscape. I identified environmental filtering as a driving factor, as zooplankton communities from individual ponds developed differently in their home and away environments. At the individual species level, there was no consistent indication of local adaptation: some species showed higher abundance/fitness in their home environment, others showed the opposite, and some showed no difference.
Overall, the thesis highlights the links between movement and biodiversity patterns, ranging from the individual active movement to the community level.
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth's interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon is extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. However, estimates of deeply subducted carbon range broadly from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has very low solubility in mantle silicates and is therefore expected to be stored primarily in accessory phases such as carbonates. Among those carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite's stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation, and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at temperatures corresponding to all geotherms except the coldest one. In situ X-ray diffraction in laser-heated diamond anvil cell experiments shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamonds is observed as a sub-solidus process even at temperatures at and below the coldest slab geotherm (1350 K at 33 GPa). Data obtained from this work suggest that magnesite is unstable in contact with the surrounding peridotite mantle in the uppermost lower mantle. The presence of magnesite instead induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions, at depths of ∼700 km. Consequently, carbonates will be removed from carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. The transport of carbonate to greater depths will therefore be restricted, supporting the presence of a barrier for carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamonds provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melts.
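The benchmark reaction behind the magnesite-SiO2 stability question mentioned above is the decarbonation (written here schematically; the experiments of this work involve more complex, iron-bearing compositions)

\[
\mathrm{MgCO_{3}} + \mathrm{SiO_{2}} \longrightarrow \mathrm{MgSiO_{3}} + \mathrm{CO_{2}},
\]

with MgSiO3 occurring as bridgmanite at lower-mantle pressures.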
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) or nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth's lower mantle and core. The high temperatures are reached by focusing high-powered lasers on the sample contained between the diamond anvils. Moreover, the transparency of diamonds to X-rays enables in situ X-ray spectroscopy measurements that probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure, high-temperature research together with high-resolution X-ray spectroscopy at synchrotron beamlines that lack a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and radiospectrometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, free from chromatic aberrations over a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime models in which the expressiveness of the model and model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure, as well as on the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history, especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
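To give a flavor of what a temporal query over a history-aware model involves, the toy sketch below checks, over a log of timestamped snapshots, whether a structural condition has held continuously for a minimum duration; it is a deliberately simplified stand-in, not the query language or the incremental evaluation technique of the thesis. Note also that for a component still satisfying the condition at the end of the log, the answer may change with future snapshots, the ongoing-execution issue discussed above.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    time: float      # seconds since monitoring started
    load: dict       # component id -> observed load

def overloaded_since(history, threshold, duration):
    """Components for which 'load > threshold' held for at least `duration`."""
    since = {}       # component -> time at which the condition started to hold
    matches = set()
    for snap in history:
        for comp, load in snap.load.items():
            if load > threshold:
                since.setdefault(comp, snap.time)
                if snap.time - since[comp] >= duration:
                    matches.add(comp)
            else:
                since.pop(comp, None)    # condition violated: reset interval
    return matches

history = [
    Snapshot(0.0,  {"a": 0.90, "b": 0.20}),
    Snapshot(5.0,  {"a": 0.95, "b": 0.80}),
    Snapshot(12.0, {"a": 0.92, "b": 0.30}),
]
print(overloaded_since(history, threshold=0.85, duration=10.0))  # {'a'}
```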
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
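As a schematic illustration of an RL admission controller (the state, action, and reward design below are hypothetical stand-ins, not the controller developed in the thesis), a minimal tabular Q-learning loop might look as follows.

```python
import random

N_SLOTS = 5                       # hypothetical capacity of the wireless cloud
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in (0, 1)}   # 1 = admit

def step(busy, action):
    """Toy environment: admitting earns reward but saturation is penalized."""
    reward = 0.0
    if action == 1 and busy < N_SLOTS:
        busy += 1
        reward = 1.0 if busy < N_SLOTS else -2.0
    if busy > 0 and random.random() < 0.3:     # a running job finishes
        busy -= 1
    return busy, reward

busy = 0
for _ in range(20000):
    a = random.choice((0, 1)) if random.random() < EPS else \
        max((0, 1), key=lambda x: Q[(busy, x)])
    nxt, r = step(busy, a)
    best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
    Q[(busy, a)] += ALPHA * (r + GAMMA * best_next - Q[(busy, a)])
    busy = nxt

# learned policy: best action (0 = reject, 1 = admit) per occupancy level
print({s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(N_SLOTS + 1)})
```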
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints when the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g., from imperfect measurements, or when the queried data point is very different from the data the machine learning model was trained on. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
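For reference, exact GP regression with kernel k, training inputs x_1, ..., x_n, noisy targets y, and noise variance σ² yields the standard predictive equations

\[
\mu(x_{*}) = \mathbf{k}_{*}^{\top} (K + \sigma^{2} I)^{-1} \mathbf{y},
\qquad
\sigma^{2}(x_{*}) = k(x_{*}, x_{*}) - \mathbf{k}_{*}^{\top} (K + \sigma^{2} I)^{-1} \mathbf{k}_{*},
\]

where K_{ij} = k(x_i, x_j) and (\mathbf{k}_{*})_i = k(x_i, x_{*}). The cubic cost of the matrix inverse in the number of training points is what renders exact inference intractable beyond small data regimes and motivates the sparse and variational approximations studied in this dissertation.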
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example are deep Gaussian processes which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced, that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid the search for ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming drives permafrost thaw and thus sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere and intensifying global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and formation with TRANSPORTSE enabled this framework to simulate the permafrost evolution during sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, CH4-rich fluids have been transported vertically since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). The temporal and spatial evolution of the simulated and observed hydrate saturation matched well, validating the LARS model. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geologic element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, the subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years.
Overall, these simulations deepen the knowledge about the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can support improving the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
Hybrid nanomaterials combine the individual properties of different types of nanoparticles. Some strategies for developing new nanostructures at larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft template, nanoparticles and other surface-active agents are incorporated into immiscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions to control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process is versatile with respect to nanoparticle composition, provided the surface functionalization is kept, and works in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with another auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix-shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes towards the production of sophisticated nanomaterials at larger scale.
Life on Earth is diverse and ranges from unicellular organisms to multicellular creatures like humans. Although there are theories about how these organisms might have evolved, we understand little about how ‘life’ started from molecules. Bottom-up synthetic biology aims to create minimal cells by combining different modules, such as compartmentalization, growth, division, and cellular communication.
All living cells have a membrane that separates them from the surrounding aqueous medium and helps to protect them. In addition, all eukaryotic cells have organelles that are enclosed by intracellular membranes. Each cellular membrane is primarily made of a lipid bilayer with membrane proteins. Lipids are amphiphilic molecules that assemble into molecular bilayers consisting of two leaflets. The hydrophobic chains of the lipids in the two leaflets face each other, and their hydrophilic headgroups face the aqueous surroundings. Giant unilamellar vesicles (GUVs) are model membrane systems that form large compartments, many micrometers in size, enclosed by a single lipid bilayer. The size of GUVs is comparable to the size of cells, making them good membrane models that can be studied using an optical microscope. However, after the initial preparation, GUV membranes lack membrane proteins, which have to be reconstituted into these membranes in subsequent preparation steps. Depending on the protein, it can either be attached via anchor lipids to one of the membrane leaflets or inserted into the lipid bilayer via its transmembrane domains.
The first step is to prepare the GUVs and then expose them to an exterior solution with proteins. Various protocols have been developed for the initial preparation of GUVs. For the second step, the GUVs can be exposed to a bulk solution of protein or can be trapped in a microfluidic device and then supplied with the protein solution. To minimize the amount of solution and for more precise measurements, I have designed a microfluidic device that has a main channel, and several dead-end side channels that are perpendicular to the main channel. The GUVs are trapped in the dead-end channels. This design exchanges the solution around the GUVs via diffusion from the main channel, thus shielding the GUVs from the flow within the main channel. This device has a small volume of just 2.5 μL, can be used without a pump and can be combined with a confocal microscope, enabling uninterrupted imaging of the GUVs during the experiments. I used this device for most of the experiments on GUVs that are discussed in this thesis.
In the first project of the thesis, a lipid mixture doped with an anchor lipid was used that can bind to a histidine chain (referred to as His-tag(ged) or 6H) via the metal cation Ni2+. This method is widely used for the biofunctionalization of GUVs by attaching proteins without a transmembrane domain. Fluorescently labeled His-tags which are bound to a membrane can be observed in a confocal microscope. Using the same lipid mixture, I prepared the GUVs with different protocols and investigated the membrane composition of the resulting GUVs by evaluating the amount of fluorescently labeled His-tagged molecules bound to their membranes. I used the microfluidic device described above to expose the outer leaflet of the vesicle to a constant concentration of the His-tagged molecules. Two fluorescent molecules with a His-tag were studied and compared: green fluorescent protein (6H-GFP) and fluorescein isothiocyanate (6H-FITC). Although the quantum yield in solution is similar for both molecules, the brightness of membrane-bound 6H-GFP is higher than that of membrane-bound 6H-FITC. The observed difference in brightness reveals that the fluorescence of 6H-FITC is quenched by the anchor lipid via the Ni2+ ion. Furthermore, my measurements also showed that the fluorescence intensity of the membrane-bound His-tagged molecules depends on microenvironmental factors such as pH. For both 6H-GFP and 6H-FITC, the interaction with the membrane is quantified by evaluating the equilibrium dissociation constant. The membrane fluorescence is measured as a function of the fluorophores' molar concentration. Theoretical analysis of these data leads to equilibrium dissociation constants of (37.5 ± 7.5) nM for 6H-GFP and (18.5 ± 3.7) nM for 6H-FITC.
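The quoted dissociation constants are typically obtained by fitting a one-site (Langmuir) binding isotherm to the measured membrane fluorescence; whether the thesis uses exactly this functional form is an assumption here:

\[
F(c) = F_{\max}\,\frac{c}{K_{d} + c},
\]

so that at c = K_d half of the binding sites are occupied. On this reading, the smaller K_d of 6H-FITC corresponds to tighter membrane binding than that of 6H-GFP.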
The anchor lipid mentioned previously used the metal cation Ni2+ to mediate the bond between the anchor lipid and the His-tag. The Ni2+ ion can be replaced by other transition metal ions. Studies have shown that Co3+ forms the strongest bonds with His-tags attached to proteins. In these studies, strong oxidizing agents were used to oxidize the Co2+-mediated complex with the His-tagged protein to a Co3+-mediated complex. This procedure puts the proteins at risk of being oxidized as well. In this thesis, the vesicles were first prepared with anchor lipids without any metal cation. Co3+ was then added to these anchor lipids, and finally the His-tagged protein was added to the GUVs to form the Co3+-mediated bond. This system was also established using the microfluidic device.
The different preparation procedures of GUVs usually lead to vesicles with a spherical morphology. Many cell organelles, on the other hand, have a more complex architecture with a non-spherical topology. One fascinating example is provided by the endoplasmic reticulum (ER), which is made of a continuous membrane and extends throughout the cell in the form of tubes and sheets. The tubes are connected by three-way junctions and form a tubular network of irregular polygons. The formation and maintenance of these reticular networks requires membrane proteins that hydrolyze guanosine triphosphate (GTP). One of these membrane proteins is atlastin. In this thesis, I reconstituted the atlastin protein in GUV membranes using detergent-assisted reconstitution protocols to insert the proteins directly into lipid bilayers.
This thesis focuses on protein reconstitution by binding His-tagged proteins to anchor lipids and by detergent-assisted insertion of proteins with transmembrane domains. It also provides the design of a microfluidic device that can be used in various experiments; one example is the evaluation of the equilibrium dissociation constant for membrane-protein interactions. The results of this thesis will help other researchers to understand the protocols for preparing GUVs, to reconstitute proteins in GUVs, and to perform experiments using the microfluidic device. This knowledge should be beneficial for the long-term goal of combining the different modules of synthetic biology into a minimal cell.
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion, through porphyry mineralization at intermediate depths, to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax- (and sub-) type Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru, and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. These include (i) two individual geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (a potential endmember of peripheral Climax-type mineralization); (ii) one numerical modeling study set up in a generic porphyry Cu environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine were related to fluids of different origins. Early magmatic fluids were the principal source of mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by the δ2Hw-δ18Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals at the Sweet Home Mine were triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the latest deformation along the Contact Structure, which was exploited and/or crosscut by Late Cretaceous and Oligocene fluids, to between 62.77±0.50 Ma and 26.26±0.38 Ma. Along the Contact Structure, Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which is suggested to be variably related to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation, and mixing with external fluids. The numerical modeling study set up in a generic porphyry Cu environment presents new advances in a numerical process model by considering published constraints on the temperature- and salinity-dependent solubility of Cu, Pb, and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms for ore precipitation: higher rates result in halite saturation without significant metal zoning, whereas lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore-shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments without geological evidence for volcanic activity within a distance of about 10 km from the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems with a magmatic intrusion located either at a distance of about 10 km underneath the nearest active volcano or hidden underneath the deposit. The results show that the migration of the ore fluid over a 10-km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) We survey state-of-the-art automated index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic; it identifies the best index configurations for the evaluated benchmarks, and its selection runtimes are up to 10 times lower compared with other near-optimal approaches (a simplified greedy sketch of this kind of iterative selection is given after this summary). SWIRL is based on reinforcement learning and delivers solutions instantly; these solutions perform within 3% of the optimal ones. Extend and SWIRL are available as open-source implementations.
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
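As a toy illustration of iterative index selection (not the Extend or SWIRL algorithms themselves), the following greedy sketch repeatedly adds the candidate index with the best estimated cost saving per megabyte until a storage budget is exhausted; all names and numbers are hypothetical, and a real selector would re-estimate savings after every pick via the optimizer's what-if interface to capture index interaction:

# Greedy index selection sketch; candidates and cost estimates are invented.
candidates = {
    "idx_orders_date":      {"saving": 120.0, "size_mb": 300},
    "idx_lineitem_partkey": {"saving": 200.0, "size_mb": 900},
    "idx_customer_nation":  {"saving":  40.0, "size_mb": 100},
}

def select_indexes(candidates, budget_mb):
    chosen, remaining = [], dict(candidates)
    while remaining:
        # Pick the candidate with the highest saving-per-size ratio.
        best = max(remaining, key=lambda k: remaining[k]["saving"]
                                            / remaining[k]["size_mb"])
        if remaining[best]["size_mb"] > budget_mb:
            remaining.pop(best)      # too large for the remaining budget
            continue
        chosen.append(best)
        budget_mb -= remaining[best]["size_mb"]
        remaining.pop(best)
    return chosen

print(select_indexes(candidates, budget_mb=1000))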
To ensure high-quality evidence-based research in the field of exercise sciences, it is often necessary for various institutions to collaborate over longer distances and internationally. Digital means, not least in view of the recent COVID-19 pandemic, provide new options for remote scientific exchange. This thesis is meant to analyse and test digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed to learn about opinions on and preferences for digital learning and social media among students of sport science faculties at two universities each in Germany, the UK, and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel acting as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated the use of an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting, first with international experts from the fields of engineering and sports science and later with remote supervision of augmented-reality-equipped physicians performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube for receiving video-based knowledge, as well as a preference for using WhatsApp and Facebook for peer-to-peer contacts for learning purposes and to exchange and discuss knowledge. In the second study, video-based instructions sent by WhatsApp messenger showed high approval of the setup in both study groups, one with doctors familiar with the use of ultrasound technology and one with nursing staff who were not familiar with the device, with similar results in overall time of performance and the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. The remote supervision of the doctors' examinations produced a good interrater correlation. Experiences with the augmented-reality-based setting were rated as highly positive by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern and addressee-centred digital solutions to enhance potential investigators' understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices are of special value and may be recommended for collaborative research projects with physical-examination-based research questions. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be enhanced during the early stages of their education.
Negotiations are a way of joint decision-making and thereby a form of social conflict. By determining the concrete allocation of scarce resources, negotiations have a great impact on the value creation of companies. If companies succeed in achieving better negotiation results in the long term, they can increase their profitability. Ensuring a company's negotiation success is therefore an organizational issue of central importance. While the question of ensuring individual negotiation success has been the subject of multidisciplinary research for a long time, the question of how organizations can implement and ensure continuous negotiation success remains largely unexplored. This dissertation therefore aims to investigate how companies enable their employees to consistently achieve better negotiation outcomes. Significantly, in the corporate context, negotiators do not act as individuals but as embedded representatives of an organization, and negotiations are not one-time events but recurring necessities for the existence of the organization. In organizations, such recurring processes with a similar fundamental structure are handled by routines, and planned improvements of routines are often driven by new artifacts. In this context, artifacts refer to human-created technologies with which humans interact within routines; artifacts therefore have a central influence on the execution of a routine. If negotiation activities in companies are represented by organizational routines, one central lever for improving companies' negotiation performance is the incorporation into organizational negotiation routines of artifacts that facilitate the efficient application of insights from negotiation research. The dissertation consists of three studies, written as research papers, that examine artifacts in the organizational negotiation context. The first study focuses on the pre-negotiation stage and presents four tools to assist negotiation practitioners in efficiently preparing for negotiations. The study examines the preparation's effectiveness and efficiency and the negotiation outcome in a case-based experiment. The second study is devoted to a closer examination of the barriers that inhibit the adoption of negotiation support systems (NSSs) as one kind of organizational negotiation artifact. The investigation is conducted using a structural equation model based on information from participating practitioners. The third study is concerned with the future of negotiation support system research. An exploratory study based on qualitative in-depth interviews with proven and published experts in the field aims to evaluate the current state of research. The general discussion of the dissertation connects, summarizes, and concludes the study results and derives implications for practice, limitations, and future research ideas.
Social networking sites
(2023)
With the implementation of intense, short-pulsed light sources over the last years, the powerful technique of resonant inelastic X-ray scattering (RIXS) became feasible for a wide range of experiments on femtosecond dynamics in correlated materials and molecules.
In this thesis, I investigate the potential to bring RIXS into the fluence regime of nonlinear X-ray-matter interactions, focusing especially on the impact of stimulated scattering on RIXS in transition metal systems, in a transmission spectroscopy geometry around the transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free-electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy of a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free-electron lasers (FELs). Here the pulses are not only short but also very intense, which opens the door to nonlinear X-ray-matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, look at potential difficulties for classic spectroscopy, and investigate possibilities to enhance RIXS through stimulated scattering. A study on stimulated RIXS is presented, in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. Thereby we show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects, namely the breakdown of scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only grow in relevance as more free-electron lasers are planned and the number of experiments at such light sources continues to increase.
Finally, we present a theoretical discussion on the accessibility of small density-of-states (DOS) shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As such shifts occur, for example, in surface states, this finding could expand the experimental selectivity of NEXAFS and RIXS to the detection of surface states. We show that stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
Arthur Schopenhauer (1788–1860) was perhaps the last polymath among the great Germanic philosophers. Switching with ease and elegance between epistemic positions and fields as diverse as idealism and empiricism, fideism and rationalism, realism and nominalism, art and religion, jurisprudence and politics, psychology and occultism, Schopenhauer erected an imposing edifice bearing testimony to his universal learning. This study is an investigation into the very conclusion of Schopenhauer’s philosophy and endeavours to answer the following question: did Schopenhauer’s doctrine of salvation issue forth organically from his intellectual output or was it annexed to his philosophy as a result of his critical engagement with religion? The labyrinthine paths through which Schopenhauer arrives at the soteriological culmination of his philosophy are subjected to critical assessment; the picture that emerges is of a philosopher who seemed convinced that he had solved some of the most pressing cosmic riddles to have tormented mankind through the ages.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed to determine to what extent SC might help to alleviate experienced stress and promote the use of more salutary coping while dealing with stressful circumstances. These processes might ultimately help improve one's affective well-being. Derived from that, it was hypothesized that more SC is linked to less perceived stress and intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were targeted in three single studies and one meta-study. To test my assumptions about the relations of SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress, and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934) with a mean age of 52.76 years, analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to expand the findings of the longitudinal studies to the intra-individual level. A sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were processed using 1-1-1 multilevel mediation analyses.
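As an illustration of the pooling step described above, the following minimal sketch applies the standard DerSimonian-Laird random-effects model to Fisher z-transformed correlations; the correlations and sample sizes are invented for illustration and are not the dissertation's data:

import numpy as np

# Random-effects pooling of Pearson correlations via the Fisher z transform.
r = np.array([0.45, 0.30, 0.52, 0.38])   # illustrative study correlations
n = np.array([120, 300, 85, 210])        # illustrative sample sizes

z = np.arctanh(r)        # Fisher z transform
v = 1.0 / (n - 3)        # within-study variance of z
w = 1.0 / v

# Between-study heterogeneity tau^2 via the DerSimonian-Laird estimator.
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / c)

w_star = 1.0 / (v + tau2)                 # random-effects weights
z_re = np.sum(w_star * z) / np.sum(w_star)
print(f"pooled r = {np.tanh(z_re):.3f}")  # back-transform to the r scale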
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Considering the relations between SC and stress-processing variables in all three single studies, cross-lagged paths from the longitudinal data as well as multilevel modeling paths from the ambulatory assessment data indicated a notable relation between all relevant stress variables. As expected, results showed a significant negative relation between SC and both perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at the dispositional and intra-individual level. However, considering the mediation hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while the effects of the mediational pathways through coping responses were less robust.
Conclusion: A more self-compassionate attitude, and higher momentary SC when needed in specific situations, can thus help to engage in effective stress processing. Considering the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at the dispositional and intra-individual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
In X-ray computed tomography (XCT), an X-ray beam of intensity I0 is transmitted through an object and its attenuated intensity I is measured when it exits the object. The attenuation of the beam depends on the attenuation coefficients along its path. The attenuation coefficients provide information about the structure and composition of the object and can be determined through mathematical operations that are referred to as reconstruction.
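For reference, the attenuation relation underlying this measurement is the Beer-Lambert law; written out (with \(\mu\) the attenuation coefficient along the ray), it linearizes into the system that reconstruction algorithms solve:

\[
  I \;=\; I_0 \exp\!\left(-\int_{\text{ray}} \mu(s)\,\mathrm{d}s\right)
  \qquad\Longrightarrow\qquad
  -\ln\frac{I}{I_0} \;=\; \int_{\text{ray}} \mu(s)\,\mathrm{d}s
  \;\approx\; \sum_{j} a_{ij}\,x_j ,
\]

where \(x_j\) are the attenuation coefficients of the discretized volume and \(a_{ij}\) the intersection weights of ray \(i\) with voxel \(j\), giving the linear system \(Ax = b\) discussed below.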
The standard reconstruction algorithms are based on the filtered backprojection (FBP) of the measured data. While these algorithms are fast and relatively simple, they do not always succeed in computing a precise reconstruction, especially from under-sampled data.
Alternatively, an image or volume can be reconstructed by solving a system of linear equations. Typically, the system of equations is too large to be solved directly, but its solution can be approximated by iterative methods, such as the Simultaneous Iterative Reconstruction Technique (SIRT) and Conjugate Gradient Least Squares (CGLS).
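As a minimal dense-matrix sketch of one common SIRT formulation (real implementations use sparse or matrix-free projectors; this is not the thesis' code):

import numpy as np

def sirt(A, b, n_iter=100):
    # A: (m, n) system matrix (intersection weights of ray i with voxel j).
    # b: (m,) measured projections, b_i = -ln(I_i / I0).
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x                # forward project and compare
        x += C * (A.T @ (R * residual))     # weighted backprojection update
        x = np.maximum(x, 0.0)              # optional non-negativity constraint
    return x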
This dissertation focuses on the development of a novel iterative algorithm, the Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT). After its reconstruction principle is explained, its performance is assessed for real parallel- and cone-beam CT (including under-sampled) data and compared to that of other established algorithms. Finally, it is demonstrated how the shape of the measured object can be modelled into DIRECTT to achieve even better reconstruction results.
Increasing demand for food, healthcare, and transportation arising from the growing world population is accompanied by, and is a driving force behind, global warming challenges due to the rise of the atmospheric CO2 concentration. Industrialization for human needs has been releasing increasing amounts of CO2 into the atmosphere for a century or more. In recent years, the possibility of recycling CO2 to stabilize the atmospheric CO2 concentration and combat rising temperatures has gained attention. Using CO2 as a feedstock to address future world demands could thus help control rapid climate change. Valorizing CO2 to produce activated and stable one-carbon feedstocks like formate and methanol, and feeding them into industrial microbial processes to replace unsustainable feedstocks, would be crucial for a future biobased circular economy. However, not all microbes can grow on formate as a feedstock, and those that can are not well established for industrial processes.
S. cerevisiae is one of the industrially well-established microbes, and it is a significant contributor to bioprocess industries. However, it cannot grow on formate as a sole carbon and energy source. Thus, engineering S. cerevisiae to grow on formate could potentially pave the way to sustainable biomass and value-added chemicals production.
The Reductive Glycine Pathway (RGP), designed as the aerobic twin of the anaerobic reductive acetyl-CoA pathway, is an efficient formate and CO2 assimilation pathway. The RGP comprises the glycine synthesis module (Mis1p, Gcv1p, Gcv2p, Gcv3p, and Lpd1p), the glycine-to-serine conversion module (Shmtp), the pyruvate synthesis module (Cha1p), and the energy supply module (Fdh1p). The RGP requires formate and elevated CO2 levels to operate the glycine synthesis module. In this study, I established the RGP in the yeast system using growth-coupled selection strategies to achieve formate- and CO2-dependent biomass formation under aerobic conditions.
Firstly, I constructed serine biosensor strains by disrupting the native serine and glycine biosynthesis routes in the prototrophic S288c and FL100 yeast strains, insulating serine, glycine, and one-carbon metabolism from the central metabolic network. These strains cannot grow on glucose as the sole carbon source but require a supply of serine or glycine to complement the engineered auxotrophies. Using growth as a readout, I employed these strains as selection hosts to establish the RGP. To achieve this, I first engineered different serine hydroxymethyltransferases into the genome of the serine biosensor strains for efficient glycine-to-serine conversion. Then, I implemented the glycine synthesis module of the RGP in these strains for glycine and serine synthesis from formate and CO2. I successfully conducted Adaptive Laboratory Evolution (ALE) with these strains, which yielded a strain capable of glycine and serine biosynthesis from formate and CO2. Significant growth improvements, from 0.0041 h-1 to 0.03695 h-1, were observed during ALE. To validate glycine and serine synthesis, I conducted carbon-tracing experiments with 13C-formate and 13CO2, confirming that more than 90% of glycine and serine biosynthesis in the evolved strains occurs via the RGP. Interestingly, the labeling data also revealed that 10-15% of alanine was labeled, indicating pyruvate synthesis from formate-derived serine via native serine deaminase (Cha1p) activity. Thus, the RGP contributes to a small pyruvate pool that is converted to alanine even without selection pressure for pyruvate synthesis from formate. Hence, these data confirm the activity of all three modules of the RGP even in the presence of glucose. Further ALE under glucose-limiting conditions did not improve pyruvate flux via the RGP.
Growth characterization of these strains showed that the best growth rates were achieved at formate concentrations between 25 mM and 300 mM. Optimal growth required 5% CO2; growth dropped when the CO2 concentration was reduced from 5% to 2.5%.
Whole-genome sequencing of the evolved strains revealed mutations in the genes encoding Gdh1p, Pet9p, and Idh1p. These enzymes might influence intracellular NADPH, ATP, and NADH levels, indicating an adjustment to meet the energy demand of the RGP. I reverse-engineered the GDH1 truncation mutation into unevolved serine biosensor strains and reproduced formate-dependent growth. To elucidate the effect of the GDH1 mutation on formate assimilation, I reintroduced this mutation into the S288c strain and conducted carbon-tracing experiments to compare formate assimilation between the WT and ∆gdh1 mutant strains. Enhanced formate assimilation was recorded in the ∆gdh1 mutant strain.
Although the 13C carbon-tracing experiments confirmed the activity of all three modules of the RGP, the overall pyruvate flux via the RGP might be limited by the supply of reducing power. Hence, in a different approach, I overexpressed the formate dehydrogenase (Fdh1p) for energy supply and the serine deaminase (Cha1p) for active pyruvate synthesis in the S288c parental strain and established growth on formate and serine without glucose in the medium. Further reengineering and evolution of this strain, with a consistent supply of energy and formate-derived serine for pyruvate synthesis, is essential to achieve complete formatotrophic growth in the yeast system.
Rainfall-triggered landslides are a globally occurring hazard that causes several thousand fatalities per year on average and leads to economic damage by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where but also when landslides are most probable is key to informing strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high-hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The mounting availability of temporal landslide inventory data in some regions has increasingly enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remains scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
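A minimal sketch of the regression idea (not the dashboard's actual model; features, labels, and the use of scikit-learn are illustrative assumptions):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy daily data: short-term intensity and antecedent wetness as predictors
# of a landslide day. Values and labels are synthetic, for illustration only.
rng = np.random.default_rng(0)
precip_3h = rng.gamma(2.0, 2.0, size=500)   # mm in the wettest 3 hours
precip_3d = rng.gamma(5.0, 8.0, size=500)   # mm over the previous 3 days
X = np.column_stack([precip_3h, precip_3d])
y = (precip_3h + 0.05 * precip_3d + rng.normal(0, 2, 500)) > 12

model = LogisticRegression().fit(X, y)
# Daily landslide probability for a forecast of 8 mm / 3 h and 90 mm / 3 d:
p = model.predict_proba([[8.0, 90.0]])[0, 1]
print(f"estimated landslide probability: {p:.2f}")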
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
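Intensity-duration (ID) thresholds of this kind are commonly expressed as a power law I = alpha * D^beta and fitted in log-log space; a minimal sketch with invented triggering-rainfall observations follows (the quantile shift is one common convention for obtaining a conservative lower envelope):

import numpy as np

# Hypothetical triggering rainfall: duration D (h), mean intensity I (mm/h).
D = np.array([2., 4., 8., 12., 24., 48., 72.])
I = np.array([12., 8., 5.5, 4.2, 2.8, 1.9, 1.4])

# Fit log I = log(alpha) + beta * log D, then shift the intercept down to
# the 5th percentile of the residuals to obtain a lower-envelope threshold.
beta, log_alpha = np.polyfit(np.log(D), np.log(I), 1)
residuals = np.log(I) - (log_alpha + beta * np.log(D))
log_alpha_5 = log_alpha + np.quantile(residuals, 0.05)
print(f"threshold: I = {np.exp(log_alpha_5):.2f} * D**({beta:.2f})")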
In the third study, I investigated the seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA, by using Bayesian multi-level models to combine data from five heterogeneous landslide inventories that cover different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
Modern datasets often exhibit diverse, feature-rich, unstructured data, and they are massive in size. This is the case for social networks, the human genome, and e-commerce databases. As Artificial Intelligence (AI) systems are increasingly used to detect patterns in data and predict future outcomes, there are growing concerns about their ability to process large amounts of data. Motivated by these concerns, we study the problem of designing AI systems that scale to very large and heterogeneous datasets.
Many AI systems need to solve combinatorial optimization problems in their course of action. These optimization problems are typically NP-hard and may exhibit additional side constraints. However, the underlying objective functions often have additional properties that can be exploited to design suitable optimization algorithms. One of these properties is the well-studied notion of submodularity, which captures diminishing returns and is often found in real-world applications. Furthermore, many relevant applications exhibit generalizations of this property.
In this thesis, we propose new scalable optimization algorithms for combinatorial problems with diminishing returns. Specifically, we focus on three problems: the Maximum Entropy Sampling problem, Video Summarization, and Feature Selection. For each problem, we propose new algorithms that work at scale. These algorithms are based on a variety of techniques, such as forward step-wise selection and adaptive sampling. Our proposed algorithms yield strong approximation guarantees, and they perform well experimentally.
We first study the Maximum Entropy Sampling problem, which consists of selecting, from a larger set, a subset of random variables that maximizes the entropy. Using diminishing-returns properties, we develop a simple forward step-wise selection optimization algorithm for this problem. Then, we study the problem of selecting a subset of frames that represents a given video. Again, this corresponds to a submodular maximization problem. We provide a new adaptive sampling algorithm for this problem, suited to handle the complex side constraints imposed by the application. We conclude by studying Feature Selection, where the underlying objective functions generalize the notion of submodularity. We provide a new adaptive sequencing algorithm for this problem, based on the Orthogonal Matching Pursuit paradigm.
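For Gaussian variables, subset entropy reduces (up to constants) to the log-determinant of the corresponding covariance submatrix, so forward step-wise selection can be sketched as follows (a naive re-evaluation loop for illustration, not the thesis' optimized algorithm):

import numpy as np

def greedy_max_entropy(Sigma, k):
    # Greedily add the variable with the largest log-det (entropy) gain.
    n = Sigma.shape[0]
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            val = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])[1]
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
Sigma = np.cov(rng.normal(size=(50, 8)), rowvar=False)  # toy covariance
print(greedy_max_entropy(Sigma, k=3))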
Overall, we study practically relevant combinatorial problems and propose new algorithms to solve them. We demonstrate that these algorithms are suitable for handling massive datasets. Moreover, our analysis is not problem-specific, and our results can be applied to other domains where diminishing-returns properties hold. We hope that the flexibility of our framework inspires further research into scalability in AI.
Answer Set Programming (ASP) allows us to address knowledge-intensive search and optimization problems in a declarative way due to its integrated modeling, grounding, and solving workflow. A problem is modeled using a rule-based language and then grounded and solved. Solving results in a set of stable models that correspond to solutions of the modeled problem. In this thesis, we present the design and implementation of the clingo system, perhaps the most widely used ASP system. It features a rich modeling language originating from the field of knowledge representation and reasoning, efficient grounding algorithms based on database evaluation techniques, and high-performance solving algorithms based on Boolean satisfiability (SAT) solving technology.
The contributions of this thesis lie in the design of the modeling language, the design and implementation of the grounding algorithms, and the design and implementation of an Application Programming Interface (API) that facilitates the use of ASP in real-world applications and the implementation of complex forms of reasoning beyond the traditional ASP workflow.
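As a minimal illustration of the model-ground-solve workflow via clingo's Python API (the toy graph-coloring encoding is illustrative, not taken from the thesis):

from clingo import Control

# A toy 3-coloring of a triangle: every node gets exactly one color, and
# adjacent nodes must differ.
program = """
node(1..3). edge(1,2). edge(2,3). edge(1,3).
color(red; green; blue).
{ assign(N, C) : color(C) } = 1 :- node(N).
:- edge(N, M), assign(N, C), assign(M, C).
"""

ctl = Control(["0"])             # "0" requests all stable models
ctl.add("base", [], program)     # model: add the rule-based encoding
ctl.ground([("base", [])])       # ground: instantiate the variables
ctl.solve(on_model=lambda m: print("Answer:", m))  # solve: enumerate models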
Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the kicking and what was kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) data from grammatical sentences with subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and (ii) data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the data from grammatical sentences with subject-verb number agreement dependencies, I propose a new model that assumes that cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing, suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual-differences model built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that one-fourth of the participants, in both studies, weigh the syntactic cue higher than the number cue in processing reflexive dependencies, while the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, but not all, participants who weigh the syntactic cue higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion. This differential cue-weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
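The intuition behind differential cue weighting can be sketched with a toy scoring rule (a didactic sketch, not the fitted model from Article II): each candidate antecedent is scored by a weighted sum of its cue matches and retrieved with softmax probability.

import numpy as np

def retrieval_probs(match_syn, match_num, w_syn, w_num, noise_scale=1.0):
    # Weighted cue-match scores, turned into retrieval probabilities.
    scores = w_syn * match_syn + w_num * match_num
    e = np.exp(scores / noise_scale)
    return e / e.sum()

# '... the bodybuilder who worked with the trainers injured themselves':
# item 0 = grammatical subject (syntactic match, number mismatch),
# item 1 = distractor (number match only).
match_syn = np.array([1.0, 0.0])
match_num = np.array([0.0, 1.0])
print(retrieval_probs(match_syn, match_num, w_syn=2.0, w_num=1.0))  # subject favored
print(retrieval_probs(match_syn, match_num, w_syn=1.0, w_num=1.0))  # equal weights: tie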
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
The dissertation makes an important theoretical contribution: Sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and the broader memory literature. The modeling work presented here is also methodologically important because for the first time, it demonstrates (i) how the complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how the inferences drawn from the individual-level behavior can be used in theory development.
Housing in metabolic cages can induce a pronounced stress response. Metabolic cage systems imply housing mice on metal wire mesh for the collection of urine and feces in addition to monitoring food and water intake. Moreover, the mice are single-housed, and no nesting, bedding, or enrichment material is provided, which is often argued to have a non-negligible impact on animal welfare due to cold stress. We therefore attempted to reduce stress during metabolic cage housing for mice by comparing an innovative metabolic cage (IMC) with a commercially available metabolic cage from Tecniplast GmbH (TMC) and a control cage. Substantial refinement measures were incorporated into the IMC cage design. Within the frame of a multifactorial approach to severity assessment, parameters such as body weight, body composition, food intake, cage and body surface temperature (thermal imaging), mRNA expression of uncoupling protein 1 (Ucp1) in brown adipose tissue (BAT), fur score, and fecal corticosterone metabolites (CMs) were included. Female and male C57BL/6J mice were single-housed for 24 h in either conventional Macrolon cages (control), the IMC, or the TMC for two sessions. Body weight decreased less in the IMC (females—1st restraint: 6.94%; 2nd restraint: 6.89%; males—1st restraint: 8.08%; 2nd restraint: 5.82%) than in the TMC (females—1st restraint: 13.2%; 2nd restraint: 15.0%; males—1st restraint: 13.1%; 2nd restraint: 14.9%), and the IMC had a higher cage temperature (females—1st restraint: 23.7 °C; 2nd restraint: 23.5 °C; males—1st restraint: 23.3 °C; 2nd restraint: 23.5 °C) than the TMC (females—1st restraint: 22.4 °C; 2nd restraint: 22.5 °C; males—1st restraint: 22.6 °C; 2nd restraint: 22.4 °C). The concentration of fecal corticosterone metabolites in the TMC (females—1st restraint: 1376 ng/g dry weight (DW); 2nd restraint: 2098 ng/g DW; males—1st restraint: 1030 ng/g DW; 2nd restraint: 1163 ng/g DW) was higher than in control cage housing (females—1st restraint: 640 ng/g DW; 2nd restraint: 941 ng/g DW; males—1st restraint: 504 ng/g DW; 2nd restraint: 537 ng/g DW). Our results show the stress potential induced by metabolic cage restraint, which is markedly influenced by the lower housing temperature. The IMC represents a first attempt to reduce cold stress during metabolic cage application, thereby producing more animal-welfare-friendly data.
Essays in public economics
(2023)
This cumulative dissertation uses economic theory and micro-econometric tools and evaluation methods to analyse public policies and their impact on welfare and individual behaviour. In particular, it focuses on policies in two distinct areas that represent fundamental societal challenges in the 21st century: the ageing of society and life in densely populated urban agglomerations. Together, these areas shape important financial decisions in a person's life, impact welfare, and are driving forces behind many of the challenges in today's societies. The five self-contained research chapters of this thesis analyse the forward-looking effects of pension reforms, affordable housing policies, as well as a public transport subsidy and its effect on air pollution.
Control over spin and electronic structure of MoS₂ monolayer via interactions with substrates
(2023)
The molybdenum disulfide (MoS2) monolayer is a semiconductor with a direct bandgap; at the same time, it is a robust and affordable material.
It is a candidate for applications in optoelectronics and field-effect transistors.
MoS2 features a strong spin-orbit coupling, which makes its spin structure promising for realizing the Kane-Mele topological concept, with corresponding applications in spintronics and valleytronics.
From the optical point of view, the MoS2 monolayer features two valleys in the regions of K and K' points. These valleys are differentiated by opposite spins and a related valley-selective circular dichroism.
In this study, we aim to manipulate the MoS2 monolayer spin structure in the vicinity of the K and K' points to explore the possibility of gaining control over the optical and electronic properties.
We focus on two different substrates to demonstrate two distinct routes: a gold substrate to introduce a Rashba effect and a graphene/cobalt substrate to introduce a magnetic proximity effect in MoS2.
The Rashba effect is proportional to the out-of-plane projection of the electric field gradient. Such a strong change of the electric field occurs at the surfaces of high atomic number materials and effectively acts on conduction electrons like an in-plane magnetic field. Molybdenum and sulfur are relatively light atoms; thus, as in many other 2D materials, the intrinsic Rashba effect in the MoS2 monolayer is vanishingly small. However, the proximity of a high atomic number substrate may enhance the Rashba effect in a 2D material, as was previously demonstrated for graphene.
Another way to modify the spin structure is to apply an external magnetic field of high magnitude (several tesla) and cause a Zeeman splitting of the conduction electrons.
However, a similar effect can be achieved via magnetic proximity, which allows us to reduce the external magnetic field significantly or even to zero. The graphene-on-cobalt interface is ferromagnetic and stable during MoS2 monolayer synthesis. Cobalt is not the strongest magnet; stronger magnets may therefore lead to more pronounced effects.
Nowadays, most experimental studies on the dichalcogenides (MoS2 included) are performed on encapsulated heterostructures produced by mechanical exfoliation. While mechanical exfoliation (the scotch-tape method) makes it possible to produce a huge variety of structures, the shape and size of the samples, as well as the distance between layers in heterostructures, are impossible to control reproducibly.
In our study we used molecular beam epitaxy (MBE) methods to synthesise both MoS2/Au(111) and MoS2/graphene/Co systems.
We chose MBE because it is a scalable and reproducible approach that industry may later adopt.
We used graphene/cobalt instead of a bare cobalt substrate because direct contact between an MoS2 monolayer and a metallic substrate may lead to photoluminescence (PL) quenching by the metallic substrate. Graphene and the hexagonal boron nitride monolayer are considered building blocks of a new generation of electronics and are also commonly used as encapsulating materials for PL studies. Moreover, graphene has proven to be a suitable substrate for the MBE growth of transition metal dichalcogenides (TMDCs).
In chapter 1, we start with an introduction to TMDCs. We then review the state of the art of MoS2 monolayer research with respect to application scenarios; synthesis approaches; electronic, spin, and optical properties; and interactions with magnetic fields and magnetic materials.
We briefly touch on the basics of magnetism in solids and move on to discuss various magnetic exchange interactions and the magnetic proximity effect.
Then we describe MoS2 optical properties in more detail. We start from basic exciton physics and its manifestation in the MoS2 monolayer. We consider optical selection rules in the MoS2 monolayer and such properties as chirality, spin-valley locking, and coexistence of bright and dark excitons.
Chapter 2 contains an overview of the employed surface science methods: angle-integrated, angle-resolved, and spin-resolved photoemission; low energy electron diffraction and scanning tunneling microscopy.
In chapter 3, we describe the MoS2 monolayer synthesis details for two substrates: a gold monocrystal with a (111) surface and graphene on a cobalt thin film with Co(111) surface orientation.
The synthesis descriptions are followed by a detailed characterisation of the obtained structures: fingerprints of MoS2 monolayer formation; the MoS2 monolayer symmetry and its relation to the substrate below; and the characterisation of MoS2 monolayer coverage, domain distribution, sizes and shapes, and moiré structures.
In chapter 4, we start our discussion with the MoS2/Au(111) electronic and spin structure. Combining density functional theory (DFT) computations and spin-resolved photoemission studies, we demonstrate that the MoS2 monolayer band structure features an in-plane Rashba spin splitting. This confirms the possibility of manipulating the MoS2 monolayer spin structure via a substrate.
Then we investigate the influence of a magnetic proximity in the MoS2/graphene/Co system on the MoS2 monolayer spin structure.
We focus our investigation on the MoS2 high-symmetry points Γ and K.
First, using spin-resolved measurements, we confirm that the electronic states at the Γ point are spin-split via the magnetic proximity effect. Second, combining spin-resolved measurements and DFT computations for the MoS2 monolayer in the K point region, we demonstrate the appearance of a small in-plane spin polarisation at the valence band top and predict a full in-plane spin polarisation for the conduction band bottom.
We go on to discuss how these findings relate to the MoS2 monolayer optical properties, in particular the possibility of observing dark excitons. Additionally, we speculate on controlling the MoS2 valley energy via magnetic proximity from cobalt.
As graphene is spatially buffering the MoS2 monolayer from the Co thin film, we speculate on the role of graphene in the magnetic proximity transfer by replacing graphene with vacuum and other 2D materials in our computations.
We finish our discussion by investigating the K-doped MoS2/graphene/Co system and the influence of this doping on the electronic and spin structure as well as on the magnetic proximity effect.
In summary, using a scalable MBE approach, we synthesised MoS2/Au(111) and MoS2/graphene/Co systems. We found a Rashba effect in MoS2/Au(111), which proves that the MoS2 monolayer in-plane spin structure can be modified. In MoS2/graphene/Co, an in-plane magnetic proximity effect indeed takes place, which raises the possibility of fine-tuning the MoS2 optical properties via manipulation of the substrate magnetisation.
During the Cenozoic, global cooling and the uplift of the Tian Shan, Pamir, and Tibetan Plateau modified atmospheric circulation and reduced moisture supply to Central Asia. These changes led to aridification of the region during the Neogene. Subsequently, Quaternary glaciations modified the landscape and runoff.
In the Issyk-Kul basin of the Kyrgyz Tian Shan, the sedimentary sequences reflect the development of the adjacent ranges and local climatic conditions. In this work, I reconstruct the late Miocene – early Pleistocene depositional environment, climate, and lake development in the Issyk-Kul basin using facies analyses and stable δ18O and δ13C isotopic records from sedimentary sections dated by magnetostratigraphy and 26Al/10Be isochron burial dating. Also, I present 10Be-derived millennial-scale modern and paleo-denudation rates from across the Kyrgyz Tian Shan and long-term exhumation rates calculated from published thermochronology data. This allows me to examine spatial and temporal changes in surface processes in the Kyrgyz Tian Shan.
In the Issyk-Kul basin, the style of fluvial deposition changed at ca. 7 Ma, and aridification in the basin commenced concurrently, as shown by magnetostratigraphy and the δ18O and δ13C data. Lake formation commenced on the southern side of the basin at ca. 5 Ma, followed by a ca. 2 Ma local depositional hiatus. 26Al/10Be isochron burial dating and paleocurrent analysis show that the Kungey range to the north of the basin grew eastward, leading to a change from fluvial-alluvial deposits to proximal alluvial fan conglomerates at 5-4 Ma in the easternmost part of the basin. This transition occurred at 2.6-2.8 Ma on the southern side of the basin, synchronously with the intensification of the Northern Hemisphere glaciation. The paleo-denudation rates from 2.7-2.0 Ma are as low as long-term exhumation rates, and only the millennial-scale denudation rates record an acceleration of denudation.
This work concludes that the growth of the ranges to the north of the basin led to the creation of a topographic barrier at ca. 7 Ma and subsequent aridification in the Issyk-Kul basin. Increased subsidence and local, tectonically induced river system reorganization on the southern side of the basin enabled lake formation at ca. 5 Ma, while growth of the Kungey range blocked westward-draining rivers and led to sediment starvation and lake expansion. The denudational response of the Kyrgyz Tian Shan landscape was delayed due to aridity, and only substantial cooling during the late Quaternary glacial cycles led to a notable acceleration of denudation. Currently, increased glacier reduction and runoff control the more rapid denudation of the northern slope of the Terskey range compared with the other ranges of the Kyrgyz Tian Shan.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnic or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling's classical segregation model, two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors that have the same type as the agent is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
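A minimal simulation sketch of the jump variant on a toroidal grid (all parameters illustrative; not taken from the thesis) may help make these dynamics concrete:

import random

def neighbors(grid, n, i, j):
    # Occupied cells among the eight neighbors, with wraparound (torus).
    cells = [grid[(i + di) % n][(j + dj) % n]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    return [c for c in cells if c is not None]

def content(grid, n, i, j, tau=0.5):
    nb = neighbors(grid, n, i, j)
    return not nb or sum(c == grid[i][j] for c in nb) / len(nb) >= tau

n = 20
cells = ["A"] * 170 + ["B"] * 170 + [None] * 60   # two types plus vacancies
random.shuffle(cells)
grid = [cells[i * n:(i + 1) * n] for i in range(n)]

for _ in range(30_000):
    i, j = random.randrange(n), random.randrange(n)
    if grid[i][j] is None or content(grid, n, i, j):
        continue
    empties = [(a, b) for a in range(n) for b in range(n) if grid[a][b] is None]
    a, b = random.choice(empties)
    grid[a][b], grid[i][j] = grid[i][j], None     # discontent agent jumps

Even with tolerant agents (tau = 0.5), runs of this process typically end in visibly clustered grids.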
Although the model is well studied, previous research has focused on a random-process point of view. However, it is more realistic to assume that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation in which rational agents strategically choose their locations.
As the first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent and that pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that the graph contains empty vertices and that agents can jump to these vacant vertices if doing so increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the agents' movement to be local. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model, we follow sociological surveys and study non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors, asking the same core game-theoretic questions. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
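A minimal sketch of one round of such a flip process on an Erdős–Rényi graph, measuring the fraction of monochrome edges (the tie-breaking rule and all parameters are illustrative assumptions, not necessarily the thesis' exact process):

    import random
    from itertools import combinations

    def erdos_renyi(n, p):
        """Sample G(n, p) as an adjacency dict."""
        adj = {v: set() for v in range(n)}
        for u, v in combinations(range(n), 2):
            if random.random() < p:
                adj[u].add(v)
                adj[v].add(u)
        return adj

    def flip_process(adj):
        """Assign uniform random types, then let every vertex simultaneously
        adopt the predominant type among its neighbors (ties keep own type)."""
        types = {v: random.choice((0, 1)) for v in adj}
        flipped = {}
        for v, nbrs in adj.items():
            ones = sum(types[u] for u in nbrs)
            zeros = len(nbrs) - ones
            flipped[v] = 1 if ones > zeros else (0 if zeros > ones else types[v])
        return flipped

    def monochrome_fraction(adj, types):
        """Fraction of edges whose endpoints carry the same type."""
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        return sum(types[u] == types[v] for u, v in edges) / len(edges) if edges else 0.0

    adj = erdos_renyi(2000, 0.01)
    print(monochrome_fraction(adj, flip_process(adj)))  # close to 1/2 on G(n, p)

Running this repeatedly illustrates the dichotomy stated above: on Erdős–Rényi graphs the monochrome fraction stays near 1/2, whereas the same experiment on random geometric graphs exceeds 1/2 by a constant.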
An exploration of activity and therapist preferences and their predictors in German-speaking samples
(2023)
According to current definitions of evidence-based practice, patients' preferences play an important role in the psychotherapeutic process and its outcomes. However, whereas a significant body of research has investigated preferences regarding specific treatments, research on preferred activities or therapist characteristics is rare, has investigated heterogeneous aspects with inconclusive results, has lacked validated assessment tools, and has neglected relevant preferences, their predictors, and the perspective of mental health professionals. Therefore, the three studies of this dissertation aimed to address the most fundamental drawbacks in current preference research by providing a validated questionnaire, focusing on activity and therapist preferences, and adding the preferences of psychotherapy trainees. To this end, Paper I reports the translation and validation of the 18-item Cooper-Norcross Inventory of Preferences (C-NIP) in a broad, heterogeneous sample of N = 969 laypeople, resulting in good to acceptable reliabilities and first evidence of validity; however, the original factor structure was not replicated. Paper II assesses the activity preferences of psychotherapists in training using the C-NIP and compares them with the initial laypeople sample. There were significant differences between the samples, with trainees preferring a more patient-directed, emotionally intense, and confrontational approach than laypeople. CBT trainees preferred a more therapist-directed, present-focused, challenging, and less emotionally intense approach than psychodynamic or psychoanalytic trainees. Paper III explores therapist preferences and tests predictors of specific preference choices. For most characteristics, more than half of the participants did not have specific preferences. Results pointed towards congruency effects (i.e., a preference for similar characteristics), especially for members of marginalized groups. The dissertation provides both researchers and practitioners with a validated questionnaire, shows potentially obstructive differences between patients and therapists, and underlines the importance of therapist characteristics for marginalized groups, thereby laying the foundation for future applications and implementations in research and practice.
Distributed decision-making studies the choices made among a group of interacting, self-interested agents. Specifically, this thesis is concerned with the optimal sequence of choices an agent makes as it tries to maximize its achievement on one or multiple objectives in a dynamic environment. The optimization of distributed decision-making is important in many real-life applications, e.g., resource allocation (of products, energy, bandwidth, computing power, etc.) and robotics (heterogeneous agents cooperating on games or tasks), in fields such as vehicular networks, the Internet of Things, and smart grids.
This thesis proposes three multi-agent reinforcement learning algorithms combined with game-theoretic tools to study the strategic interaction between decision makers, using resource allocation in vehicular networks as an example. Specifically, the thesis designs an interaction mechanism based on a second-price auction, incentivizes the agents to maximize multiple short-term and long-term, individual and system objectives, and simulates a dynamic environment with realistic mobility data to evaluate algorithm performance and study agent behavior.
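The defining property of a second-price auction is that the highest bidder wins but pays only the second-highest bid, which makes truthful bidding a dominant strategy. A minimal sketch of this allocation rule for a single resource (agent names and bid values are made up for illustration):

    # Second-price (Vickrey) auction: the highest bidder wins the resource
    # but pays the second-highest bid.
    def second_price_auction(bids):
        """bids: dict mapping agent id -> bid; returns (winner, price)."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        price = bids[ranked[1]] if len(ranked) > 1 else 0.0
        return ranked[0], price

    winner, price = second_price_auction({"a": 3.0, "b": 5.0, "c": 4.0})
    assert winner == "b" and price == 4.0   # b wins but pays c's bid

Under this rule an agent's payment does not depend on its own bid: overbidding risks paying more than the resource is worth, and underbidding risks losing a profitable allocation, so bidding one's true value is optimal regardless of the others' bids.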
Theoretical results show that the mechanism admits Nash equilibria, maximizes social welfare, and yields a Pareto-optimal allocation of resources in a stationary environment. Empirical results show that in the dynamic environment, the proposed learning algorithms outperform state-of-the-art algorithms in single- and multi-objective optimization and generalize very well to significantly different environments. Specifically, with the long-term multi-objective learning algorithm, we demonstrate that by considering the long-term impact of decisions, as well as by incentivizing the agents with a system fairness reward, the agents achieve better results on both individual and system objectives, even when their objectives are private, randomized, and changing over time. Moreover, the agents show competitive behavior to maximize individual payoff when resources are scarce and cooperative behavior in achieving a system objective when resources are abundant; they also learn the rules of the game, without prior knowledge, to overcome disadvantages in initial parameters (e.g., a lower budget).
To address practicality concerns, the thesis also provides several methods for improving computational performance and tests the algorithm on a single-board computer. The results show the feasibility of online training and inference within milliseconds.
There are many potential future topics following this work. 1) The interaction mechanism can be modified into a double auction, eliminating the auctioneer and resembling a completely distributed, ad hoc network. 2) The objectives are assumed to be independent in this thesis; a more realistic assumption might account for correlations between objectives, such as a hierarchy of objectives. 3) The current work limits information-sharing between agents, a setup that befits applications with privacy requirements or sparse signaling; by allowing more information-sharing between the agents, the algorithms can be adapted to more cooperative scenarios such as robotics.
This research focuses on empowering leadership, a leadership style that shares autonomy and responsibilities with followers. Empowering leadership enhances the meaningfulness of work by fostering participation in decision-making, expressing confidence in high performance, and providing autonomy in target setting (Cheong, 2016). I examine how empowering leadership affects followers' reflection. Using data from 528 individuals across 172 teams, I found a positive relationship between empowering leadership and followers' reflection. Followers' reflection, in turn, is negatively associated with followers' withdrawal, which mediates the beneficial effect of empowering leadership on leaders' emotional exhaustion. As for the leaders, I propose that empowering leadership is also negatively related to leaders' emotional exhaustion. This research broadens our understanding of the effects of empowering leadership on both followers and leaders. Moreover, it integrates the empowering leadership, leader emotional exhaustion, and burnout literatures. Overall, empowering leadership strengthens members' reflective attitudes and behaviors, which results in reduced withdrawal (and increased presence and contribution) in teams. Because the members contribute more to the team effort, the leaders experience less emotional exhaustion. Hence, my work not only identifies new ways through which empowering leadership positively affects followers but also shows how these positive effects on followers benefit the leaders' well-being.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner, successively presented with all information about a target language, conjectures which language it believes to be shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful; fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
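As a concrete illustration, the class of all finite languages is explanatorily learnable by a learner that conjectures exactly the data it has seen so far: once the whole (finite) target language has appeared in the presented text, the hypothesis never changes again. A minimal sketch (the encoding of texts and hypotheses is an assumption for illustration, not Gold's original notation):

    # Gold-style learning in the limit for the class of finite languages:
    # the learner's conjecture is the set of elements observed so far.
    def learner(text_prefix):
        """Map a finite prefix of the text to a hypothesis; None marks a pause."""
        return frozenset(x for x in text_prefix if x is not None)

    target = {2, 3, 5}
    text = [2, None, 3, 3, 5, 2, None, 5]               # a text for the target
    hypotheses = [learner(text[:i + 1]) for i in range(len(text))]
    assert hypotheses[4:] == [frozenset(target)] * 4    # converged after seeing 5

This learner is also consistent in the sense discussed below: each conjecture includes exactly the data it is built on.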
Recently, the literature has shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them. The results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
In this thesis, we transfer the map-charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data they are built on. While learners cannot be assumed consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Even further, it is known that learners following various restrictions may be assumed consistent. We contribute to the literature by showing that this is the case for all studied restrictions.
We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they allow one to apply simulation arguments when, for example, comparing two learning paradigms with each other. The literature provides rich groundwork on when learners may be assumed strongly Bc-locking, which we complete for all studied restrictions.
This book provides empirical evidence that all States have a universally binding obligation to adopt national laws and international treaties to protect the marine environment, including the designation of Marine Protected Areas. Chapter by chapter, this obligation is detailed, providing the foundation for holding States responsible for fulfilling it. The fundamentals are analysed in a preliminary chapter, which examines the legally binding sources of the Law of the Sea as well as its historical development to help readers understand the key principles at hand.
The Law of the Sea comprises more than 1,000 instruments and more than 300 regulations concerning marine protection. While the scope of most treaties is limited regarding either species, regions, or activities, one regulation addresses States in all waters: the obligation to protect and preserve the marine environment as stipulated under Art. 192 of the 1982 United Nations Convention on the Law of the Sea (UNCLOS). As this ‘Constitution of the Ocean’ not only contains conventional law but also very broadly reflects pre-existing rules of customary international law, an extensive analysis of all statements made by States in the UN General Assembly, their practices, national laws and regulations, as well as other public testimonials demonstrates that Art. 192 UNCLOS indeed binds the whole community of States as a rule of customary international law with an erga omnes effect. Due to the lack of any objections and its fundamental value for humankind, this regulation can also be considered a new peremptory norm of international law (ius cogens).
While the sovereign equality of States recognises States' freedom to decide if and how to enter into a given obligation, States can also waive this freedom. If States have accepted a legally binding obligation, they are thus bound by it. Concerning the specific content of Art. 192 UNCLOS, a methodical interpretation concludes that only the adoption of legislative measures (national laws and international agreements) suffices to comply with the obligation to protect and preserve the marine environment, which is confirmed by States' practices and the relevant jurisprudence. When applied to a specific geographical area, legislative measures to protect the marine environment concur with the definition of Marine Protected Areas. Nonetheless, as the obligation applies to all waters, the Grotian principle of the freedom of the sea dictates that the restriction of activities through the designation of Marine Protected Areas, on the one hand, must be weighed against the freedoms of other States, on the other. To anticipate the result: while all other rights under UNCLOS are subject to and contingent on other regulations of UNCLOS and international law, only the obligation to protect and preserve the marine environment is granted absolutely and thus outweighs all other interests.
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions, the present doctoral thesis comprises three empirical studies in which I applied innovative, state-of-the-art methodology, including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES on educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
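The strict CA criterion can be illustrated with a first-order autoregressive wage equation (an illustrative sketch of the idea, not the study's exact DSEM specification):

    % Illustrative AR(1) wage dynamics for person i; phi_i is the
    % person-specific autoregressive coefficient.
    \begin{equation}
      w_{i,t} = \mu_i + \phi_i \,(w_{i,t-1} - \mu_i) + \varepsilon_{i,t},
      \qquad \varepsilon_{i,t} \sim \mathcal{N}(0, \sigma^2_i)
    \end{equation}

For |φ_i| < 1, deviations from the person-specific mean μ_i die out over time, whereas φ_i > 1 makes past advantages compound, i.e., wages grow explosively; this is the formal sense in which AR > 1 operationalizes a strict cumulative advantage process.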
The third study of this dissertation, Study III, investigated the role of observation-timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim for equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous-time dynamic structural equation models, the study examines the (seemingly counterintuitive) potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students' academic motivation using empirical data from the German National Educational Panel Study (NEPS). The results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and the recovery of the true intervention-effect parameters.
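The intuition for why IVTs can help is visible in the univariate continuous-time case (a simplified sketch; the study's full model is multivariate): the discrete-time autoregressive effect implied by a continuous-time model is a function of the interval length,

    % Discrete-time autoregression implied by a univariate continuous-time
    % model with drift parameter a (a < 0 for stable processes).
    \begin{equation}
      \phi(\Delta t) = e^{a \, \Delta t}
    \end{equation}

so a panel whose intervals Δt vary within and between persons samples this function at many points rather than at a single one, which can improve the precision with which a, and consequently the intervention effect, is recovered.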