Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, an antenna emits short electromagnetic pulses into the subsurface, where they are reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in a three-dimensional (3D) fashion today.
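As a rough illustration of the travel-time analysis, the depth of a reflector such as the water table can be estimated from its two-way travel time once a wave velocity is assumed. The sketch below assumes a homogeneous medium with an illustrative relative permittivity; the values are not taken from the thesis.

```python
# Sketch: reflector depth from a GPR two-way travel time, assuming a
# homogeneous medium with known relative permittivity (illustrative values).

C = 0.2998  # speed of light in vacuum, m/ns

def reflector_depth(twt_ns, eps_r):
    """Depth of a reflector from its two-way travel time (in ns).

    v = c / sqrt(eps_r); depth = v * t / 2 (two-way time -> one-way distance).
    """
    v = C / eps_r ** 0.5       # wave velocity in the medium, m/ns
    return v * twt_ns / 2.0

# Example: a water-table echo at 60 ns in unsaturated sand (eps_r ~ 5)
depth = reflector_depth(60.0, 5.0)   # roughly 4 m
```

In practice the velocity is itself estimated from the data (e.g. via common-midpoint analysis), so this single-velocity conversion is only the simplest case.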
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they can contribute to shape changes that allow the cell to adapt to its environment and to move and distribute nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Functional analyses between species were conducted for AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined in two genetic knockouts, one RNAi knockdown, and a sudden loss-of-function mutant created by chemically induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality in A. thaliana as well as in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem. DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
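A minimal sketch of what such a relational layout for heterogeneous multi-proxy data could look like is shown below, using Python's built-in SQLite as a stand-in for the PostgreSQL system described above. All table and column names, and the sample rows, are illustrative assumptions, not the thesis's actual schema.

```python
# Illustrative relational layout for multi-proxy lake data (SQLite stands
# in for PostgreSQL; schema and rows are invented for this sketch).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lake (
    lake_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    lat     REAL, lon REAL
);
CREATE TABLE proxy (
    proxy_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,   -- e.g. 'TOC', 'pollen'
    unit     TEXT
);
CREATE TABLE measurement (
    lake_id  INTEGER REFERENCES lake(lake_id),
    proxy_id INTEGER REFERENCES proxy(proxy_id),
    depth_cm REAL NOT NULL,   -- sample depth in the core
    value    REAL NOT NULL
);
""")
con.execute("INSERT INTO lake VALUES (1, 'Lake A', 65.0, 135.0)")
con.execute("INSERT INTO proxy VALUES (1, 'TOC', 'wt%')")
con.execute("INSERT INTO measurement VALUES (1, 1, 12.5, 1.8)")

# Comparative queries across lakes and proxies then reduce to simple joins:
row = con.execute("""
    SELECT l.name, p.name, m.depth_cm, m.value
    FROM measurement m
    JOIN lake l  USING (lake_id)
    JOIN proxy p USING (proxy_id)
""").fetchone()
```

The point of the normalized layout is that every proxy series from every lake lands in one `measurement` table, so cross-lake and cross-proxy comparisons need no per-dataset code.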
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency. This required consistent age calculations across all lakes and proxies within one age-depth modeling workflow. Recognizing the broader implications and potential pitfalls of this step, we developed the LANDO approach ("Linked Age and Depth Modelling"). LANDO integrates multiple age-depth modeling tools into a single, cohesive platform (Jupyter Notebook). Beyond its ability to aggregate results from five established age-depth modeling packages, LANDO uniquely empowers users to filter out implausible model outcomes using robust geoscientific data. Our method is not only novel but also significantly enhances the accuracy and reliability of lake analyses.
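The core depth-to-age conversion can be sketched as a linear interpolation between dated horizons. The real workflow combines full age-depth models via LANDO, so the snippet below is only a conceptual stand-in, with invented dates.

```python
# Conceptual sketch of the depth-to-age conversion that puts all proxies
# on a common (age) axis; dated horizons below are invented examples.
import numpy as np

# core depth (cm) vs calibrated age (yr BP) at dated horizons
depth_dated = np.array([0.0, 50.0, 120.0, 300.0])
age_dated   = np.array([-70.0, 1200.0, 4500.0, 15000.0])

def depth_to_age(depth_cm):
    """Interpolate an age for any sample depth within the dated range."""
    return np.interp(depth_cm, depth_dated, age_dated)

proxy_depths = np.array([10.0, 85.0, 200.0])   # depths of proxy samples
proxy_ages = depth_to_age(proxy_depths)        # ages on the common axis
```

Real age-depth models (Bacon, OxCal, clam, etc.) are nonlinear and carry uncertainties; LANDO's contribution is precisely to reconcile several such models, which this linear sketch does not attempt.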
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, even though the highest temperatures during our observation period do not coincide with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Considering the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (a positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding our dataset might offer even greater assurance in our conclusions.
Floods continue to be the leading cause of economic damage and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proofs of concept to indispensable tools for decision-making at the global, national, and, increasingly, local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address these gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, consistent with the limits of coarse hydrodynamic models reported by others. Substituting downscaling for high-resolution modelling in flood risk model chains can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution inundation maps by post-processing a global model, without the need for expensive modelling or expertise.
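For intuition, the simplest building block of inundation downscaling resamples the coarse water surface elevation (WSE) onto the fine grid and subtracts the fine-resolution terrain, keeping only positive depths. This generic sketch is not the faster algorithm developed in the study, and all numbers are invented.

```python
# Generic inundation-downscaling building block (not the study's method):
# resample coarse WSE to the fine grid, subtract the fine DEM, clip dry cells.
import numpy as np

def downscale_depth(wse_coarse, dem_fine, factor):
    """Nearest-neighbour resample of a coarse WSE grid, minus the fine DEM."""
    wse_fine = np.repeat(np.repeat(wse_coarse, factor, axis=0),
                         factor, axis=1)
    depth = wse_fine - dem_fine
    return np.clip(depth, 0.0, None)   # cells above the water surface are dry

wse = np.array([[10.0, 10.5]])                  # one coarse row of WSE (m)
dem = np.array([[9.0, 9.8, 10.2, 11.0],         # fine DEM, refinement factor 2
                [9.2, 9.5, 10.6, 10.4]])
depth = downscale_depth(wse, dem, 2)
# row 0: [1.0, 0.2, 0.3, 0.0] -- the last fine cell sits above the WSE
```

More sophisticated schemes interpolate the WSE smoothly and enforce hydraulic connectivity, which is where the study's algorithm improves on naive resampling.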
The second study focuses on hazard aggregation and its implications for exposure, investigating the implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Of the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
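The edge-of-inundation bias can be illustrated numerically: averaging a water depth grid over a partially wet block marks the whole block as wet, inflating the inundated area. The toy transect below is an invented example of this mechanism, not data from the study.

```python
# Toy illustration of positive area bias from aggregating water *depth* grids
# near the edge of inundation (values invented).
import numpy as np

cell_area = 1.0                               # fine-cell area, arbitrary units
depth_fine = np.array([0.0, 0.0, 0.4, 1.2])   # edge of inundation: 2 dry cells

area_fine = cell_area * np.count_nonzero(depth_fine > 0)   # true wet area: 2.0

# Aggregate the 4 fine cells into one coarse cell by averaging depths:
depth_coarse = depth_fine.mean()                           # 0.4 m > 0
area_coarse = 4 * cell_area * (depth_coarse > 0)           # whole block "wet"

# area_coarse (4.0) doubles area_fine (2.0): a positive bias that occurs
# exactly in partially wet blocks, i.e. near the inundation edge.
```

Fully wet or fully dry blocks are unaffected, which is why the analytical results localize the bias to the inundation margin.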
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen's inequality, a well-known mathematical result from 1906, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen's proof in this new context, the results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated in a simulation experiment comprising 2 million buildings in Germany, four global flood hazard simulations, and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as "hot spots of risk" in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen's inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
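The Jensen's-inequality argument can be demonstrated in a few lines: for a concave damage function f, the damage evaluated at the aggregated (mean) depth exceeds the mean of per-building damages, f(E[x]) ≥ E[f(x)]. The square-root curve below is an illustrative concave damage function, not one used in the dissertation.

```python
# Jensen's inequality for a concave depth-damage curve: applying a per-asset
# function to aggregated exposure overestimates damage (toy values).
import numpy as np

def damage(depth_m):
    """Illustrative concave depth-damage function, saturating at 1 (total loss)."""
    return np.minimum(np.sqrt(depth_m / 4.0), 1.0)

depths = np.array([0.1, 0.4, 0.9, 2.6])   # per-building flood depths (m)

mean_of_damages = damage(depths).mean()    # per-asset computation, E[f(x)]
damage_of_mean  = damage(depths.mean())    # aggregated-exposure computation, f(E[x])

bias = damage_of_mean - mean_of_damages    # positive: overestimation
```

The sign of the bias flips for convex functions, so the result hinges on the empirically typical concavity of depth-damage curves.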
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale-transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
The European Alps are amongst the regions with the highest glacier mass loss rates over recent decades. Under the threat of ongoing climate change, the ability to predict glacier mass balance changes for water and risk management purposes has become imperative. This raises an urgent need for reliable glacier models. The European Alps host not only glaciers but also numerous caves containing carbonate formations called speleothems. Previous studies have shown that these speleothems also grew during times when the cave was covered by a warm-based glacier. In this thesis, I utilise speleothems from the European Alps as archives of local environmental conditions related to mountain glacier evolution.
Previous studies have shown that speleothem isotope data from the Alps can be strongly affected by in-cave processes. Therefore, part of this thesis focuses on developing an isotope evolution model, which successfully reproduces differences between contemporaneously growing speleothems. The model is used to propose correction approaches for the effects of prior calcite precipitation on speleothem oxygen isotopes (δ18O). Applications to speleothem records from caves outside the Alps demonstrate that the corrected δ18O agrees better with other records and with climate model simulations.
Existing speleothem growth histories and carbon isotope (δ13C) records from Alpine caves located at different elevations are used to infer soil vs. glacier cover and the thermal regime of the glacier over the last glacial cycle. The compatibility with glacier evolution models is statistically assessed. A general agreement between speleothem δ13C-derived information on soil vs. glacier presence and modelled glacier coverage is found. However, glacier retreat during Marine Isotope Stage (MIS) 3 seems to be underestimated by the model. Furthermore, the speleothem data provide evidence of surface temperatures above the freezing point, which are, however, not fully reproduced by the simulations.
The history of glacier cover and its thermal regime is explored for the high-elevation cave system Melchsee-Frutt in the Swiss Alps. Based on new (MIS 9b – MIS 7b, MIS 2) and available speleothem δ13C (MIS 7a – 5d) data, warm-based glacier cover is inferred for MIS 8, 7d, 6, and 2. A short period of cold-based ice cover is also found for early MIS 6. In a detailed multi-proxy analysis (δ18O, δ13C, Mg/Ca and Sr/Ca), millennial-scale changes in the glacier-related source of the water infiltrating into the karst during MIS 8 and 7d are found and linked to Northern Hemisphere climate variability.
While speleothem records from high-elevation cave sites in the Alps exhibit huge potential for glacier reconstruction, several limitations remain, which are discussed throughout this thesis. Ultimately, recommendations are given to further leverage subglacial speleothems as an archive of glacier dynamics.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. For one, this is due to the scarcity of measurement time series long enough to, for example, infer trends. On the other hand, it is difficult, if not impossible, to develop process-based models, due to the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates ('higher export in warmer years') that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained popularity for modeling sediment dynamics, since their black-box nature suits the problem at hand, i.e. relatively well-understood input and output data linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments, and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.), were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves, especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
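The OOOR check itself is straightforward to sketch: flag prediction days whose predictor values fall outside the range seen during training, where a forest-based model such as QRF cannot extrapolate. Predictor names and values below are invented for illustration.

```python
# Sketch of an out-of-observation-range (OOOR) check for a forest-based
# model: flag rows with any predictor outside the training range.
import numpy as np

def oooor_mask(X_train, X_pred):
    """True for prediction rows with at least one out-of-range predictor."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return ((X_pred < lo) | (X_pred > hi)).any(axis=1)

# columns: discharge (m3/s), air temperature (degC) -- invented values
X_train = np.array([[2.0, -5.0], [8.0, 10.0], [5.0, 3.0]])
X_pred  = np.array([[4.0,  2.0],    # inside the training range
                    [9.5,  4.0],    # discharge above the training maximum
                    [3.0, 12.0]])   # temperature above the training maximum

flags = oooor_mask(X_train, X_pred)   # [False, True, True]
```

Counting and inspecting the flagged days, as done in the thesis, indicates where model outputs should be treated with extra caution.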
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater, where tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience, the focus lies on the breakthrough curve (BTC), which is the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins, which diffuse freely inside the axons of neurons. Their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD for initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, where advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting with long immobilisation durations persists in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD.
This regime emerges for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we physically explain with the variance of mobile durations. Finally, we generalize the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ~ t^(-1-mu) with 0<mu<1 and diverging mean immobilisation durations. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters for plotting the displacement distributions and the MSD. We find Gaussian normal diffusion at short times and long-time power-law decay of mobile mass accompanied by anomalous diffusion at long times. The long-time diffusion is subdiffusive in the advection-free setting, while it is either subdiffusive for 0<mu<1/2 or superdiffusive for 1/2<mu<1 when advection is present. In the long-time limit we show equivalence of our model to a bi-fractional diffusion equation.
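The exponential two-state switching at the heart of the MIM is easy to simulate. The Monte Carlo sketch below (illustrative parameters, not fitted to any experiment) alternates exponentially distributed mobile and immobile periods and shows how long immobilisations suppress the MSD well below the free-diffusion value 2Dt.

```python
# Monte Carlo sketch of the two-state mobile-immobile model: only mobile
# periods contribute diffusive displacement (illustrative parameters).
import random

def msd_initially_mobile(D, tau_m, tau_im, t_total, n=20000, seed=1):
    """Average MSD at time t_total for tracers that start in the mobile state."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        t, mobile_time, mobile = 0.0, 0.0, True
        while t < t_total:
            # exponential residence time in the current state
            dur = rng.expovariate(1.0 / (tau_m if mobile else tau_im))
            dur = min(dur, t_total - t)
            if mobile:
                mobile_time += dur
            t += dur
            mobile = not mobile
        # Brownian motion only while mobile: E[x^2 | mobile_time] = 2*D*mobile_time
        acc += 2.0 * D * mobile_time
    return acc / n

msd = msd_initially_mobile(D=1.0, tau_m=1.0, tau_im=10.0, t_total=50.0)
free = 2.0 * 1.0 * 50.0   # free-diffusion MSD at the same time
# msd lies far below `free`: long immobilisations strongly suppress transport
```

At long times the MSD grows with an effective diffusivity reduced by the equilibrium mobile fraction tau_m / (tau_m + tau_im), which this simulation reproduces.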
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I observed that functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets served as a raw material for different sources of adaptation and how these sources affected multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled the interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K able to quantify the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally, I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems and further perspectives towards elaborating an operational biodiversity-ecosystem functioning framework for research and conservation.
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data, which are omnipresent in many real-world scenarios. In this context, inappropriate assumptions about the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest-neighbors methods and prove its statistical validity and power in mixed discrete-continuous data, as well as its asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation on synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
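The constraint-based search mentioned above can be illustrated with a minimal sketch (not the thesis pipeline itself): a PC-style skeleton search deletes an edge as soon as some conditioning set renders its endpoints independent, with the CI test supplied as a plug-in. Here a toy independence oracle stands in for a statistical test such as the proposed kNN-based one.

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    """Illustrative PC-style skeleton search: remove the edge X-Y as soon
    as a conditioning set Z (drawn from X's remaining neighbours) renders
    X and Y conditionally independent. `ci_test(x, y, z)` -> bool."""
    adj = {v: set(nodes) - {v} for v in nodes}
    depth = 0
    while any(len(adj[x] - {y}) >= depth for x in nodes for y in adj[x]):
        for x, y in [(x, y) for x in nodes for y in sorted(adj[x])]:
            if y not in adj[x]:
                continue  # already removed in this sweep
            for z in combinations(sorted(adj[x] - {y}), depth):
                if ci_test(x, y, set(z)):
                    adj[x].discard(y)
                    adj[y].discard(x)
                    break
        depth += 1
    return {frozenset((x, y)) for x in nodes for y in adj[x]}

# Toy oracle for the chain A -> B -> C, where A is independent of C given B.
independencies = {(frozenset("AC"), frozenset("B"))}
ci = lambda x, y, z: (frozenset((x, y)), frozenset(z)) in independencies
skeleton = pc_skeleton(["A", "B", "C"], ci)
assert skeleton == {frozenset("AB"), frozenset("BC")}  # A-C removed
```

In practice the oracle is replaced by a statistical CI test, and an inaccurate test at this step is exactly what propagates into incorrect causal structures.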
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever despite increasing legal requirements for protecting personal data worldwide. That increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: first, legality, i.e. legal conformity of use; second, ethical legitimacy; and third, value creation from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives with a focus on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators seeking to increase user engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach combines a node’s own features with an aggregation of the features of its neighborhood to classify social bot accounts. Our final results indicate a 6% improvement in the area-under-the-curve score in the final predictions through the use of GNNs.
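The aggregation principle described here, combining a node's own features with a summary of its neighbourhood, can be sketched as a single weight-free GraphSAGE-style step in plain Python (an illustration of the idea only, not the trained model from the thesis; feature values and the toy graph are invented):

```python
def sage_layer(features, neighbours):
    """One GraphSAGE-style step: concatenate a node's own feature vector
    with the mean of its neighbours' vectors (identity weights and no
    non-linearity, purely to illustrate the aggregation idea)."""
    out = {}
    for v, x in features.items():
        nbrs = neighbours.get(v, [])
        if nbrs:
            agg = [sum(features[u][i] for u in nbrs) / len(nbrs)
                   for i in range(len(x))]
        else:
            agg = [0.0] * len(x)  # isolated node: nothing to aggregate
        out[v] = list(x) + agg    # [own features | aggregated features]
    return out

# Hypothetical 3-node follower graph with 2-dimensional account features.
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
nbrs = {"a": ["b", "c"], "b": ["a"], "c": []}
emb = sage_layer(feats, nbrs)
assert emb["a"] == [1.0, 0.0, 0.5, 1.0]
```

A real GNN would apply a learned weight matrix and non-linearity to each concatenated vector and stack several such layers; a final classifier over the resulting embeddings then separates bot from human accounts.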
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and adults. Coarticulation refers to the mismatch between abstract phonological units and their seemingly commingled realization in continuous speech. As a process at the intersection of phonology and phonetics, its development across childhood allows for insights into both speech motor and phonological development. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children’s coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One of the reasons for this scarcity is the difficulty of acquiring articulatory data from a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the hitherto largest corpus of articulatory data from children using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range and a thoroughly controlled set of pseudowords allowed for statistically powerful investigations of a process known to be variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured in the horizontal position of the highest point of the tongue dorsum. Based on three studies on a) anticipatory coarticulation towards the left side of the utterance, b) carryover coarticulation towards the right side of the utterance, and c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I deduce the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism), while alleviating the need for measurement of nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a certain metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii have been developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general finding pointed at a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve accuracy of predictions of protein abundances compared to in vitro values for turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single feature models and yielded good prediction results without relying on experimental transcriptomic data. 
The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
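The core idea of the protein-constrained frameworks described above, that sustaining a flux requires at least E = v / k_cat units of enzyme, can be sketched as a one-line cost function (all numbers below are hypothetical, not taken from the thesis):

```python
def min_enzyme_demand(flux, kcat, mol_weight):
    """Upkeep cost of a metabolic flux in protein-constrained models:
    the enzyme level E must satisfy flux <= kcat * E, so the minimum
    enzyme mass is (flux / kcat) * molecular weight.
    Units here: flux in mmol/gDW/h, kcat in 1/h, mol_weight in g/mmol."""
    return flux / kcat * mol_weight

# Hypothetical reaction: v = 10 mmol/gDW/h, kcat = 100 /s = 360000 /h,
# enzyme of 40 kDa = 40 g/mmol -> roughly 1.1 mg enzyme per gDW.
demand = min_enzyme_demand(10.0, 360000.0, 40.0)
assert abs(demand - 10.0 / 360000.0 * 40.0) < 1e-12
```

Summing such terms over all reactions and bounding the total by the measured proteome mass is what replaces explicit nutrient-uptake constraints; using in vivo apparent catalytic rates instead of in vitro turnover numbers changes only the kcat parameter in this expression.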
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
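As background, the basic rectangle subdivision underlying any treemap (here the classic slice-and-dice scheme, not the specific 3D-embedded layouts developed in the thesis) can be sketched as:

```python
def slice_layout(weights, x, y, w, h, horizontal=True):
    """Lay out weighted items along one axis of the rectangle (x, y, w, h).
    Returns one (x, y, w, h) sub-rectangle per item; alternating the split
    axis per hierarchy level yields the classic slice-and-dice treemap."""
    total = float(sum(weights))
    rects, offset = [], 0.0
    for wt in weights:
        frac = wt / total  # item's share of the parent rectangle's area
        if horizontal:
            rects.append((x + offset * w, y, frac * w, h))
        else:
            rects.append((x, y + offset * h, w, frac * h))
        offset += frac
    return rects

# Three modules with sizes 1, 2, 1 inside a 100x50 parent rectangle.
rects = slice_layout([1, 2, 1], 0.0, 0.0, 100.0, 50.0)
assert rects[1] == (25.0, 0.0, 50.0, 50.0)   # middle item gets half the width
assert sum(r[2] for r in rects) == 100.0     # widths tile the parent exactly
```

A 3D-embedded treemap keeps this footprint computation and adds a height (and further visual variables) per rectangle, which is where the attribute-mapping possibilities discussed above come in.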
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle, influencing numerous hydrological processes including hydrometeorological extremes. Soil moisture influences flood generation processes, and during droughts, when precipitation is absent, it provides plants with transpirable water, thereby sustaining plant growth and survival in agriculture and natural ecosystems.
Soil moisture stored in deeper soil layers, e.g. below 100 cm, is of particular importance for providing plant-transpirable water during dry periods. Not being directly connected to the atmosphere and lying outside the soil layers with the highest root densities, water in these layers is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national adaptation strategies to a changing climate. Yet, soil moisture is highly variable in time and space, which challenges its monitoring on spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows for the estimation of soil moisture at relevant spatio-temporal scales of several hectares at a high, subdaily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because they have almost the same mass as the hydrogen nucleus, neutrons lose kinetic energy upon collision and are subsequently absorbed when they reach low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Assuming that, in most terrestrial ecosystems, the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with observed neutron intensities.
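The inverse relationship mentioned above is commonly parameterized with the calibration function of Desilets et al. (2010); a minimal sketch (coefficient values as commonly published; the dry-soil count rate N0 is site-specific and the value below is purely illustrative):

```python
def neutrons_to_soil_moisture(n, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Standard CRNS calibration function (Desilets et al., 2010):
    soil moisture theta as an inverse function of the corrected
    epithermal neutron count rate n, with n0 the count rate over
    dry soil. Valid only for n/n0 > a1."""
    return a0 / (n / n0 - a1) - a2

# Fewer detected neutrons imply more hydrogen, i.e. wetter soil.
theta_dry = neutrons_to_soil_moisture(800.0, 1000.0)
theta_wet = neutrons_to_soil_moisture(600.0, 1000.0)
assert theta_wet > theta_dry > 0.0
```

The advanced transfer functions evaluated in the thesis refine this relationship; the raw count rate must additionally be corrected for air pressure, air humidity and incoming cosmic-ray intensity before this conversion, which is exactly the correction problem addressed below.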
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity on observed epithermal and thermal neutron signals and their correction. Recently introduced advanced neutron-to-soil moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess the impact of heterogeneous site conditions and to correct derived soil moisture estimates accordingly. Despite its capability of measuring representative averages of soil moisture at the field scale, the integration depth of CRNS is limited to the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers as well, increasing the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
By addressing these challenges, this thesis helps to close knowledge gaps and to answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and through in situ estimates using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet undiscovered applications of CRNS but also opens a new field of CRNS research. Consequently, this thesis advances the methodological framework of CRNS for above-ground and downhole applications. Although the necessity of further research to fully exploit the potential of CRNS must be emphasised, this thesis contributes to current hydrological research and not least to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor.
In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission in the Galactic halo. In gamma rays, these bubbles have been termed Fermi bubbles, with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS survey and the Planck satellite and showed a clear spatial overlap. The X-ray counterparts of the Fermi bubbles were named eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large extended Galactic Halo Bubbles (GHB) and have stimulated interest in the hitherto little-explored Galactic halo.
In this thesis, a new toy model (GHB model) for the magnetic field and non-thermal electron distribution in the Galactic halo has been proposed. The new toy model has been used to produce polarised synchrotron emission sky maps. Chi-square analysis was used to compare the synthetic sky maps with the Planck 30 GHz polarised sky maps. The obtained constraints on the strength and azimuthal height were found to be in agreement with the S-PASS radio observations.
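The chi-square comparison between synthetic and observed sky maps can be sketched generically (flat pixel lists and a per-pixel uncertainty are assumptions of this illustration, not the thesis implementation):

```python
def chi_square(model, observed, sigma):
    """Pixel-wise chi-square statistic between a synthetic sky map and an
    observed one: sum over pixels of ((model - obs) / sigma)^2. Scanning
    this statistic over model parameters yields best-fit values and
    confidence ranges."""
    return sum((m - o) ** 2 / s ** 2 for m, o, s in zip(model, observed, sigma))

obs = [1.0, 2.0, 3.0]
assert chi_square(obs, obs, [0.5] * 3) == 0.0          # perfect match
assert chi_square([1.5, 2.0, 3.0], obs, [0.5] * 3) == 1.0  # one 1-sigma pixel
```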
The upper, lower and best-fit values obtained from the above chi-square analysis were used to generate three separate toy models. These three models were used to propagate ultra-high-energy cosmic rays. This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival direction sky maps. The simulated arrival direction sky maps were found to be consistent with the hotspots of Centaurus A and NGC 253 as seen in the observed arrival direction sky maps provided by the Pierre Auger Observatory (PAO).
The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by the PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The simulations showed a suppression of the dipole caused by the varying diffusion coefficient. The results could also be compared with an analytical analogy from electrostatics. The simulations of the extragalactic dipole suppression were in agreement with similar studies carried out for Galactic cosmic rays.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that allows this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features the complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This serves as a generalization of the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. A dichotomy for this problem is shown, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not have an impact on the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that for a connected bipartite graph that is not complete, #_{p}Hom^{bip}[H] is #_{p}P-hard. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
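The quantity at the heart of #_{p}Hom[H], the number of homomorphisms from an input graph G to the fixed graph H counted modulo p, can be computed by brute force for small instances (an illustrative sketch, exponential in |V(G)|, whereas the dichotomy concerns the existence of efficient algorithms):

```python
from itertools import product

def count_hom_mod_p(G, H, p):
    """Brute-force count of graph homomorphisms from G to H, modulo p.
    Graphs are (vertex list, edge collection); undirected edges of H are
    stored as frozensets so both orientations are checked at once."""
    vg, eg = G
    vh, eh = H
    count = 0
    for img in product(vh, repeat=len(vg)):
        phi = dict(zip(vg, img))  # candidate vertex map V(G) -> V(H)
        if all(frozenset((phi[u], phi[v])) in eh for (u, v) in eg):
            count += 1  # phi maps every edge of G onto an edge of H
    return count % p

# K2 -> K3: each of the 3 edges of K3, in both directions, gives 6 homs.
K2 = (["a", "b"], [("a", "b")])
K3 = ([0, 1, 2], {frozenset((i, j)) for i in range(3) for j in range(3) if i != j})
assert count_hom_mod_p(K2, K3, 5) == 1  # 6 mod 5
```

Note that a map collapsing an edge of G to a single vertex of H is rejected unless H has a loop there, matching the distinction between reflexive and irreflexive target graphs in the dichotomy criterion.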
The reliance on fossil fuels has resulted in an abnormal increase in the concentration of greenhouse gases, contributing to the global climate crisis. In response, a rapid transition to renewable energy sources has begun, in which lithium-ion batteries play a crucial role in the green energy transformation. However, concerns regarding the availability and geopolitical implications of lithium have prompted the exploration of alternative rechargeable battery systems, such as sodium-ion batteries. Sodium is significantly more abundant and more homogeneously distributed in the crust and seawater, making it easier and less expensive to extract than lithium. However, because the behavior of their components is not yet fully understood, sodium-ion batteries are not yet sufficiently advanced to take the place of lithium-ion batteries. Specifically, sodium exhibits a more metallic character and a larger ionic radius, resulting in an ion storage mechanism different from that utilized in lithium-ion batteries. Innovations in synthetic methods, post-treatments, and interface engineering clearly demonstrate the significance of developing high-performance carbonaceous anode materials for sodium-ion batteries. The objective of this dissertation is to present a systematic approach for fabricating efficient, high-performance, and sustainable carbonaceous anode materials for sodium-ion batteries. This involves a comprehensive investigation of different chemical environments as well as post-modification techniques.
This dissertation focuses on three main objectives. Firstly, it explores the significance of post-synthetic methods in designing interfaces. A conformal carbon nitride coating is deposited through chemical vapor deposition on a carbon electrode as an artificial solid-electrolyte interface layer, resulting in improved electrochemical performance. The interaction between the carbon nitride artificial interface and the carbon electrode enhances initial Coulombic efficiency, rate performance, and total capacity. Secondly, a novel process for preparing sulfur-rich carbon as a high-performing anode material for sodium-ion batteries is presented. The method involves using an oligo-3,4-ethylenedioxythiophene precursor for a high-sulfur-content hard carbon anode to investigate the effect of the sulfur heteroatom on the electrochemical sodium storage mechanism. By optimizing the condensation temperature, a significant transformation in the materials’ nanostructure is achieved, leading to improved electrochemical performance. The use of in-operando small-angle X-ray scattering provides valuable insights into the interaction between micropores and sodium ions during the electrochemical processes. Lastly, the development of high-capacity hard carbon, derived from 5-hydroxymethylfurfural, is examined. This carbon material exhibits exceptional performance at both low and high current densities. Extensive electrochemical and physicochemical characterizations shed light on the sodium storage mechanism with respect to the chemical environment, establishing the material’s stability and potential applications in sodium-ion batteries.
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization or trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases. Therefore, plant-based diets rich in health-promoting plant secondary metabolites are desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated, and the influence of cultivation conditions was assessed with the aim of improving the nutritional quality of halophytes for human consumption. For this purpose, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The halophyte species were selected according to their salt tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid content. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for saline indoor farming of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and the light regime in indoor farming and greenhouse cultivation was studied. It was shown that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls. Furthermore, glucosinolate and nitrate content were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. It was shown that UVB light affects the growth, phenotype and metabolite profile of halophytes and that the UVB response is species-specific. Furthermore, a modulation of carotenoid content in S. europaea could be achieved to enhance health-promoting properties and thus improve nutritional quality. This was shown to be dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated. Here it was revealed that carotenoid accumulation is related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes, produced in a saline indoor farming system, as alternative vegetables for future diets that could contribute to ensuring food security. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into it. Since the nutritional properties have been shown to be influenced by salt, light regime, and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
The Arctic is a hot spot of ongoing global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than the global average. This amplified warming of the Arctic and the associated rapid changes in its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget associated with air-mass advection, which affect seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Even though the mechanisms described above by which storms and cyclones impact Arctic sea ice are known in principle, a statistical quantification of these effects is lacking. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. To further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled using various data sets, including atmospheric and oceanic reanalysis data, a coupled model simulation, and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones significantly impact SIC in the Atlantic Arctic Ocean from autumn to spring, while impacts in summer are mostly not significant. The strength and sign (SIC-decreasing or SIC-increasing) of the cyclone impacts strongly depend on the considered daily time scale and on the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents, and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter: a typical pattern was found in which dynamic sea-ice changes initially dominate, followed by enhanced thermodynamic ice growth after the cyclone passage. This enhanced ice growth most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes vary strongly from region to region and from month to month. The strongest trends are found in autumn in the Barents and Kara Seas, where the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following cyclone passage have particularly weakened in the Barents Sea in autumn; as a consequence, the previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decreases in mean sea-ice concentration and thickness) and in near-surface air temperature are most important for changed cyclone impacts on SIC, whereas changes in cyclone properties (i.e., intensity) do not play a significant role.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. These must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks; its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. The analysis further highlights the rare implementation of multi-agent systems and semi-heterarchical systems in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimization complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research efforts and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
Starting from the observation that current digitalization research recognizes the ambivalence of digitalization but does not make it the subject of its analyses, this cumulative dissertation focuses on the ambivalent dichotomy of potentials and problems that accompanies the digital transformation of organizations. Across six publications, this tension-laden dichotomy is examined from a systems-theoretical perspective on organizations along three ambivalent relationships. First, with regard to the relationship between digitalization and post-bureaucracy, it becomes clear that digital transformations have the potential to facilitate post-bureaucratic ways of working; at the same time, the problem arises that consensus-based post-bureaucratic structures impede digitalization initiatives, since these depend on a multitude of decisions. Second, regarding the ambivalent relationship between digitalization and interconnectedness, organization-wide cooperation is enabled on the one hand, while on the other hand the danger of digital communication of dissent emerges. In the third relationship, between digitalization and gender, new digital technologies suggest a potential for gender inclusion, while at the same time the problem of programmed-in gender biases arises, which often exacerbate discrimination. Juxtaposing potentials and problems not only makes the ambivalence of organizational digitalization analyzable and comprehensible; it also emerges that digital transformations entail a double formalization: organizations are not only confronted with the adjustments to formal structures typical of reforms, but must additionally make formal decisions about introducing and retaining technology and establish formal solutions in order to respond to unforeseen potentials and problems.
The aim of the dissertation is to provide an analytically generalized heuristic with which the achievements and opportunities of digital transformations can be identified, while at the same time their relationship to the simultaneously emerging challenges and follow-on problems can be explained.
The thesis "Die Bekämpfung transnationaler Kriminalität im Kontext fragiler Staatlichkeit" ("Combating Transnational Crime in the Context of Fragile Statehood") is devoted to the phenomenon of organized-crime actors operating across borders who exploit the fact that some internationally recognized governments exercise only insufficient control over parts of their state territory. It examines why the legal framework created by the international community to combat transnational crime in the context of these fragile states fails to contribute, or contributes only deficiently, to combating such crime.
After first clarifying what the study understands by the term transnational crime, the international legal framework for combating it is described on the basis of five exemplary transnational crime phenomena. The main part of the study then investigates why this legal framework created by the international community makes hardly any contribution to effectively countering such crime, particularly in fragile states. It is found that the genesis of the international legal framework results in a legitimacy deficit. The inadequate consideration of the specific realities of life found in many fragile states likewise has a negative effect on the enforceability of the international legal framework. It is shown that differing levels of human rights protection lead to norm collisions in international cooperation between states, particularly in the context of international mutual legal assistance. Since fragile states in particular are often characterized by a deficient human rights situation, this frequently poses challenges for consolidated states cooperating with them. Finally, it is shown that extraterritorial jurisdiction, and thus the criminal prosecution of transnational offenses by third states, is also accompanied by legal and practical problems.
A final chapter of the thesis examines whether an alternative prosecution mechanism should be created to pursue transnational crimes committed from within fragile states, and how such an alternative prosecution mechanism should be designed in concrete terms.
Digital Fashion
(2024)
The virtual garment, as a medial and sociocultural everyday phenomenon of the present, is the subject of this interdisciplinary study. At the interface between people, media, and fashion, the virtual garment can be experienced only on a screen, in unreal places and synthetic situations. Within this dispositif, concepts of the body, conventions of representation, patterns of social action, and communication strategies can be identified that, while based on a radical detachment from textile material, nevertheless cannot do without very concrete references to it. This leads to new approaches to engaging with garments, which must now be regarded as visualizations of bundled data packets. The dynamic development of new manifestations and their seamless integration into traditional business models and existing fashion concepts make it necessary to determine their position, particularly with regard to current sustainability discourses around immaterial products. For this study, the processes behind the images (economic orientation, production, use, and reception) provide the methodological approach for the analysis. Using a typologizing set of instruments, a set of research-guiding examples is compiled from the multitude and variety of representations; a multi-stage context analysis then leads to a conceptual definition of the virtual garment and to five contextual units. Using the example of the virtual garment, this study traces technological, societal, and social change and elaborates its significance for future developments in fashion. The study thereby contributes to contemporary fashion research in media studies and the social sciences.
This thesis presents a comprehensive exploration of the application of DNA origami nanofork antennas (DONAs) in the field of spectroscopy, with a particular focus on the structural analysis of Cytochrome C (CytC) at the single-molecule level. The research encapsulates the design, optimization, and application of DONAs in enhancing the sensitivity and specificity of Raman spectroscopy, thereby offering new insights into protein structures and interactions.
The initial phase of the study involved the meticulous optimization of DNA origami structures. This process was pivotal in developing nanoscale tools that could significantly enhance the capabilities of Raman spectroscopy. The optimized DNA origami nanoforks, in both dimer and aggregate forms, demonstrated an enhanced ability to detect and analyze molecular vibrations, contributing to a more nuanced understanding of protein dynamics.
A key aspect of this research was the comparative analysis between the dimer and aggregate forms of DONAs. This comparison revealed that while both configurations effectively identified oxidation and spin states of CytC, the aggregate form offered a broader range of detectable molecular states due to its prolonged signal emission and increased number of molecules. This extended duration of signal emission in the aggregates was attributed to the collective hotspot area, enhancing overall signal stability and sensitivity.
Furthermore, the study delved into the analysis of the Amide III band using the DONA system. Observations included a transient shift in the Amide III band's frequency, suggesting dynamic alterations in the secondary structure of CytC. These shifts, indicative of transitions between different protein structures, were crucial in understanding the protein’s functional mechanisms and interactions.
The research presented in this thesis not only contributes significantly to the field of spectroscopy but also illustrates the potential of interdisciplinary approaches in biosensing. The use of DNA origami-based systems in spectroscopy has opened new avenues for research, offering a detailed and comprehensive understanding of protein structures and interactions. The insights gained from this research are expected to have lasting implications in scientific fields ranging from drug development to the study of complex biochemical pathways. This thesis thus stands as a testament to the power of integrating nanotechnology, biochemistry, and spectroscopic techniques in addressing complex scientific questions.
Mantodea, commonly known as mantids, have captivated researchers owing to their enigmatic behavior and ecological significance. This order comprises a diverse array of predatory insects, boasting over 2,400 species globally and inhabiting a wide spectrum of ecosystems. In Iran, the mantid fauna displays remarkable diversity, yet numerous facets of this fauna remain poorly understood, with a significant dearth of systematic and ecological research. This substantial knowledge gap underscores the pressing need for a comprehensive study to advance our understanding of Mantodea in Iran and its neighboring regions.
The principal objective of this investigation was to delve into the ecology and phylogeny of Mantodea within these areas. To accomplish this, our research efforts concentrated on three distinct genera within Iranian Mantodea. These genera were selected due to their limited existing knowledge base and feasibility for in-depth study. Our comprehensive methodology encompassed a multifaceted approach, integrating morphological analysis, molecular techniques, and ecological observations.
Our research encompassed a comprehensive revision of the genus Holaptilon, resulting in the description of four previously unknown species. This extensive effort substantially advanced our understanding of the ecological roles played by Holaptilon and refined its systematic classification. Furthermore, our investigation into Nilomantis floweri expanded its known distribution range to include Iran. By conducting thorough biological assessments, genetic analyses, and ecological niche modeling, we obtained invaluable insights into distribution patterns and genetic diversity within this species. Additionally, our research provided a thorough comprehension of the life cycle, behaviors, and ecological niche modeling of Blepharopsis mendica, shedding new light on the distinctive characteristics of this mantid species. Moreover, we contributed essential knowledge about parasitoids that infect mantid ootheca, laying the foundation for future studies aimed at uncovering the intricate mechanisms governing ecological and evolutionary interactions between parasitoids and Mantodea.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact of PMem on data management systems, in this thesis we explore its use for such systems. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. Using stream processing engines as an example, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations, allowing it to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups, using Compute Express Link (CXL) as an example.
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
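The DRAM-index/persistent-data split behind hybrid key-value stores of this kind can be illustrated with a small sketch. The following toy store keeps a volatile in-memory index (standing in for DRAM) that maps keys to offsets in an append-only durable log (standing in for PMem); class name, record layout, and file handling are invented for the example and do not reflect Viper's actual design.

```python
import os
import struct
import tempfile

class HybridKVStore:
    """Toy sketch of a hybrid key-value store: a volatile in-memory dict
    maps each key to the offset of its value in an append-only persistent
    log. On restart, the index is rebuilt by scanning the log."""

    HEADER = struct.Struct("<II")  # key length, value length

    def __init__(self, path):
        self.path = path
        self.index = {}  # volatile: key -> (value offset, value length)
        self.log = open(path, "ab+")
        self._recover()  # rebuild the volatile index from the durable log

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        record_start = self.log.tell()
        # record layout: [key_len][value_len][key][value]
        self.log.write(self.HEADER.pack(len(key), len(value)))
        self.log.write(key)
        self.log.write(value)
        self.log.flush()  # durability point (fsync omitted for brevity)
        self.index[key] = (record_start + self.HEADER.size + len(key),
                           len(value))

    def get(self, key: bytes):
        entry = self.index.get(key)
        if entry is None:
            return None
        offset, length = entry
        self.log.seek(offset)
        return self.log.read(length)

    def _recover(self):
        self.log.seek(0)
        while True:
            header = self.log.read(self.HEADER.size)
            if len(header) < self.HEADER.size:
                break  # end of log
            key_len, value_len = self.HEADER.unpack(header)
            key = self.log.read(key_len)
            offset = self.log.tell()
            self.log.seek(value_len, os.SEEK_CUR)  # skip value bytes
            self.index[key] = (offset, value_len)

# Demo: writes survive a "restart" because recovery rescans the log
path = os.path.join(tempfile.mkdtemp(), "kv.log")
store = HybridKVStore(path)
store.put(b"sensor:1", b"21.5")
store.put(b"sensor:2", b"19.8")
assert store.get(b"sensor:1") == b"21.5"
```

The design choice mirrored here is that lookups touch the fast volatile index first and only one read hits the persistent tier, while every write is appended durably before the index is updated.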
Efraim Frisch (1873–1942) and Albrecht Mendelssohn Bartholdy (1874–1936) were, in the classical age of the intellectuals, magazine entrepreneurs (among other things) and founders of the little magazines Der Neue Merkur (1914–1916/1919–1925) and Europäische Gespräche (1923–1933). They stand (and not only through their journals) for one of the attempts, repeatedly undertaken in modernity, to activate the resources opened up by the Enlightenment (democratic republicanism and universal, equal rights for all people) in the confidence that they could be realized globally. During the Weimar Republic they belonged to those republicans "who took Weimar seriously as a symbol and strove tenaciously and courageously to give the ideal concrete content" (Peter Gay). Their hitherto untransmitted example fits into the history of democracy in European modernity, the history of international societal relations, and the history of the self-assertion of intellectual autonomy.
Spanning the period from 1900 to around 1940 across conventional caesuras, the study offers substantial insights into the biographies of Frisch and Mendelssohn Bartholdy, into the Franco-German/European-transatlantic world of the little (literary-political) magazines of the early twentieth century, and into the media-intellectual field of the late Empire and the Weimar Republic in its humanist and democratic-republican tendency. It also contains new findings on the history of the 'Heidelberger Vereinigung' (the working group for a politics of law) around Prince Max von Baden, on the German peace delegation at Versailles in 1919 and its afterlife in Hamburg, on the Handbuch der Politik, and on the first official publication of records by the Auswärtiges Amt, the Große Politik der Europäischen Kabinette 1871–1914. Finally, it covers the efforts of the 'internationalists' of the 1920s to bring about an effective outlawing of wars of aggression.
Astrophysical shocks, driven by explosive events such as supernovae, efficiently accelerate charged particles to relativistic energies. The majority of these shocks occur in collisionless plasmas, where the energy transfer is dominated by particle-wave interactions. Strong nonrelativistic shocks found in supernova remnants are plausible sites of galactic cosmic-ray production, and the observed emission indicates the presence of nonthermal electrons. To participate in the primary mechanism of energy gain, Diffusive Shock Acceleration, electrons must have highly suprathermal energies, implying a need for very efficient pre-acceleration. This poorly understood aspect of shock acceleration theory is known as the electron injection problem. Studying electron-scale phenomena requires fully kinetic particle-in-cell (PIC) simulations, which describe collisionless plasma from first principles.
Most published studies consider a homogeneous upstream medium, but turbulence is ubiquitous in astrophysical environments and is typically driven at magnetohydrodynamic scales, cascading down to kinetic scales. For the first time, I investigate how preexisting turbulence affects electron acceleration at nonrelativistic shocks using the fully kinetic approach. To accomplish this, I developed a novel simulation framework that allows the study of shocks propagating in turbulent media: slabs of turbulent plasma are simulated separately and then continuously inserted into a shock simulation. This demands matching of the plasma slabs at the interface; a new procedure for matching electromagnetic fields and currents prevents numerical transients, and the plasma evolves self-consistently. The versatility of this framework has the potential to render simulations more consistent with turbulent systems in various astrophysical environments.
In this thesis, I present the results of 2D3V PIC simulations of high-Mach-number nonrelativistic shocks with preexisting compressive turbulence in an electron-ion plasma. The chosen amplitudes of the density fluctuations (≲15%) are consistent with in-situ measurements in the heliosphere and the local interstellar medium. I explored how these fluctuations impact the dynamics of upstream electrons, the driving of plasma instabilities, and electron heating and acceleration. My results indicate that while the presence of turbulence enhances variations in the upstream magnetic field, their levels remain too low to significantly influence the behavior of electrons at perpendicular shocks. The situation is different at oblique shocks, however. An external magnetic field inclined at an angle of roughly 50° ≲ θBn ≲ 75° relative to the shock normal allows fast electrons to escape toward the upstream region. An extended electron foreshock region is formed, in which these particles drive various instabilities. Results for an oblique shock with θBn = 60° propagating in preexisting compressive turbulence show that the foreshock becomes significantly shorter and the shock-reflected electrons have higher temperatures. Furthermore, the energy spectrum of downstream electrons shows a well-pronounced nonthermal tail that follows a power law with an index of up to -2.3.
The methods and results presented in this Thesis could serve as a starting point for more realistic modeling of interactions between shocks and turbulence in plasmas from first principles.
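As an aside on how a spectral index such as the -2.3 reported above is typically extracted, the sketch below fits a power law to a synthetic, noiseless spectrum via least squares in log-log space. Function name, data, and fitting approach are illustrative assumptions, not the analysis pipeline used in the thesis.

```python
import math

def power_law_index(energies, counts):
    """Estimate the index s of a spectrum N(E) ~ E^s by an ordinary
    least-squares fit in log-log space. Illustrative only: real PIC
    spectra are noisy and the fit range must be chosen with care."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var  # slope in log-log space = power-law index

# Synthetic noiseless tail with index -2.3 on a log-spaced energy grid
energies = [10 ** (0.1 * i) for i in range(1, 21)]
counts = [e ** -2.3 for e in energies]
assert abs(power_law_index(energies, counts) + 2.3) < 1e-9
```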
This work analyzed functional and regulatory aspects of the so far little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP was specifically localized in non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
Through analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full-deletion mutant, this work clearly showed that loss of EPSINOID2 leads to an increase in root hair density. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p; interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming it as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation: treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif associated with degradation through a ubiquitin/proteasome-dependent pathway was identified in the EPSINOID2 sequence. In line with tight dose regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes including low-cost, light-weight, aesthetically pleasing appearance, and flexibility. Different from traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels together.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulations to quantify the energetic offsets between CT states and excitons, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. There is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT state.
The second focus of the thesis was technical. The influence of optical artifacts in differential absorption spectroscopy upon changes in sample configuration and active-layer thickness was studied. It is exemplified and discussed thoroughly and systematically, in terms of optical simulations and experiments, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. At the end of this study, a generalized methodology based on an inverse optical transfer matrix formalism is provided to correct spectra and decay dynamics affected by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
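The inverse transfer-matrix correction mentioned above builds on the standard forward formalism for thin-film optics. The following is a generic textbook sketch of that forward calculation at normal incidence, not the code developed in the thesis; function name and parameters are chosen for the example.

```python
import cmath
import math

def reflectance(n0, layers, ns, wavelength):
    """Normal-incidence reflectance of a thin-film stack computed with the
    standard characteristic (transfer) matrix method. `layers` is a list
    of (refractive index, thickness) pairs between ambient (n0) and
    substrate (ns); thickness and wavelength share the same unit."""
    m = [[1.0, 0.0], [0.0, 1.0]]  # start from the identity matrix
    for n, d in layers:
        delta = 2 * math.pi * n * d / wavelength  # phase thickness
        layer = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
                 [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        # multiply the accumulated matrix by this layer's matrix
        m = [[m[0][0] * layer[0][0] + m[0][1] * layer[1][0],
              m[0][0] * layer[0][1] + m[0][1] * layer[1][1]],
             [m[1][0] * layer[0][0] + m[1][1] * layer[1][0],
              m[1][0] * layer[0][1] + m[1][1] * layer[1][1]]]
    b = m[0][0] + m[0][1] * ns
    c = m[1][0] + m[1][1] * ns
    r = (n0 * b - c) / (n0 * b + c)  # amplitude reflection coefficient
    return abs(r) ** 2

# Sanity checks: bare glass reflects ~4%; an ideal quarter-wave coating
# with n = sqrt(n0 * ns) suppresses reflection almost completely.
assert abs(reflectance(1.0, [], 1.5, 550.0) - 0.04) < 1e-9
n_ar = math.sqrt(1.5)
assert reflectance(1.0, [(n_ar, 550.0 / (4 * n_ar))], 1.5, 550.0) < 1e-6
```

An inverse approach, as described in the abstract, would run a formalism of this kind backwards to deconvolve interference effects from measured differential spectra.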
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporation of this phenomenon, which causes strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy that targets generalizability to ruptures beyond those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability that an earthquake will cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
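A probabilistic classifier of this general kind can be sketched minimally as a logistic model; the features (report count and mean pseudo-intensity) and the coefficients below are illustrative placeholders, not the calibrated model from the thesis:

```python
import math

def impact_probability(n_reports, mean_intensity, w0=-6.0, w1=0.8, w2=1.2):
    """Logistic model: probability that an event causes considerable impact.

    The feature choice and weights are hypothetical, chosen only to
    illustrate the shape of such a model.
    """
    z = w0 + w1 * math.log10(max(n_reports, 1)) + w2 * mean_intensity
    return 1.0 / (1.0 + math.exp(-z))

# Few, weak-shaking reports -> low probability of considerable impact.
p_low = impact_probability(n_reports=20, mean_intensity=2.0)
# Many strong-shaking reports -> high probability of considerable impact.
p_high = impact_probability(n_reports=5000, mean_intensity=6.5)
```

The logistic form is a natural choice here because it maps aggregate report statistics onto a calibrated probability that decision makers can threshold against their own risk tolerance.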
The application of machine learning methods to datasets that only partially exhibit the characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the approaches developed here to growing and increasingly complex datasets.
Prediction is often regarded as a central and domain-general aspect of cognition. This proposal extends to language, where predictive processing might enable the comprehension of rapidly unfolding input by anticipating upcoming words or their semantic features. To make these predictions, the brain needs to form a representation of the predictive patterns in the environment. Predictive processing theories suggest a continuous learning process that is driven by prediction errors, but much is still to be learned about this mechanism in language comprehension. This thesis therefore combined three electroencephalography (EEG) experiments to explore the relationship between prediction and implicit learning at the level of meaning.
Results from Study 1 support the assumption that the brain constantly infers and updates probabilistic representations of the semantic context, potentially across multiple levels of complexity. N400 and P600 brain potentials could be predicted by semantic surprise based on a probabilistic estimate of previous exposure and on a more complex probability representation, respectively.
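Semantic surprise in this sense can be operationalized as surprisal, the negative log probability of a continuation given prior exposure; a minimal sketch with made-up exposure counts (the word set and tallies are purely illustrative, not stimuli from the study):

```python
import math

def surprisal(word, exposure_counts):
    """Surprisal in bits: -log2 P(word), with P estimated from prior exposure.

    exposure_counts is a hypothetical tally of how often each
    continuation has been encountered in the preceding context.
    """
    total = sum(exposure_counts.values())
    p = exposure_counts[word] / total
    return -math.log2(p)

counts = {"coffee": 90, "tea": 9, "socks": 1}  # illustrative tallies
expected = surprisal("coffee", counts)    # low surprisal: small N400 predicted
unexpected = surprisal("socks", counts)   # high surprisal: large N400 predicted
```

Under predictive processing accounts, the N400 amplitude is expected to scale with this quantity, which is what links the ERP record to the underlying probabilistic representation.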
Subsequent work investigated the influence of prediction errors on the update of semantic predictions during sentence comprehension. In line with error-based learning, unexpected sentence continuations in Study 2, characterized by large N400 amplitudes, were associated with increased implicit memory compared to expected continuations. Further, Study 3 indicates that prediction errors not only strengthen the representation of the unexpected word, but also update specific predictions made from the respective sentence context. The study additionally provides initial evidence that the amount of unpredicted information, as reflected in N400 amplitudes, drives this update of predictions, irrespective of the strength of the original incorrect prediction.
Together, these results support a central assumption of predictive processing theories: A probabilistic predictive representation at the level of meaning that is updated by prediction errors. They further propose the N400 ERP component as a possible learning signal. The results also emphasize the need for further research regarding the role of the late positive ERP components in error-based learning. The continuous error-based adaptation described in this thesis allows the brain to improve its predictive representation with the aim to make better predictions in the future.
Eskalation des Commitments in Wirtschaftsinformatik Projekten: eine kognitiv-affektive Perspektive
(2024)
Information systems (IS) projects are central to executing corporate strategies and sustaining competitive advantage, yet they frequently exceed their budgets, overrun their schedules, and exhibit high failure rates. This dissertation examines the psychological foundations of human behavior, in particular cognition and emotion, in connection with a widespread problem in IS project management: the tendency to persist with failing courses of action, known as escalation of commitment (EoC).
Using a mixed-methods research approach (combining qualitative and quantitative methods), I investigate the emotional and cognitive foundations of decision making behind escalating commitment to failing IS projects and its development over time. The results of a psychophysiological laboratory experiment provide evidence on the predictions of cognitive dissonance theory versus coping theory regarding the role of negative and complex situational emotions, and contribute to a better understanding of how escalation tendencies change during sequential decision making due to cognitive learning effects. Using psychophysiological measurements, including data triangulation between electrodermal and cardiovascular activity as well as artificial-intelligence-based analysis of facial micro-expressions, this research reveals physiological markers of escalating commitment. Complementing the experiment, a qualitative analysis of text-based reflections during escalation situations shows that decision makers employ different cognitive reasoning patterns to justify escalating behavior, suggesting a sequence of four distinct cognitive phases.
By integrating qualitative and quantitative findings, this dissertation develops a comprehensive theoretical model of how cognition and emotion influence escalating commitment over time. I propose that escalating commitment is a cyclical adaptation of mental models, characterized by changes in cognitive reasoning patterns, variations in the temporal mode of cognition, and interactions with situational emotions and their anticipation. The main contribution of this work lies in disentangling the emotional and cognitive mechanisms that drive escalating commitment in the context of IS projects. The findings help to improve the quality of decisions under uncertainty and provide a basis for developing de-escalation strategies. Stakeholders in troubled IS projects should be aware of the tendency to persist with failing courses of action and of the importance of the underlying emotional and cognitive dynamics.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga Chlamydomonas reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, it is still inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other production organisms. Other microalgal species might be better suited for high-level protein expression, but are limited by the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are maintained episomally as autonomously replicating plasmids in the nucleus at high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Having established efficient generation of transgenic lines, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of transgene products into the culture medium, simplifying the harvesting and purification of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum and found only a minor increase in YFP accumulation.
We employed the previous findings to express complex viral antigens from the hepatitis B virus and the hepatitis C virus in P. purpureum to demonstrate its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and could reach their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We could successfully upscale the biomass production of transgenic lines and thereby provide enough material for immunization trials in mice that were performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens, and, additionally, the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a new promising producer organism for biopharmaceuticals in the microalgal field.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly driven by their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism that drives species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and across ontogeny. First, I produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared the gene expression patterns between electric organs and skeletal muscle in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric-organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g. KCNJ2, KLF5, KCNK6, and KCNQ5, possibly contributes to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids obtained by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in the regulation of EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fishes.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts and German es-clefts, as well as with the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties which are cross-linguistically typical for clefts: an existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, the exhaustivity effects are not as strong as those in structures with the exclusive only, and they require further research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of biclausality as well as of monoclausality. Besides, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new-information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant, but a topic expression; it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; finally, èto plays an important role in spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and shows evidence that the cleft pivot undergoes movement to the left peripheral position. Èto is proposed to be TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to those for clefts in German, French, and Akan. It is concluded that speakers use the different tools available in their languages to produce structures with similar interpretive properties.
Additive manufacturing (AM) processes enable the production of metal structures with exceptional design freedom, of which laser powder bed fusion (PBF-LB) is one of the most common. In this process, a laser melts a bed of loose feedstock powder particles layer-by-layer to build a structure with the desired geometry. During fabrication, the repeated melting and rapid, directional solidification create large temperature gradients that generate large thermal stress. This thermal stress can itself lead to cracking or delamination during fabrication. More often, large residual stresses remain in the final part as a footprint of the thermal stress. This residual stress can cause premature distortion or even failure of the part in service. Hence, knowledge of the residual stress field is critical for both process optimization and structural integrity.
Diffraction-based techniques allow the non-destructive characterization of the residual stress fields. However, such methods require a good knowledge of the material of interest, as certain assumptions must be made to accurately determine residual stress. First, the measured lattice plane spacings must be converted to lattice strains with the knowledge of a strain-free material state. Second, the measured lattice strains must be related to the macroscopic stress using Hooke's law, which requires knowledge of the stiffness of the material. Since most crystal structures exhibit anisotropic material behavior, the elastic behavior is specific to each lattice plane of the single crystal. Thus, the use of individual lattice planes in monochromatic diffraction residual stress analysis requires knowledge of the lattice plane-specific elastic properties. In addition, knowledge of the microstructure of the material is required for a reliable assessment of residual stress.
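The two assumptions above correspond to two standard relations; as a sketch in the conventional notation of diffraction stress analysis (generic symbols, not values specific to this work):

```latex
% Step 1: lattice strain from the measured plane spacing d_{hkl}
% and the strain-free reference spacing d_{0,hkl}:
\varepsilon_{hkl} = \frac{d_{hkl} - d_{0,hkl}}{d_{0,hkl}}

% Step 2: Hooke's law in the sin^2(psi) form, with plane-specific
% diffraction elastic constants S_1^{hkl} and \tfrac{1}{2}S_2^{hkl}
% (biaxial surface stress state assumed for simplicity):
\varepsilon_{hkl}(\varphi,\psi) =
  \tfrac{1}{2} S_2^{hkl}\, \sigma_{\varphi} \sin^2\psi
  + S_1^{hkl}\,(\sigma_{11} + \sigma_{22})
```

Both relations make explicit why the thesis needs a reliable strain-free reference and plane-specific elastic constants: an error in either propagates directly into the inferred stress.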
This work presents a toolbox for reliable diffraction-based residual stress analysis. This is presented for a nickel-based superalloy produced by PBF-LB. First, this work reviews the existing literature in the field of residual stress analysis of laser-based AM using diffraction-based techniques. Second, the elastic and plastic anisotropy of the nickel-based superalloy Inconel 718 produced by PBF-LB is studied using in situ energy dispersive synchrotron X-ray and neutron diffraction techniques. These experiments are complemented by ex situ material characterization techniques. These methods establish the relationship between the microstructure and texture of the material and its elastic and plastic anisotropy. Finally, surface, sub-surface, and bulk residual stress are determined using a texture-based approach. Uncertainties of different methods for obtaining stress-free reference values are discussed.
The tensile behavior in the as-built condition is shown to be controlled by texture and the cellular sub-grain structure, while in the heat-treated condition the precipitation of strengthening phases and the grain morphology dictate the behavior. In fact, the results of this thesis show that the diffraction elastic constants depend on the underlying microstructure, including texture and grain morphology. For columnar microstructures in both as-built and heat-treated conditions, the diffraction elastic constants are best described by the Reuss iso-stress model. Furthermore, the low accumulation of intergranular strains during deformation demonstrates the robustness of using the 311 reflection for diffraction-based residual stress analysis of columnar textured microstructures. The differences between texture-based and quasi-isotropic approaches to residual stress analysis are shown to be insignificant in the observed case. However, the analysis of the sub-surface residual stress distributions shows that different scanning strategies result in a change in the orientation of the residual stress tensor. Furthermore, the location of the critical sub-surface tensile residual stress is related to the surface roughness and the microstructure. Finally, recommendations are given for the diffraction-based determination and evaluation of residual stress in textured additively manufactured alloys.
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically-induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. Estimation of the crustal temperature field affects conclusions on all temperature-dependent processes ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
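The competition between extension-driven heat advection and conductive re-equilibration invoked here is conventionally captured by a thermal Péclet number; a sketch in generic symbols (not the thesis's exact parameterization):

```latex
% v: extension velocity, L: characteristic length scale,
% \kappa: thermal diffusivity, t_cond: conductive equilibration time.
\mathrm{Pe} = \frac{v\,L}{\kappa},
\qquad
t_{\mathrm{cond}} \sim \frac{L^{2}}{\kappa}
% Pe >> 1: deformation outpaces conduction and the temperature field
% remains transient; Pe << 1: the steady-state assumption is defensible.
```

This framing makes the reported velocity limit plausible: faster extension or shorter conduction length scales (as in narrow rifts) increase Pe and push the system away from thermal equilibrium.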
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting using the example of fault-propagation folds in the Andean fold-and-thrust belt. The evolution of the velocity fields from geodynamic models are compared with those from trishear models of the same structure. While the latter use only geometric and kinematic constraints of the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both models work equally well for early, and thus relatively simple stages of folding and faulting, while results differ for more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend further to the North resulted in the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT with the implementation of earthquake-like behavior using the rate-and-state friction framework. Despite earthquakes happening on a relatively small time scale, there are many interactions between the seismic cycle and the long time spans of other geodynamic processes. Amongst others, the crustal state of stress as well as the presence of fluids or changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for a realistic setup of involved structures and processes, which is therefore important to obtain a meaningful estimate for earthquake hazards.
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulation, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study comprises both logistical-organizational effort and data analysis. As for the former, three GNSS ground stations were installed at remote locations in northwestern Argentina to acquire observations where no third-party data were available.
The methodological developments for observing the two climate variables, soil moisture and water vapour, are independent of each other and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be employed operationally. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. On the other hand, water-vapour monitoring with GNSS observations is a well-established technique that is used operationally. Hence, the scope of this study is a meteorological analysis examining the along-the-zenith air-moisture levels and introducing indices related to the azimuthal gradient.
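GNSS reflectometry of this kind commonly analyzes the interference between the direct and the ground-reflected signal in the recorded signal-to-noise ratio; a generic form of that model, as standard in the literature rather than the thesis's exact formulation:

```latex
% Detrended SNR as a function of satellite elevation angle e,
% for antenna height H above the reflecting surface and carrier
% wavelength \lambda:
\mathrm{SNR}(e) \approx A \cos\!\left( \frac{4\pi H}{\lambda} \sin e + \phi \right)
% Changes in near-surface soil moisture modulate the amplitude A and
% phase \phi of this oscillation, which the retrieval exploits.
```

Using multiple constellations, as described above, increases the number of satellite arcs and azimuths from which such oscillations can be extracted, improving the robustness of the retrieval.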
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be bounded from below by the p-th power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
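In generic notation, the inequality discussed here has the following shape (a sketch only; the thesis's precise graph-theoretic setting and summability conditions are more delicate):

```latex
% Hardy inequality on a graph: the p-energy functional, consisting of a
% divergence part (edge weights b) and a potential part (c), dominates
% a weighted p-norm with Hardy weight w, for p > 1:
\sum_{x,y} b(x,y)\, \lvert f(x) - f(y) \rvert^{p}
  + \sum_{x} c(x)\, \lvert f(x) \rvert^{p}
\;\geq\; \sum_{x} w(x)\, \lvert f(x) \rvert^{p}
```

The two central questions of the dissertation then read off directly: for which graphs does some admissible weight w exist at all, and when it does, how large can w be made.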
Volcanic hydrothermal systems are an integral part of most volcanoes and typically involve a heat source, adequate fluid supply, and fracture or pore systems through which the fluids can circulate within the volcanic edifice. Associated with this are subtle but powerful processes that can significantly influence the evolution of volcanic activity or the stability of the near-surface volcanic system through mechanical weakening, permeability reduction, and sealing of the affected volcanic rock. These processes are well constrained for rock samples by laboratory analyses but are still difficult to extrapolate and evaluate at the scale of an entire volcano. Advances in unmanned aircraft systems (UAS), sensor technology, and photogrammetric processing routines now allow us to image volcanic surfaces at the centimeter scale and thus study volcanic hydrothermal systems in great detail. This thesis aims to explore the potential of UAS approaches for studying the structures, processes, and dynamics of volcanic hydrothermal systems, but also to develop methodological approaches to uncover secondary information hidden in the data, capable of indicating spatiotemporal dynamics or potentially critical developments associated with hydrothermal alteration. To accomplish this, the thesis describes the investigation of two near-surface volcanic hydrothermal systems, the El Tatio geyser field in Chile and the fumarole field of La Fossa di Vulcano (Italy), both of which are among the best-studied sites of their kind. Through image, statistical, and spatial analyses, we have been able to provide the most detailed structural images of both study sites to date, with new insights into the driving forces of such systems, while also revealing new potential controls, which are summarized in conceptual site-specific models.
Furthermore, the thesis explores methodological remote sensing approaches to detect, classify, and constrain hydrothermal alteration and surface degassing from UAS-derived data, evaluates them by mineralogical and chemical ground-truthing, and compares the alteration pattern with the present-day degassing activity. A significant contribution of the often neglected diffuse degassing activity to the total amount of degassing is revealed, and secondary processes and dynamics associated with hydrothermal alteration that lead to potentially critical developments, such as surface sealing, are constrained. The results and methods provide new approaches for alteration research, for the monitoring of degassing and alteration effects, and for thermal monitoring of fumarole fields, with the potential to be incorporated into volcano monitoring routines.
Heat stress (HS) is a major abiotic stress that negatively affects plant growth and productivity. However, plants have developed various adaptive mechanisms to cope with HS, including the acquisition and maintenance of thermotolerance, which allows them to respond more effectively to subsequent stress episodes. HS memory includes type II transcriptional memory, which is characterized by enhanced re-induction of a subset of HS memory genes upon recurrent HS. In this study, new regulators of HS memory in A. thaliana were identified through the characterization of rein mutants.
The rein1 mutant carries a premature stop codon in CYCLIN-DEPENDENT KINASE 8 (CDK8), which is part of the cyclin kinase module of the Mediator complex. rein1 seedlings show impaired type II transcriptional memory for multiple heat-responsive genes upon re-exposure to HS. Additionally, the mutants exhibit a significant deficiency in HS memory at the physiological level. Interaction studies conducted in this work indicate that CDK8 associates with the memory HEAT SHOCK FACTORs HSFA2 and HSFA3. The results suggest that CDK8 plays a crucial role in HS memory in plants together with other memory HSFs, which may be potential targets of its kinase function. Understanding the role and interaction network of the Mediator complex during HS-induced transcriptional memory will be an exciting aspect of future HS memory research.
The second characterized mutant, rein2, was selected based on its strongly impaired pAPX2::LUC re-induction phenotype. Gene expression analysis revealed additional defects in the initial induction of HS memory genes. Consistent with this observation, basal thermotolerance was impaired similarly to HS memory at the physiological level in rein2. Sequencing of backcrossed bulk segregants with subsequent fine mapping narrowed the location of REIN2 to a 1 Mb region on chromosome 1. This interval contains the At1g65440 gene, which encodes the histone chaperone SPT6L. SPT6L interacts with chromatin remodelers and bridges them to the transcription machinery to regulate nucleosome and Pol II occupancy around the transcriptional start site. The EMS-induced missense mutation in SPT6L may cause the altered HS-induced gene expression in rein2, possibly triggered by changes in the chromatin environment resulting from altered histone chaperone function.
Expanding research on screen-derived factors that modify type II transcriptional memory has the potential to enhance our understanding of HS memory in plants. Discovering connections between previously identified memory factors will help to elucidate the underlying network of HS memory. This knowledge can initiate new approaches to improve heat resilience in crops.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question provides an overview of fundamental model prerequisites for representing permafrost soils in land surface modeling. It includes a first-of-its-kind comparison of the LSMs in CMIP6, revealing their differences and shortcomings in key permafrost physics parameters. Overall, each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system. Choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to cover the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis seeks to assess its influence on simulated soil temperatures. Comparing this method to observational datasets reveals a substantial reduction, in most regions, of the cold bias present in the control run, but also a distinctive overshoot toward a warm bias in mountainous areas. The Sturm experiment effectively addresses the overestimation of permafrost extent in the control run, albeit at the cost of a substantial reduction in permafrost extent over mountainous areas, while ALT results remain relatively consistent with the control run. These outcomes align with our initial hypothesis that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
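The Sturm et al. (1997) scheme replaces the default snow thermal conductivity with a density-based regression. As a rough illustration (the coefficients and the 0.156 g/cm³ cutoff below are the commonly quoted regression values, an assumption of this sketch rather than numbers taken from this thesis or from the CLM5 source), the piecewise form can be written as:

```python
def sturm_keff(density_g_cm3):
    """Effective snow thermal conductivity [W m-1 K-1] from the
    Sturm et al. (1997) density regression (density in g/cm^3).
    Coefficients and cutoff are the commonly quoted values and
    are an assumption of this illustrative sketch."""
    rho = density_g_cm3
    if rho < 0.156:
        # low-density (fresh) snow: linear branch
        return 0.023 + 0.234 * rho
    # quadratic branch, typically applied up to ~0.6 g/cm^3
    return 0.138 - 1.01 * rho + 3.233 * rho ** 2

# Denser (e.g. wind-packed Arctic) snow conducts more heat and
# therefore insulates the underlying soil less:
k_light = sturm_keff(0.10)  # fresh, low-density snow
k_dense = sturm_keff(0.40)  # wind-packed snow
```

Because Arctic snowpacks over permafrost are typically denser than the defaults assume, the higher conductivity reduces the simulated winter insulation, which is the mechanism behind the warmer winter soil temperatures discussed above.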
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
Sigmund Freud, the founder of psychoanalysis, began his intellectual life with the Jewish Bible and also ended it with it. He began by reading the Philippson Bible, especially together with his father Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant reference for Freud and shaped his Jewish identity. This is demonstrated by analysing family documents, religious instruction, and references to the Bible in Freud's writings and correspondence.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operating conditions. Optimization-based solutions perform an exhaustive search of the adaptation space and may thus provide quality guarantees; however, they render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and an expressivity of individual rules that supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures; the architecture is evaluated by assigning utility values to fragments; the pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluation of the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized based on meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime to balance the quality-cost trade-off.
The twofold solution of this thesis is integrated into an adaptation engine that leverages state- and event-based principles for incremental execution and is therefore scalable to large and dynamic software architectures of growing size and complexity. The utility elicitation challenge is resolved by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space for software self-adaptation.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP is not scalable, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we have the flexibility to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements across various allocation properties: compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of the data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas behind our ILP-based heuristics are also applicable to the index selection problem.
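For intuition about the kind of simple heuristic that such ILP models are compared against, the following is a minimal greedy allocation sketch. It is illustrative only: the fragment names, query costs, and load model are invented, and the thesis's actual ILP formulations are considerably richer (modifications, reallocations, robustness).

```python
def greedy_allocate(queries, num_nodes):
    """Toy greedy baseline for partial replication: assign each
    query, heaviest first, to the currently least-loaded node and
    replicate the fragments that query touches onto that node.
    Not the thesis's ILP model -- a sketch of a naive baseline."""
    load = [0.0] * num_nodes                 # accumulated query cost per node
    stored = [set() for _ in range(num_nodes)]  # fragments each node must hold
    assignment = {}
    for qid, (cost, fragments) in sorted(
            queries.items(), key=lambda kv: -kv[1][0]):
        node = min(range(num_nodes), key=lambda n: load[n])
        load[node] += cost
        stored[node].update(fragments)
        assignment[qid] = node
    return assignment, load, stored

# Invented workload: (cost, accessed fragments) per query.
queries = {
    "q1": (4.0, {"lineitem", "orders"}),
    "q2": (3.0, {"lineitem"}),
    "q3": (2.0, {"customer"}),
    "q4": (1.0, {"orders"}),
}
assignment, load, stored = greedy_allocate(queries, num_nodes=2)
```

Note how the greedy pass balances load but fixes the data placement as a side effect of query order; the ILP models instead optimize placement and load assignment jointly, which is where the reported savings in allocated data come from.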
A comprehensive study on seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of the earthquake occurrences. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties in CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly from the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep neural network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. The results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability, and for larger aftershocks and shorter time periods, without altering the relative ranking of the different metrics.
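ROC analysis here treats each stress metric as a binary classifier of whether a grid cell hosts aftershocks: cells are ranked by stress value, and the area under the ROC curve (AUC) measures how well that ranking separates cells with aftershocks from cells without. A minimal sketch of the AUC computation via the rank-sum (Mann-Whitney) identity, with invented toy values:

```python
def roc_auc(stress, occurred):
    """AUC of a stress metric used as an aftershock-location
    classifier, via the Mann-Whitney identity: the fraction of
    (positive, negative) cell pairs ranked correctly by stress.
    `stress` holds one value per grid cell, `occurred` a boolean
    per cell. Toy sketch; the thesis evaluates full ROC curves
    over hundreds of slip models."""
    pos = [s for s, o in zip(stress, occurred) if o]
    neg = [s for s, o in zip(stress, occurred) if not o]
    if not pos or not neg:
        raise ValueError("need cells of both classes")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

# Invented example: cells with positive Coulomb stress change
# should preferentially host aftershocks (AUC > 0.5).
stress = [0.5, 0.3, 0.1, -0.2, -0.4, -0.6]   # MPa, per cell
occurred = [True, True, False, True, False, False]
auc = roc_auc(stress, occurred)
```

An AUC of 0.5 means the metric is no better than random at predicting aftershock locations, which is the baseline against which the competing stress metrics are ranked.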
To corroborate Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data, followed by a comprehensive analysis of Coulomb stress changes on the aftershock planes. By computing the Coulomb failure stress changes on the aftershock faults, I found that all large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
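The two seismicity laws involved have standard closed-form estimators. Below is a minimal sketch using the Aki maximum-likelihood b-value estimator (with the usual magnitude-binning correction) and the Omori-Utsu rate; the toy catalog and parameter values are invented for illustration and are not results of this thesis.

```python
import math

def b_value_mle(magnitudes, mc, dm=0.1):
    """Aki maximum-likelihood b-value with the standard binning
    correction, for a catalog complete above magnitude mc:
    b = log10(e) / (mean(M) - (mc - dm/2))."""
    mags = [m for m in magnitudes if m >= mc]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

def omori_rate(t, k, c, p):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (c + t)^p."""
    return k / (c + t) ** p

# Invented toy catalog; only events with M >= mc enter the estimate.
catalog = [1.5, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0]
b = b_value_mle(catalog, mc=2.0)
early = omori_rate(0.1, k=100.0, c=0.1, p=1.1)   # rate shortly after mainshock
late = omori_rate(10.0, k=100.0, c=0.1, p=1.1)   # decayed rate days later
```

In this framing, the thesis's finding is that b (from the first estimator) correlates positively with ΔCFS across sequences, while the fitted c and p (from the second law) decrease with increasing stress change.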
The findings of this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid the development of more accurate probabilistic seismic hazard forecasts and risk reduction strategies.
Semi-parliamentarism describes the system of government in which the government is elected by, and can be removed by, one part of the parliament but is independent of another part, while both chambers must approve legislation. This system, classified by Steffen Ganghof, complements established typologies of government systems such as those used by David Samuels and Matthew Shugart. Semi-parliamentarism is the logical counterpart to semi-presidentialism: in semi-presidentialism only one part of the executive depends on the legislature, whereas in semi-parliamentarism the executive depends on only one part of the legislature. Semi-parliamentarism thus embodies a system of separation of powers without the executive personalism produced by the direct election and independence of the head of government in presidentialism. This makes semi-parliamentarism well suited to tracing differences between parliamentarism and presidentialism back to the separate influences of the separation of powers and of executive personalism. The study of semi-parliamentarism is therefore relevant to the literature on systems of government as a whole. Moreover, semi-parliamentarism is not a purely theoretical construct: it exists at the Australian federal level, in the Australian sub-states, and in Japan.
This dissertation is the first comprehensive study of legislation in the semi-parliamentary states as such. The focus is on the second chambers, since their independence from the government makes them the actual site of legislation. Legislation in parliamentarism and presidentialism differs in particular with respect to party unity, coalition building, and the legislative success of governments; these points are therefore of special interest in the analysis of semi-parliamentarism. The semi-parliamentary states also differ considerably among themselves in institutional design, such as their electoral systems or the instruments available to resolve legislative deadlock. Describing and analyzing the effects of these differences on legislation is, alongside the comparison of semi-parliamentarism with other systems, the second main goal of this work.
As the foundation of the analysis, I compiled an extensive dataset covering all legislative periods of the Australian states between 1997 and 2019. Its main components are all recorded (division) votes of both chambers, all government bills introduced and passed, and the party positions in the relevant sub-state policy fields collected through an expert survey.
Using mainly mixed-effects and fractional-response analyses, I show that semi-parliamentarism resembles parliamentary rather than presidential systems in many respects. Only coalition building is markedly more flexible and therefore differs from typical parliamentary coalition building. The analyses suggest that key differences between parliamentarism and presidentialism are attributable to executive personalism rather than to the separation of powers.
Among the semi-parliamentary states, the decisive differences in legislation appear to stem above all from whether the government controls the median of both parliamentary chambers and whether it can dissolve the second chamber along with the first. Control of the median enables flexible coalition building and leads to higher legislative success rates; likewise, the easier it is to dissolve the second chamber, the higher the legislative success rates. Independent of these aspects, party unity is very high in both chambers of the semi-parliamentary parliaments.
In this work, a reactive barrier was developed at a small laboratory scale (length = 40 cm), designed to remove iron and sulfate loads from acid mine drainage (AMD) with efficiencies of up to 30.2 % and 24.2 %, respectively, over a period of 146 days (50 pore volumes, pv). The reactive material was a mixture of garden compost, beech wood, coconut shell, and calcium carbonate. The influent conditions were an iron concentration of 1000 mg/L, a sulfate concentration of 3000 mg/L, and a pH of 6.2.
Differences in the material composition did not produce major changes in the remediation efficiency for iron and sulfate loads (12.0-15.4 % and 7.0-10.1 %, respectively) over an investigation period of 108 days (41-57 pv). The most important factor influencing the removal of sulfate and iron loads was the residence time of the AMD solution in the reactive material, which can be increased by reducing the flow rate or by increasing the length of the permeable reactive barrier (PRB). Halving the flow rate increased the remediation efficiencies for iron and sulfate to 23.4 % and 32.7 %, respectively. The remediation efficiency for iron loads also rose, to 24.2 %, when the sulfate influent concentration was increased to 6000 mg/L. Acidic starting conditions (pH = 2.2) could be neutralized by the calcium carbonate in the reactive material over a period of 47 days (24 pv). This neutralization consumed calcium carbonate in the PRB and released calcium ions, which increased the sulfate remediation efficiency (24.9 %). After enlarging the PRB in width and depth and determining parameters in 2D, boundary flow effects were observed; without their influence, the remediation efficiencies for iron and sulfate loads increase (to 30.2 % and 24.2 %, respectively).
For in-situ monitoring of the PRB, optical sensors were used to determine pH, oxygen concentration, and temperature. Stable oxygen concentrations and pH profiles were detected, resolved in space and time; temperature could also be resolved spatially. This work thus showed that optical sensors can be used to monitor the stability of a PRB for the treatment of AMD.
A simulation representing the developed PRB was built with the simulation program MIN3P and reproduces the laboratory results well. A simulated PRB was then investigated at different filter velocities ((4.0-23.5) × 10⁻⁷ m/s) and PRB lengths (25-400 cm). Relationships between the investigated parameters and the remediation efficiency for iron and sulfate loads were established; these relationships can be used to calculate the residence time of the AMD solution required in a future PRB system to achieve the maximum possible remediation performance.
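Since the simulations relate remediation efficiency to residence time, the conversion from filter (Darcy) velocity and barrier length to residence time can be sketched as follows. This is a generic back-of-the-envelope formula, not taken from the thesis: the effective porosity is an assumed illustrative value, and the seepage velocity is taken as filter velocity divided by effective porosity.

```python
def residence_time_days(length_m, filter_velocity_m_s, effective_porosity):
    """Hydraulic residence time of the AMD solution in a PRB of the
    given length, from the Darcy (filter) velocity and an ASSUMED
    effective porosity. Illustrative sketch only; the porosity value
    used below is not a result of this work."""
    seepage_velocity = filter_velocity_m_s / effective_porosity  # m/s
    return length_m / seepage_velocity / 86400.0                 # s -> days

# 40 cm lab barrier at the slowest simulated filter velocity,
# with an assumed effective porosity of 0.4:
t = residence_time_days(0.40, 4.0e-7, 0.4)
```

Such a calculation is what allows the simulated efficiency-vs-residence-time relationships to be translated into a required barrier length (or maximum flow rate) when dimensioning a future field-scale PRB.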