Data stream processing systems (DSPSs) are a key enabler for integrating continuously generated data, such as sensor measurements, into enterprise applications. DSPSs make it possible to analyze information from data streams continuously, e.g., to monitor manufacturing processes and to react quickly to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces data volumes and thus data storage costs.
The growing volumes of generated data have increased the demand for high-performance DSPSs, leading to greater interest in these systems and to the development of new DSPSs. While having more DSPSs is favorable for users, as it allows them to choose the system that best satisfies their requirements, it also introduces the challenge of identifying the most suitable DSPS for current needs as well as future demands. Solving this challenge is important because replacing a DSPS requires costly rewriting of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer for developing stream processing applications that is used in academic as well as enterprise contexts. It allows the defined applications to be run on any of the supported DSPSs. The performance impact of Apache Beam is studied in this dissertation as well. The results show that there is a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed with the Apache Beam SDK. We benchmark the implemented queries on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study demonstrate that ESPBench and its toolkit work as intended: ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
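To make the role of the abstraction layer concrete, the following minimal sketch shows a windowed aggregation written with the Apache Beam Python SDK that could be submitted to different runners such as Flink or Spark. The pipeline, file names, and query are illustrative assumptions and not the actual ESPBench queries.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window
from apache_beam.transforms.combiners import Mean

# Illustrative sketch: per-machine average sensor values in 60-second windows.
# File name, record schema, and runner choice are assumptions for this example.
options = PipelineOptions(runner="DirectRunner")  # e.g. "FlinkRunner" or "SparkRunner" instead

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadSensorData" >> beam.io.ReadFromText("sensor_readings.csv")
        | "Parse" >> beam.Map(lambda line: (line.split(",")[0], float(line.split(",")[1])))
        | "Window" >> beam.WindowInto(window.FixedWindows(60))
        | "AveragePerMachine" >> Mean.PerKey()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]:.2f}")
        | "Write" >> beam.io.WriteToText("avg_per_machine")
    )
```

The same pipeline can be handed to a different runner simply by changing the pipeline options, which is what makes such an abstraction layer attractive for cross-DSPS benchmarking.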
The benchmark proposed in this thesis covers all requirements to be applied in enterprise stream processing settings, and thus represents an improvement over the current state-of-the-art.
The aim of this dissertation was to conduct a larger-scale cross-linguistic empirical investigation of similarity-based interference effects in sentence comprehension.
Interference studies can offer valuable insights into the mechanisms that are involved in long-distance dependency completion.
Many studies have investigated similarity-based interference effects, showing that syntactic and semantic information are employed during long-distance dependency formation (e.g., Arnett & Wagers, 2017; Cunnings & Sturt, 2018; Van Dyke, 2007; Van Dyke & Lewis, 2003; Van Dyke & McElree, 2011). Nevertheless, there are some important open questions in the interference literature that are critical to our understanding of the constraints involved in dependency resolution.
The first research question concerns the relative timing of syntactic and semantic interference in online sentence comprehension. Only a few interference studies have investigated this question, and, to date, there is not enough data to draw conclusions regarding their time course (Van Dyke, 2007; Van Dyke & McElree, 2011).
Our first cross-linguistic study explores the relative timing of syntactic and semantic interference in two eye-tracking reading experiments that implement the study design used in Van Dyke (2007). The first experiment tests English sentences. The second, larger-sample experiment investigates the two interference types in German.
Overall, the data suggest that syntactic and semantic interference can arise simultaneously during retrieval.
The second research question concerns a special case of semantic interference: We investigate whether cue-based retrieval interference can be caused by semantically similar items which are not embedded in a syntactic structure.
This second interference study builds on a landmark study by Van Dyke & McElree (2006). Their design is unique in that it can pin down the source of interference as a consequence of cue overload during retrieval, when semantic retrieval cues do not uniquely match the retrieval target. Unlike most other interference studies, this design is able to rule out encoding interference as an alternative explanation. Encoding accounts postulate that it is not cue overload at the retrieval site but the erroneous encoding of similar linguistic items in memory that leads to interference (Lewandowsky et al., 2008; Oberauer & Kliegl, 2006). While Van Dyke & McElree (2006) reported cue-based retrieval interference from sentence-external distractors, the evidence for this effect was weak. A subsequent study did not show interference of this type (Van Dyke et al., 2014). Given these inconclusive findings, further research is necessary to investigate semantic cue-based retrieval interference.
The second study in this dissertation provides a larger-scale cross-linguistic investigation of cue-based retrieval interference from sentence-external items. Three larger-sample eye-tracking studies in English, German, and Russian tested cue-based interference in the online processing of filler-gap dependencies. This study further extends the previous research by investigating interference in each language under varying task demands (Logačev & Vasishth, 2016; Swets et al., 2008).
Overall, we see some very modest support for proactive cue-based retrieval interference in English. Unexpectedly, this was observed only under a low task demand. In German and Russian, there is some evidence against the interference effect. It is possible that interference is attenuated in languages with richer case marking.
In sum, the cross-linguistic experiments on the time course of syntactic and semantic interference from sentence-internal distractors support existing evidence of syntactic and semantic interference during sentence comprehension. Our data further show that both types of interference effects can arise simultaneously. Our cross-linguistic experiments investigating semantic cue-based retrieval interference from sentence-external distractors suggest that this type of interference may arise only in specific linguistic contexts.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? To answer these questions is the aim of my cumulative thesis. This study aims to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', my co-authors and I measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models, with separate analyses for men and women. First, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health', separately for men and women. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, in terms of both emotional and economic support, and how much it varies across welfare regimes. As an identification strategy, we first implement static fixed-effects panel models, which condition on observed time-varying characteristics and control for indirect health selection, i.e., constant unobserved heterogeneity. Second, to control for reverse causality and path dependency, we implement dynamic fixed-effects panel models that add a lagged dependent variable to the model.
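A minimal sketch of this identification strategy using the Python package linearmodels is shown below. The CSV file and variable names are hypothetical stand-ins for EU-SILC variables, and the specification is deliberately simplified relative to the models described above.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical EU-SILC-style panel: one row per person-year (illustrative columns).
df = pd.read_csv("eusilc_panel.csv")
df = df.set_index(["person_id", "year"]).sort_index()

# Static fixed-effects linear probability model: bad_health (0/1) on job loss,
# a partner-employment indicator, and their interaction, with person fixed effects.
static = PanelOLS.from_formula(
    "bad_health ~ job_loss + partner_employed + job_loss:partner_employed + EntityEffects",
    data=df,
).fit(cov_type="clustered", cluster_entity=True)

# Dynamic specification: add the lagged dependent variable within persons.
df["bad_health_lag"] = df.groupby(level="person_id")["bad_health"].shift(1)
dynamic = PanelOLS.from_formula(
    "bad_health ~ bad_health_lag + job_loss + EntityEffects",
    data=df.dropna(subset=["bad_health_lag"]),
).fit(cov_type="clustered", cluster_entity=True)

print(static.summary)
print(dynamic.summary)
```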
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also report an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensating effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the Southern regimes, a high level of familialism.
The first paper also found important gender differences concerning the effects of job loss, the family, and the welfare state. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates in more depth the gender differential in the health consequences of unemployment.
As this is a highly contested issue in the literature, we study whether men are more penalized than women or vice versa, and which mechanisms may explain the gender difference. To do so, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through commitment to 'alternative roles', which can provide the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society.
Further, we merge the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and could thus explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into or remain in unemployment (i.e., to self-select into it) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a threat to women than to their male counterparts.
Finally, relying on the idea of different gender regimes, we extend these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized for women, and taking up alternative roles is more socially accepted for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill health, the separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. First, using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, which represent radically different gender regimes and thus provide institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany, comparing East and West Germany and, for West Germany, older and younger cohorts (SOEP data 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is to estimate random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection; we do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts, with models estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing us to test directly whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2) using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection in the relationship between unemployment and health by comparing the partially adjusted and fully adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
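As a rough illustration of this modelling logic, the sketch below estimates a pooled probit with Mundlak-style within-person means and an unemployment-by-gender interaction and then computes average marginal effects. It is a simplified stand-in for the dynamic correlated random-effects probit and the KHB decomposition described above, with hypothetical file and variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel extract; column names are illustrative.
df = pd.read_csv("panel.csv")

# Mundlak device: within-person mean of the time-varying regressor.
df["unemployed_mean"] = df.groupby("person_id")["unemployed"].transform("mean")

# Pooled probit with an unemployment x gender interaction, clustered by person.
model = smf.probit(
    "bad_health ~ unemployed * female + unemployed_mean + age + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})

# Average marginal effects, e.g. of unemployment on the probability of bad health.
print(model.get_margeff(at="overall").summary())
```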
We found support for our first hypothesis: the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany, and younger and older cohorts within West Germany.
By contrast, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more strongly selected out of employment than men. In Italy, in contrast, health selection does not seem to be the primary mechanism behind the gender differential: Italian men and women seem to be selected out of employment to the same extent. That is, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp.2), despite the fact that the institutional and cultural context would offer them a wider range of 'alternative roles' relative to men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where cross-country heterogeneity is reduced in order to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—emerged only later. Since the 1980s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and about the role of gender as a moderator. These questions are at the core of my third paper, 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, focusing particularly on gender differences.
With this paper, I aim to overcome three major shortcomings of earlier studies. The first regards the cross-sectional nature of the data, which prevents the authors from ruling out unobserved heterogeneity as a mechanism behind the association between precarious employment and health. Indeed, several unmeasured individual characteristics—such as cognitive abilities—may confound the relationship between precarious work and health, leading to biased results. Second, only a few studies have directly addressed the role of gender in shaping the relationship. Moreover, the available results on the gender differential are mixed and inconsistent: some studies found precarious employment to be more detrimental to women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators—e.g., temporary employment or subjective job insecurity—to measure EP and study its association with health. Besides the fact that this approach only partially captures the phenomenon's complexity, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves of SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and agency work—have increased over time, while wages have fallen, even among workers in standard employment. In addition, the power of unions has declined over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles, in which being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has changed only towards a modified version. Thus, women usually take on the double role of worker (the so-called secondary earner) and caregiver, while men still devote most of their time to paid work. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, in some respects even shifting back towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identities and a stable, well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived by themselves (and by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Engaging in behaviors that contradict stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to the theory and previous empirical research: Income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, worker's vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
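A sketch of how such a composite score could be built with multiple correspondence analysis in Python is given below, using the prince package. The input file and indicator names are hypothetical, and this is one possible implementation rather than the exact procedure used in the paper.

```python
import pandas as pd
import prince  # one possible MCA implementation, not necessarily the one used in the paper

# Hypothetical SOEP-style extract with eight categorical insecurity indicators.
indicators = [
    "low_income", "no_social_insurance", "fixed_term", "agency_work",
    "involuntary_part_time", "unsocial_hours", "no_works_council", "high_vulnerability",
]
df = pd.read_csv("soep_extract.csv")[indicators].astype(str)

# Fit MCA and keep the first dimension as a continuous precariousness score.
mca = prince.MCA(n_components=1, random_state=0).fit(df)
ep_score = mca.transform(df)[0]  # orientation (more vs. less precarious) must be checked
print(ep_score.describe())
```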
Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AMEs). Then, to control for unobserved heterogeneity, I implement correlated random-effects models that add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender to the fully adjusted model in the pooled sample.
My correlated random-effects models show a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although not statistically significant, this evidence is in line with previous cross-sectional literature and supports the hypothesis that employment precariousness could be detrimental to workers' health. Further, my results show the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence on the topic is still mostly descriptive.
Moreover, my results reveal a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence contradicts previous theory, according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. Instead, the results seem to confirm the idea that men in precarious work experience role conflict to a greater extent than women, as their self-standard is supposed to be the stereotypical breadwinner with a good, well-rewarded job. Finally, the results of the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can capture EP as a latent variable.
All in all, the results on unemployment and employment precariousness reveal complementarities with two implications. Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in response to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs in terms of both individual health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in contexts where the welfare state is less developed.
Moreover, my results underline the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment generally have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of gender arrangements and the transformation of gender relationships in specific cultural and institutional contexts. My results indeed suggest that women's health advantage may be a transitory phenomenon, contingent on the prevailing gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant normative standard, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is desirable for contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and perhaps more crucial question: what kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality should focus not only on improving women's integration into the labor market but also on implementing policies targeting men and facilitating their involvement in the private sphere of life. An equal redistribution of social roles could trigger a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this is a parameter that cannot be addressed, which calls for new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods have been developed and presented in recent years that utilise 2D imaging data to identify target cells within a sample. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
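For reference, the standard textbook expression for the time-averaged dielectrophoretic force on a spherical particle (a general result, not a formula quoted from this thesis) is

$$\langle F_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3 \,\mathrm{Re}\!\left[f_{\mathrm{CM}}(\omega)\right] \nabla |E_{\mathrm{rms}}|^2, \qquad f_{\mathrm{CM}}(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\varepsilon_m^{*}},$$

where $r$ is the cell radius, $\varepsilon_m$ the permittivity of the medium, $\varepsilon_p^{*}$ and $\varepsilon_m^{*}$ the complex permittivities of particle and medium, and $E_{\mathrm{rms}}$ the root-mean-square electric field. The sign of the Clausius-Mossotti factor $f_{\mathrm{CM}}$ determines whether cells are pulled towards or pushed away from field maxima, which is what allows switchable electrodes to deflect cells in the microchannel.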
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
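As the simplest illustration of the class of equations such a solver targets, consider the Poisson equation written in the first-order flux form commonly used for discontinuous Galerkin discretizations (a generic textbook formulation, not quoted from the thesis):

$$-\partial_i F^i = f(x), \qquad F^i = \delta^{ij} \partial_j u,$$

i.e. $-\nabla^2 u = f$. The elliptic systems arising in initial-data construction, such as the constraint equations for binary black holes, generalize the flux $F^i$ and add nonlinear source terms depending on $u$ and its derivatives.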
Different lake systems may reflect different climate elements of climate change, while their responses are also diverse and not yet completely understood. Therefore, a comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition provides an exceptional opportunity to investigate distinct natural lake system responses to different abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution in two different lake systems during the Last Glacial-Interglacial transition (~17 to 11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after the recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 “Extreme events across temporal and spatial scales”, which investigates extreme events using climate data, paleo-records and model-based simulations. The three main aims were to (1) establish robust chronologies of the lakes, (2) investigate how major and abrupt climate changes affected the lake systems, and (3) compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions as well as for comparisons between archives. Thus, addressing the first aim, a new chronology of Lake Gościąż was established by microscopic varve counting and Bayesian age-depth modelling in Bacon for a non-varved section, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating and pollen analysis. The varve chronology reaches from the late Allerød until AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż had suggested. Varve formation throughout the complete Younger Dryas (YD) even allowed the identification of annually to decadally resolved leads and lags in proxy responses at the YD transitions.
The lateglacial chronology of the Dead Sea (DS) was thus far based mainly on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in the lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed. This shifts the AGU into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
Using microfacies analyses, stable isotopes and temperature reconstructions, the second aim was achieved at Lake Gościąż. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and less lake sheltering. Cooling at the YD onset lasted ~100 years longer than the final warming, and environmental proxies lagged the onset of cooling by ~90 years but responded contemporaneously at the YD termination. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually resolved proxy responses at both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This time interval is marked by a large but fluctuating lake level drop, and therefore the complete transition into the Holocene is only recorded in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake level drops, recorded by deposition of the UGU and AGU, were interrupted by one millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Further, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-long rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally more instability during the gypsum units. These results suggest a pattern of complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished on the basis of the individual studies above, which jointly provide an integrated picture of different lake responses to different climate elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically driven facies changes are more dramatic in the DS than at Lake Gościąż. Further, Lake Gościąż is characterized by continuous varve formation nearly throughout the complete profile, whereas the DS record is largely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. The lateglacial sedimentation in Lake Gościąż is influenced mainly by westerly winds and to a lesser extent by changes in catchment vegetation, whereas the DS is influenced primarily by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes differ.
In summary, this doctoral thesis presents seasonally resolved records from two lake archives during the Lateglacial (ca 17-11 ka) to investigate the impact of abrupt climate changes on different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison with Lake Gościąż. This highlights the importance of constructing a robust chronology and provides a first step towards synchronizing the DS record with other eastern Mediterranean archives. Further, climate reconstructions from the lake sediments show variability on different time scales in the different archives, i.e. decadal- to millennial-scale fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions at Lake Gościąż. This demonstrates the value of comparing different lake archives in order to better understand the regional and local impacts of hemispheric-scale climate variability. The thesis thus provides an unprecedented example of how different lake systems respond differently and react to different climate elements of abrupt climate change, which further highlights the importance of understanding the respective lake system for climate reconstructions.
Abzug unter Beobachtung
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for foreign military intelligence, using intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and, above all, intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). By setting up a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and at the same time expanded its analytical capacities in the 1980s. As a result, the Bundeswehr increasingly challenged the BND's monopoly on foreign military intelligence.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. The GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994 under the Two Plus Four Treaty. The treaty also prohibited the three Western powers from engaging in military activities in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their operations. But what became of this "Allied legacy"? Who on the German side took over intelligence on the Soviet troops, and who monitored the troop withdrawal?
The study examines the role of the Bundeswehr and the BND in the withdrawal of the WGT between 1990 and 1994 and asks about cooperation and competition between the armed forces and the intelligence services. Which military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions had been dissolved? How did the requirements for the BND's foreign military intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the troop withdrawal? What role did the former Western powers play? The study is intended as a contribution not only to military history but also to the history of the German intelligence services.
Successful communication is something people pursue throughout their life course. To effectively convey their own information to others, people employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS). An important issue in children's language acquisition is how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), the sentence-picture verification task was used to investigate whether three- to five-year-old Mandarin-speaking children as well as Mandarin-speaking adults could use prosodic information to recognize focus in sentences. In the second study, not only Mandarin-speaking adults and children but also German-speaking adults and children were included to test the assumption that children can show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. In this study, the same paradigm as in the first study, the sentence-picture verification task, was employed together with the eye-tracking method. Finally, the last study investigated whether five-year-old Mandarin-speaking children could understand pre-subject only sentences and, again, whether prosodic information would help them to better understand this kind of sentence.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues in their ambient language from early on. That is, in Mandarin, a topic-prominent tone language, word order information plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow the word order information. Moreover, although German-speaking children seemed able to follow the prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason for this result is that there are more ways of marking focus in German, such as flexible word order, prosodic information, and focus particles, and it therefore takes German-speaking children longer to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how the parser directs attention when recognizing focus. The final study showed that, with explicit verbal context, Mandarin-speaking children could understand the pre-subject only sentence, providing a better understanding of the acquisition of the focus particle only by Mandarin-speaking children.
The present dissertation conducts empirical research on the relationship between urban life and its economic costs, especially for the environment. On the one hand, existing gaps in research on the influence of population density on air quality are closed and, on the other hand, innovative policy measures in the transport sector are examined that are intended to make metropolitan areas more sustainable. The focus is on air pollution, congestion and traffic accidents, which are important for general welfare issues and represent significant cost factors for urban life. They affect a significant proportion of the world's population. While 55% of the world's people already lived in cities in 2018, this share is expected to reach approximately 68% by 2050.
The four self-contained chapters of this thesis can be divided into two sections: Chapters 2 and 3 provide new causal insights into the complex interplay between urban structures and air pollution. Chapters 4 and 5 then examine policy measures to promote non-motorised transport and their influence on air quality as well as congestion and traffic accidents.
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches like machine learning or data-mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and examined in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, for example made openly available by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. Therefore, one focus of this work was set on evaluating these flood masks. From the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that only requires training data from the particular class to be predicted, in this specific case data on flooded areas, but not from the negative class (dry areas). The application to Hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
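The sketch below illustrates the general idea of one-class classification on pixel features using scikit-learn's OneClassSVM. The file names and feature layout are hypothetical, and the thesis may rely on a different one-class algorithm, so this is a generic stand-in rather than the actual procedure.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Train only on pixels labelled "flooded" in the initial satellite flood mask,
# then score every pixel of the scene (feature arrays are assumed to exist).
features_flooded = np.load("features_flooded_pixels.npy")  # shape: (n_flooded, n_features)
features_all = np.load("features_all_pixels.npy")          # shape: (n_pixels, n_features)

clf = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
clf.fit(features_flooded)

# +1 = predicted flooded (inlier of the trained class), -1 = predicted dry (outlier).
refined_mask = clf.predict(features_all)
print("flooded pixel fraction:", np.mean(refined_mask == 1))
```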
Next, it is investigated how much the predicted statistical risk from a process-based model chain depends on the implemented physical process details. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, such model chains are already quite complex, though, and are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit construction of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling for complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators derived from spectral data. Further, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while the rainfall sum was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study compared to urban areas. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become obvious.
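As an illustration of this kind of data-mining workflow, the following sketch trains a random forest on hypothetical building-level damage data and ranks features by permutation importance. The file name, columns, and model choice are assumptions for this example, not the dataset or algorithms used in the thesis.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical building-level table with open-geodata features and a damage label.
df = pd.read_csv("peru_buildings.csv")
features = ["max_rain_intensity", "rain_sum", "elevation", "slope",
            "dist_to_river", "urban_fraction", "ndvi"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["damaged"], test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Permutation importance as a simple form of data-mining for process knowledge.
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```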
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
Proteins play a key role in virtually all processes in living cells, and they are also used in many ways in biotechnology. A protein consists of a chain of amino acids. Frequently, several of these chains assemble into larger structures and functional units, so-called protein complexes. It was recently shown that protein complex formation can already take place during protein biosynthesis (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential both for cellular processes and for biotechnological applications. Experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, but so far not the dynamics of complex formation on different time scales. Fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, which builds on experimental findings, allows a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models represent as realistically as possible the experimental systems of the cooperation partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, Heidelberg University) in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). The theoretical model is parameterised using a simplified expression system in which the proteins can only bind to the chip surface but not to each other. In this simplified in-vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime, and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and then decreases monotonically. In both the non-simplified in-vitro system and the in-vivo system, two competing assembly pathways coexist: in the in-vitro system, complex formation occurs either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in the in-vivo system, co-translational and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in-vitro system is time-dependent and can be influenced, among other factors, by the limitation and strength of the binding sites on the chip surface. In the in-vivo system, the spatial distance between the synthesis sites of the two protein components only influences complex formation if the subunits degrade quickly. In that case, co-translational assembly clearly dominates even on short time scales, whereas for stable subunits there is a shift from a dominance of post-translational assembly to a slight dominance of co-translational assembly. In addition to the dynamics, the in-silico models can also represent the localisation of complex formation and binding, which allows the theoretical predictions to be compared with experimental data and thus validates the models. The in-silico approach presented here complements the experimental methods and thereby helps to interpret their results and to derive new insights from them.
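To give a flavour of such rate-equation modelling, the following minimal mass-action sketch integrates the production, degradation, and dimerization of two protein subunits. The rate constants are assumed values for illustration, not the parameters of the models described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal mass-action model: subunits A and B are produced at constant rates,
# degrade with first-order kinetics, and bind irreversibly to a complex AB.
k_prod_A, k_prod_B = 1.0, 1.0   # production rates (a.u. per s), assumed
k_deg = 0.01                    # subunit degradation rate (1/s), assumed
k_on = 0.1                      # association rate (1/(a.u. * s)), assumed

def rhs(t, y):
    A, B, AB = y
    bind = k_on * A * B
    return [k_prod_A - k_deg * A - bind,
            k_prod_B - k_deg * B - bind,
            bind]

sol = solve_ivp(rhs, (0.0, 1000.0), [0.0, 0.0, 0.0], max_step=1.0)
print("complex yield after 1000 s:", sol.y[2, -1])
```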
The development of speaking competence is widely regarded as a central aspect of second language (L2) learning. It may be questioned, however, whether the currently predominant ways of conceptualising the term fully satisfy the complexity of the construct: Although there is growing recognition that language primarily constitutes a tool for communication and participation in social life, as yet it is rare for conceptualisations of speaking competence to incorporate the ability to interact and co-construct meaning with co-participants. Accordingly, skills allowing for the successful accomplishment of interactional tasks (such as orderly speaker change, and resolving hearing and understanding trouble) also remain largely unrepresented in language teaching and assessment. As fostering the ability to successfully use the L2 within social interaction should arguably be a main objective of language teaching, it appears pertinent to broaden the construct of speaking competence by incorporating interactional competence (IC). Despite there being a growing research interest in the conceptualisation and development of (L2) IC, many of the materials and instruments required for its teaching and assessment, and thus for fostering a broader understanding of speaking competence in the L2 classroom, still await development. This book introduces an approach to the identification of candidate criterial features for the assessment of EFL learners’ L2 repair skills. Based on a corpus of video-recorded interaction between EFL learners, and following conversation-analytic and interactional-linguistic methodology as well as drawing on basic premises of research in the framework of Conversation Analysis for Second Language Acquisition, differences between (groups of) learners in terms of their L2 repair conduct are investigated through qualitative and inductive analyses. Candidate criterial features are derived from the analysis results. This book not only contributes to the operationalisation of L2 IC (and of L2 repair skills in particular), but also lays groundwork for the construction of assessment scales and rubrics geared towards the evaluation of EFL learners’ L2 interactional skills.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
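To make the kind of placement decision addressed by contribution (4) above more concrete, the following sketch shows a generic, budget-constrained placement heuristic: keep the most frequently accessed segments in DRAM until the budget is exhausted and spill the rest to NVM. This is a hypothetical illustration of the problem setting, not the tiering algorithm developed in the thesis.

```python
# Hypothetical sketch of budget-constrained data placement (not the thesis algorithm):
# keep the hottest segments in DRAM, move the rest to NVM.
def place(segments, dram_budget_bytes):
    """segments: list of (name, size_bytes, access_count); returns {name: tier}."""
    placement, used = {}, 0
    # Hotter data first: accesses per byte approximates the benefit of DRAM residency.
    for name, size, accesses in sorted(segments, key=lambda s: s[2] / s[1], reverse=True):
        if used + size <= dram_budget_bytes:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement

print(place([("orders.col_a", 80, 900), ("orders.col_b", 200, 50), ("lineitem.col_c", 120, 400)], 200))
```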
Modern technologies enable the actors involved in a production process to gather information and to make and execute decisions autonomously. Hierarchical control relationships are dissolved and decision-making is distributed across a multitude of actors. Positive consequences include the use of local competencies and fast action on site without (time-)consuming, process-wide planning runs by a central control instance. Assessing the decentrality of a process helps to compare different control strategies and thus contributes to mastering more complex production processes.
Although the communication structure of the actors involved in decision-making is becoming increasingly important, no method exists that uses it as a basis for operationalizing decentrality. This is where this work comes in. A three-stage evaluation model is developed that determines the decentrality of a production process on the basis of the communication and decision structure of the autonomous actors involved in the process.
Building on a definition of the decentrality of production processes, requirements for a metric are derived and, on the basis of the communication structure, a measure from social network analysis is identified that determines the structural autonomy of the actors. The necessity of additionally considering the decision structure is justified by the possibility of integrating decision-making and decision execution.
Differentiating between these two factors forms the basis for classifying the actors; multiplying the two values yields the indicator "actual autonomy", which describes the autonomy of an actor and constitutes the result of the first stage of the model. Homogeneous actor values characterize a high decentrality of the process step, which is the object of consideration of the second stage. By comparing the existing with the maximum possible decentrality of the process steps, the autonomy index is determined at the third stage, which operationalizes the decentrality of the process.
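A purely illustrative numerical sketch of these three stages is given below; the choice of betweenness centrality as the structural-autonomy measure, the decision-factor values and the homogeneity formula are assumptions made for illustration (and the sketch assumes networkx is available), not the measures actually chosen in the thesis.

```python
# Hypothetical sketch of the three-stage evaluation model (illustrative measures only).
import networkx as nx

# Stage 1: actual autonomy = structural autonomy (from the communication network) x decision factor.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D")])   # assumed communication structure
structural = nx.betweenness_centrality(G)                        # assumed SNA measure
decision = {"A": 1.0, "B": 1.0, "C": 0.5, "D": 0.0}               # assumed decision factors
actual = {a: structural[a] * decision[a] for a in G}

# Stage 2: homogeneous actor values within a process step indicate high decentrality.
values = [actual[a] for a in ["A", "B", "C", "D"]]
spread = max(values) - min(values)
step_decentrality = 1.0 - spread / (max(values) + 1e-9)          # assumed homogeneity formula

# Stage 3: autonomy index = observed decentrality relative to the maximum possible value (here 1.0).
autonomy_index = step_decentrality / 1.0
print(actual, round(autonomy_index, 3))
```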
The evaluation model is validated by means of a simulation study at the Zentrum Industrie 4.0. For this purpose, the model is applied to two simulation experiments, one with a centralized and one with a decentralized control strategy, and the results are compared. In addition, it is applied to an extensive production process from industrial practice.
Bio-sourced adsorbing poly(2-oxazoline)s mimicking mussel glue proteins for antifouling applications
(2022)
Nature has developed countless systems for many applications. In maritime environments, several organisms have established extraordinary mechanisms to attach to surfaces. Over the past years, the scientific interest in employing those mechanisms for coatings and long-lasting adhesive materials has grown significantly.
This work describes the synthesis of bio-inspired adsorbing copoly(2-oxazoline)s for surface coatings with protein repelling effects, mimicking mussel glue proteins. From a set of methoxy-substituted phenyl, benzyl, and cinnamyl acids, 2-oxazoline monomers were synthesized. All synthesized 2-oxazolines were analyzed by FT-IR spectroscopy, NMR spectroscopy, and EI mass spectrometry. With those newly synthesized 2-oxazoline monomers and 2-ethyl-2-oxazoline, kinetic studies concerning homo- and copolymerization in a microwave reactor were conducted. The success of the polymerization reactions was demonstrated by FT-IR spectroscopy, NMR spectroscopy, MALDI-TOF mass spectrometry, and size exclusion chromatography (SEC). The copolymerization of 2-ethyl-2-oxazoline with a selection of methoxy-substituted 2-oxazolines resulted in water-soluble copolymers. To release the adsorbing catechol and cationic units, the copoly(2-oxazoline)s were modified. The catechol units were (partially) released by a methyl aryl ether cleavage reaction. A subsequent partial acidic hydrolysis of the ethyl unit resulted in mussel glue protein-inspired catechol- and cation-containing copolymers. The modified copolymers were analyzed by NMR spectroscopy, UV-VIS spectroscopy, and SEC. The catechol- and cation-containing copolymers and their precursors were examined by a Quartz Crystal Microbalance with Dissipation (QCM-D) to study the adsorption performance on gold, borosilicate, iron, and polystyrene surfaces. An exemplary study revealed that a catechol- and cation-containing copoly(2-oxazoline)-coated gold surface exhibits strong protein repelling properties.
Carbohydrates are found in every living organism, where they are responsible for numerous, essential biological functions and processes. Synthetic polymers with pendant saccharides, called glycopolymers, mimic natural glycoconjugates in their special properties and functions. Employing such biomimetics furthers the understanding and controlling of biological processes. Hence, glycopolymers are valuable and interesting for applications in the medical and biological field. However, the synthesis of carbohydrate-based materials can be very challenging. In this thesis, the synthesis of biofunctional glycopolymers is presented, with the focus on aqueous-based, protecting group free and short synthesis routes to further advance in the field of glycopolymer synthesis.
Glycosylamines are practical and versatile precursors for glycopolymers. To maintain the biofunctionality of the saccharides after their amination, regioselective functionalization was performed. This frequently performed synthesis was optimized for different sugars. The optimization was facilitated by a design of experiments (DoE) approach, which reduces the number of necessary experiments and makes the procedure more efficient. Here, the utility of using DoE for optimizing the synthesis of glycosylamines is discussed.
The glycosylamines were converted to glycomonomers, which were then polymerized to yield biofunctional glycopolymers. Here, the glycopolymers were intended to be applicable as layer-by-layer (LbL) thin film coatings for drug delivery systems. To enable the LbL technique, complementary glycopolymer electrolytes were synthesized by polymerization of the glycomonomers and subsequent modification or by post-polymerization modification. For drug delivery, liposomes were embedded into the glycopolymer coating as potential cargo carriers. The stability as well as the integrity of the glycopolymer layers and liposomes were investigated in the physiological pH range.
Different glycopolymers were also synthesized to be applicable as anti-adhesion therapeutics by providing advanced architectures with multivalent presentations of saccharides, which can inhibit the binding of pathogenic lectins. Here, the synthesis of glycopolymer hydrogel particles based on biocompatible poly(N-isopropylacrylamide) (NiPAm) was established using the free-radical precipitation polymerization technique. The influence of synthesis parameters on the sugar content in the gels and on the hydrogel morphology is discussed. The accessibility of the saccharides to model lectins and their enhanced, multivalent interaction were investigated.
At the end of this work, the synthesis strategies for the glycopolymers are generally discussed as well as their potential application in medicine.
Like natural transcription factors, synthetic transcription factors consist of a DNA-binding domain, which attaches specifically to the binding-site sequence upstream of the target gene, and an activation domain, which recruits the transcription machinery so that the target gene is expressed. The difference from natural transcription factors is that both the DNA-binding domain and the activation domain can be foreign to the host, so that artificial metabolic pathways, mostly chemically induced, can be triggered in the host. The optogenetic synthetic transcription factors developed here go one step further. The DNA-binding domain is no longer coupled to the activation domain but to the blue-light photoreceptor CRY2, while the activation domain is fused to its interaction partner CIB1. Under blue-light irradiation, CRY2 and CIB1 dimerize, and with them the two domains, so that a functional transcription factor is formed. This system was genomically integrated into Saccharomyces cerevisiae. The constructed system was verified with the help of the reporter yEGFP, which could be detected by flow cytometry. It could be shown that yEGFP expression can be tuned by emitting blue-light pulses of different lengths and by changing the DNA-binding domain, the activation domain, or the number of binding sites to which the DNA-binding domain attaches. To make the system attractive for industrial applications, it was scaled up from deep-well to photobioreactor scale. Moreover, the blue-light system proved functional both in the laboratory strain YPH500 and in the industrially widely used yeast strain CEN.PK. Furthermore, an industrially relevant protein could also be expressed with the help of the verified system. Finally, in this work the established blue-light system was successfully combined with a red-light system, which had not been described before.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the Riemannian case.
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first order elliptic differential operator that will still yield regularity for solutions as well as Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
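In schematic form, with notation assumed here for illustration (A the induced boundary operator, chi the spectral projection onto its negative spectral subspace), the APS condition and a graph-type deformation of it as described above can be written as

\[
B_{\mathrm{APS}} \;=\; \operatorname{ran}\,\chi_{(-\infty,0)}(A),
\qquad
B_g \;=\; \{\, v + g\,v \;:\; v \in B_{\mathrm{APS}} \,\},
\qquad
g\colon B_{\mathrm{APS}} \longrightarrow B_{\mathrm{APS}}^{\perp},
\]

where g is a suitable operator mapping the APS boundary condition into its orthogonal complement.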
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting to understand more general types of boundary conditions for the Dirac operator, conserving Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that in contrast to the Riemannian case, going from a Fredholm boundary condition to its orthogonal complement works out fine in the Lorentzian setting. On the other hand, in order to deduce Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions for the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the last chapter 4, a few examples of boundary conditions are calculated applying the results of previous chapters. Restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than in the Riemannian setting.
Plant metabolism is the main process of converting assimilated carbon into different compounds crucial for plant growth and therefore crop yield, which makes it an important research topic. Although major advances in understanding the genetic principles contributing to metabolism and yield have been made, little is known about the genetics responsible for trait variation or canalization, even though these concepts have been known for a long time. In light of a growing global population and progressing climate change, understanding the canalization of metabolism and yield seems ever more important to ensure food security. Our group has recently found canalization metabolite quantitative trait loci (cmQTL) for tomato fruit metabolism, showing that the concept of canalization applies to metabolism. In this work, two approaches to investigate plant metabolic canalization and one approach to investigate yield canalization are presented.
In the first project, primary and secondary metabolic data from Arabidopsis thaliana and Phaseolus vulgaris leaf material, obtained from plants grown under different conditions, were used to calculate cross-environment coefficients of variation (CV) or fold changes of metabolite levels per genotype, which served as input for genome-wide association studies. While primary metabolites have lower CV across conditions and show few and mostly weak associations to genomic regions, secondary metabolites have higher CV and show more and stronger metabolite-genome associations. Both potential regulatory genes and metabolic genes are found as candidates, although the metabolic genes are rarely directly related to the target metabolites, suggesting a role for both potential regulatory mechanisms and metabolic network structure in the canalization of metabolism.
In the second project, candidate genes of the Solanum lycopersicum cmQTL mapping are selected and CRISPR/Cas9-mediated gene-edited tomato lines are created to validate the genes' role in the canalization of metabolism. The obtained mutants either showed strongly aberrant developmental phenotypes or appeared wild type-like. One phenotypically inconspicuous mutant of a pantothenate kinase, selected as a candidate for malic acid canalization, shows a significant increase in CV across different watering conditions. Another such mutant of a protein putatively involved in amino acid transport, selected as a candidate for phenylalanine canalization, shows a similar tendency towards increased CV without statistical significance. This potential role of two genes involved in metabolism supports the hypothesis of the structural relevance of metabolism for its own stability.
In the third project, a mutant for a putative disulfide isomerase, important for thylakoid biogenesis, is characterized by a multi-omics approach. The mutant was characterized previously in a yield stability screening and showed a variegated leaf phenotype, ranging from green leaves with wild type levels of chlorophyll over differently patterned variegated leaves to completely white leaves almost entirely devoid of photosynthetic pigments. White mutant leaves show wild type transcript levels of photosystem assembly factors, with the exception of ELIP and DEG orthologs, indicating a stagnation at an etioplast-to-chloroplast transition state. Green mutant leaves show an upregulation of these assembly factors, possibly acting as overcompensation for the partially defective disulfide isomerase, which seems sufficient for proper chloroplast development, as confirmed by a wild type-like proteome. Likely as a result of this phenotype, a general stress response, a shift to a sink-like tissue and abnormal thylakoid membranes strongly alter the metabolic profile of white mutant leaves. As the severity and pattern of variegation vary from plant to plant and may be affected by external factors, the effect on yield instability may result from a decanalized ability to fully exploit the whole leaf surface area for photosynthetic activity.
Digital transformation (DT) has not only been a major challenge in recent years; it is also expected to continue to have an enormous impact on our society and economy in the forthcoming decade. On the one hand, digital technologies have emerged, diffusing into and shaping our private and professional lives. On the other hand, digital platforms have leveraged the potentials of digital technologies to provide new business models. These dynamics have a massive effect on individuals, companies, and entire ecosystems. Digital technologies and platforms have changed the way people consume and interact with each other. Moreover, they offer companies new opportunities to conduct their business in terms of value creation (e.g., business processes), value proposition (e.g., business models), or customer interaction (e.g., communication channels), i.e., the three dimensions of DT. However, they can also become a threat to a company's competitiveness or even survival. Eventually, the emergence, diffusion, and employment of digital technologies and platforms bear the potential to transform entire markets and ecosystems.
Against this background, IS research has explored and theorized the phenomena in the context of DT in the past decade, but not to its full extent. This is not surprising, given the complexity and pervasiveness of DT, which still requires far more research to further understand DT with its interdependencies in its entirety and in greater detail, particularly through the IS perspective at the confluence of technology, economy, and society. Consequently, the IS research discipline has determined and emphasized several relevant research gaps for exploring and understanding DT, including empirical data, theories as well as knowledge of the dynamic and transformative capabilities of digital technologies and platforms for both organizations and entire industries.
Hence, this thesis aims to address these research gaps on the IS research agenda and consists of two streams. The first stream of this thesis includes four papers that investigate the impact of digital technologies on organizations. In particular, these papers study the effects of new technologies on firms (paper II.1) and their innovative capabilities (II.2), the nature and characteristics of data-driven business models (II.3), and current developments in research and practice regarding on-demand healthcare (II.4). Consequently, the papers provide novel insights on the dynamic capabilities of digital technologies along the three dimensions of DT. Furthermore, they offer companies some opportunities to systematically explore, employ, and evaluate digital technologies to modify or redesign their organizations or business models.
The second stream comprises three papers that explore and theorize the impact of digital platforms on traditional companies, markets, and the economy and society at large. Here, paper III.1 examines the implications of the emergence and diffusion of multi-sided platforms for the business of traditional insurance companies, particularly in terms of value creation, value proposition, and customer interaction. Paper III.2 approaches the platform impact more holistically and investigates how the ongoing digital transformation and "platformization" in healthcare lastingly transform value creation in the healthcare market. Paper III.3 moves on from the level of single businesses or markets to the regulatory problems that result from the platform economy for the economy and society, and proposes appropriate regulatory approaches for addressing these problems. Hence, these papers bring new insights to the table about the transformative capabilities of digital platforms for incumbent companies in particular and entire ecosystems in general.
Altogether, this thesis contributes to the understanding of the impact of DT on organizations and markets through multiple case study analyses that are systematically reflected against the current state of the art in research. On this empirical basis, the thesis also provides conceptual models, taxonomies, and frameworks that help describe, explain, or predict the impact of digital technologies and digital platforms on companies, markets and the economy or society at large from an interdisciplinary viewpoint.
Over the past decades, there has been growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events have diverse definitions across disciplines, ranging from earth science to neuroscience, they are mainly characterized as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare, it has been found in various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they have large temporal gaps due to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually not helpful for decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a distance metric designed mainly to measure the similarity/dissimilarity between point-process-like data. I combine ED with recurrence plot techniques to identify the recurrence properties of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties and serial dependency in flood events.
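As a sketch of how such a recurrence analysis can be set up, the snippet below segments an event series into windows, computes a pairwise distance between windows and thresholds it into a recurrence matrix; the simple event-count difference used here is only a stand-in for the edit distance, whose cost terms are not reproduced.

```python
# Sketch: recurrence matrix from windows of an event (point-process-like) series.
# A plain event-count difference is used as a stand-in for the edit distance (ED).
import numpy as np

def window_events(event_times, t_max, width):
    edges = np.arange(0.0, t_max + width, width)
    return [event_times[(event_times >= a) & (event_times < b)] for a, b in zip(edges[:-1], edges[1:])]

def distance(w1, w2):
    return abs(len(w1) - len(w2))          # placeholder for the ED metric

def recurrence_matrix(windows, eps):
    n = len(windows)
    R = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            R[i, j] = int(distance(windows[i], windows[j]) <= eps)
    return R

events = np.sort(np.random.default_rng(0).uniform(0, 100, size=40))  # synthetic event times
R = recurrence_matrix(window_events(events, 100, 10), eps=1)
print(R)
```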
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, clustering coefficient, etc., can be used to quantify the network’s topology. We apply the methodology mentioned above to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and the orography of the region.
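A minimal sketch of this network construction, assuming a precomputed pairwise similarity matrix between grid-point event series (the actual ED-based measure is not reproduced) and the availability of networkx:

```python
# Sketch: build a climate network from a pairwise similarity matrix and
# quantify its topology with standard network measures.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
sim = rng.random((20, 20)); sim = (sim + sim.T) / 2   # placeholder similarity matrix
np.fill_diagonal(sim, 0.0)

threshold = np.quantile(sim[sim > 0], 0.95)           # keep only the strongest links
G = nx.Graph()
G.add_nodes_from(range(sim.shape[0]))                 # nodes = spatial grid points
for i in range(sim.shape[0]):
    for j in range(i + 1, sim.shape[0]):
        if sim[i, j] >= threshold:
            G.add_edge(i, j)

print(dict(G.degree()))
print(nx.betweenness_centrality(G))
print(nx.clustering(G))
```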
The identification of precursors associated with the occurrence of extreme events in the near future is extremely important to prepare the masses for an upcoming disaster and to mitigate the potential risks associated with such events. With this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events using the echo state network, a type of recurrent neural network that is part of the reservoir computing framework. However, unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I try to predict these structures by using data from another dynamic variable (the passive variable) which does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of events, i.e., the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the efficiency of the machine in predicting the forthcoming extreme events.
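A generic echo state network of the kind referred to above can be sketched as follows; the input signal, reservoir size and ridge parameter are placeholders, and the actual precursor-prediction setup of the thesis is not reproduced.

```python
# Generic echo state network sketch: drive a fixed random reservoir with an
# input signal (the passive variable) and train only a linear readout by ridge regression.
import numpy as np

rng = np.random.default_rng(2)
n_res, ridge = 200, 1e-4
u = np.sin(np.linspace(0, 40, 2000)) + 0.1 * rng.standard_normal(2000)  # placeholder input
y = np.roll(u, -5)                                                       # placeholder target (future value)

W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

X = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)             # reservoir state update
    X[t + 1] = x

W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # ridge readout
print("training error:", np.mean((X @ W_out - y) ** 2))
```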
It is well-known that individuals with aphasia (IWA) have difficulties understanding sentences that involve non-adjacent dependencies, such as object relative clauses or passives (Caplan, Baker, & Dehaut, 1985; Caramazza & Zurif, 1976). A large body of research supports the view that IWA’s grammatical system is intact, and that comprehension difficulties in aphasia are caused by a processing deficit, such as a delay in lexical access and/or in syntactic structure building (e.g., Burkhardt, Piñango, & Wong, 2003; Caplan, Michaud, & Hufford, 2015; Caplan, Waters, DeDe, Michaud, & Reddy, 2007; Ferrill, Love, Walenski, & Shapiro, 2012; Hanne, Burchert, De Bleser, & Vasishth, 2015; Love, Swinney, Walenski, & Zurif, 2008). The main goal of this dissertation is to computationally investigate the processing sources of comprehension impairments in sentence processing in aphasia.
In this work, prominent theories of processing deficits coming from the aphasia literature are implemented within two cognitive models of sentence processing: the activation-based model (Lewis & Vasishth, 2005) and the direct-access model (McElree, 2000). These models are two different expressions of the cue-based retrieval theory (Lewis, Vasishth, & Van Dyke, 2006), which posits that sentence processing is the result of a series of iterative retrievals from memory. These two models have been widely used to account for sentence processing in unimpaired populations in multiple languages and linguistic constructions, sometimes interchangeably (Parker, Shvartsman, & Van Dyke, 2017). However, Nicenboim and Vasishth (2018) showed that when both models are implemented in the same framework and fitted to the same data, the models yield different results, because the models assume different data-generating processes. Specifically, the models hold different assumptions regarding the retrieval latencies. The second goal of this dissertation is to compare these two models of cue-based retrieval, using data from individuals with aphasia and control participants. We seek to answer the following question: Which retrieval mechanism is more likely to mediate sentence comprehension?
We model four subsets of existing data: relative clauses in English and German, and control structures and pronoun resolution in German. The online data come from either self-paced listening experiments or visual-world eye-tracking experiments. The offline data come from a complementary sentence-picture matching task performed at the end of the trial in both types of experiments. The two competing models of retrieval are implemented in the Bayesian framework, following Nicenboim and Vasishth (2018). In addition, we present a modified version of the direct-access model that, we argue, is more suitable for individuals with aphasia.
This dissertation presents a systematic approach to implement and test verbally stated theories of comprehension deficits in aphasia within cognitive models of sentence processing. The conclusions drawn from this work are that (a) the original direct-access model (as implemented here) cannot account for the full pattern of data from individuals with aphasia because it cannot account for slow misinterpretations; and (b) an activation-based model of retrieval can account for sentence comprehension deficits in individuals with aphasia by assuming a delay in syntactic structure building and noise in the processing system. The overall pattern of results supports an activation-based mechanism of memory retrieval, in which a combination of processing deficits, namely slow syntax and intermittent deficiencies, causes comprehension difficulties in individuals with aphasia.
Localisation of deformation is a ubiquitous feature in continental rift dynamics and is observed across drastically different time and length scales. This thesis comprises one experimental and two numerical modelling studies investigating strain localisation (1) in a ductile shear zone induced by a material heterogeneity and (2) in an active continental rift setting. The studies are related by the fact that weakening mechanisms on the crystallographic and grain-size scale enable bulk rock weakening, which fundamentally enables the formation of shear zones, continental rifts and hence plate tectonics. Aiming to investigate the controlling mechanisms of the initiation and evolution of a shear zone, the torsion experiments of the experimental study were conducted in a Patterson-type apparatus with strong Carrara marble cylinders containing a weak, planar Solnhofen limestone inclusion. Using state-of-the-art numerical modelling software, the torsion experiments were simulated to answer questions regarding the localisation process, such as the stress distribution or the impact of rheological weakening. 2D numerical models were also employed to integrate geophysical and geological data to explain the characteristic tectonic evolution of the Southern and Central Kenya Rift. Key elements of the numerical tools are a randomized initial strain distribution and the usage of strain softening. During the torsion experiments, deformation begins to localise at the limestone inclusion tips in a process zone, which propagates into the marble matrix with increasing deformation until a ductile shear zone is established. Minor indicators of coexisting brittle deformation are found close to the inclusion tip and are presumed to slightly facilitate strain localisation besides the dominant ductile deformation processes. The 2D numerical model of the torsion experiment successfully predicts local stress concentration and strain rate amplification ahead of the inclusion, in first-order agreement with the experimental results. A simple linear parametrization of strain weakening enables a high-accuracy reproduction of phenomenological aspects of the observed weakening. The torsion experiments suggest that loading conditions do not affect strain localisation during high-temperature deformation of multiphase material with high viscosity contrasts. A numerical simulation can provide a way of analysing the process zone evolution virtually and can extend the examinable frame. Furthermore, the nested structure and anastomosing shape of an ultramylonite band were mimicked with an additional second softening step. Rheological weakening is necessary to establish a shear zone in a strong matrix around a weak inclusion and for ultramylonite formation.
Such strain weakening laws are also incorporated into the numerical models of the Southern and Central Kenya Rift that capture its characteristic tectonic evolution. A three-stage early rift evolution is suggested that starts with (1) the accommodation of strain by a single border fault and flexure of the hanging-wall crust, after which (2) faulting in the hanging-wall and the basin centre increases, before (3) the early-stage asymmetry is lost and basinward localisation of deformation occurs. Along-strike variability of rifts can be produced by modifying the initial random noise distribution. In summary, the three studies address selected aspects of the broad range of mechanisms and processes that fundamentally enable the deformation of rock and govern the localisation patterns across the scales. In addition to the aforementioned results, the first and second manuscripts combined demonstrate a procedure to find new, or improve on existing, numerical formulations for specific rheologies and their dynamic weakening. These formulations are essential in addressing rock deformation from the grain to the global scale. This is exemplified by the third study of this thesis, where geodynamic controls on the evolution of a rift were examined by integrating geological and geophysical data into a numerical model.
This thesis analyzes multiple coordination challenges that arise with the digital transformation of public administration in federal systems, illustrated by four case studies in Germany. I make various observations within a multi-level system and provide an in-depth analysis. Theoretical explanations from both federalism research and neo-institutionalism are utilized to explain the findings of the empirically driven work. The four articles evince a holistic picture of the German case and elucidate its role as a digital government laggard. Their foci range from the macro over the meso to the micro level of public administration, differentiating between the governance and the tool dimension of digital government.
The first article shows how multi-level negotiations lead to expensive but eventually satisfying solutions for the involved actors, creating a subtle balance between centralization and decentralization. The second article identifies legal, technical, and organizational barriers for cross-organizational service provision, highlighting the importance of inter-organizational and inter-disciplinary exchange and of both a common language and trust. Institutional change and its effects on the micro level, on citizens and the employees in local one-stop shops, mark the focus of the third article, bridging the gap between reforms and the administrative reality on the local level. The fourth article looks at the citizens’ perspective on digital government reforms, their expectations, use and satisfaction. In this vein, this thesis provides a detailed account of the importance of understanding the digital divide and therefore the necessity of reaching out to different recipients of digital government reforms. Where feasible, I draw conclusions for other federal systems from the factors identified as causes of Germany’s shortcomings and derive reform potential from them. This makes it possible to gain a new perspective on digital government and its coordination challenges in federal contexts.
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification
(2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of up-converting incident photons of the near-infrared (NIR) part of the electromagnetic spectrum into higher-energy photons. These photons are re-emitted in the range of visible (Vis) and even ultraviolet (UV) light. The frequency upconversion (UC) process is realized with nanocrystals doped with trivalent lanthanoid ions (Ln(III)). The Ln(III) ions provide the electronic (excited) states forming a ladder-like electronic structure for the Ln(III) electrons in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote one Ln(III) electron into higher excited electronic states. One high-energy photon is then emitted during the radiative relaxation of this electron back into the electronic ground state of the Ln(III) ion.
The UC process is very interesting in the biological/medical context. Biological samples (like organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements being labeled to the UCNPs. Additionally, "theranostic" becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain requirements for the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, as well as the availability of functional groups for the fast and easy introduction of bio-recognition elements. The UCNPs used in this work were prepared with a solvothermal decomposition synthesis yielding particles with NaYF4 or NaGdF4 as host lattice. They have been doped with the Ln(III) ions Yb3+ and Er3+, which is only one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of expectations when compared to the corresponding bulk material (particles of at least μm size). The core and shell structures are not clearly separated from each other, which is a topic of discussion in the literature. Instead, there is a transition layer between the core and the shell, which relates to the migration of the dopants within the host lattice during the synthesis. This ion migration has been examined by time-resolved laser spectroscopy and the interlanthanoid resonance energy transfer (LRET) in the two host lattices mentioned above. The results are presented in two publications, which deal with core-shell-shell structured nanoparticles. The core is doped with the LRET-acceptor (either Nd3+ or Pr3+). The intermediate shell serves as an insulation shell of pure host lattice material, whose thickness has been varied within one set of samples of the same composition, so that the spatial separation of LRET-acceptor and -donor changes. The outer shell, made of the same host lattice, is doped with the LRET-donor (Eu3+). The effect of the increasing insulation shell thickness is significant, although the LRET cannot be suppressed completely.
In addition to the Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bioapplications. One result of this research has been published, using a promising ligand that equips the UCNP with bio-modifiable groups and has good potential for bio-medical applications. This particular ligand mimics naturally occurring mechanisms of mussel protein adhesion and of blood coagulation, which is why the UCNPs are encapsulated very effectively. At the same time, bio-functional groups are introduced. In a proof of concept, the encapsulated UCNP has been coupled successfully with a dye (representative of a biomarker) and the system’s photoluminescence properties have been investigated.
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this section provides a compact summary of all methods and techniques used. It begins with an introduction to Bayesian inference. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are discussed in more detail.
Part II presents two proof-of-concept studies in the field of seismology. The aim is to present the conceptual application of the introduced methods and techniques at moderate complexity. The example on traveltime tomography applies the approximation of non-linear relationships. The derivation of a covariance function using the wave equation is shown in the example of a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties has been developed.
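As a reminder of the basic building block mentioned above (regression with Gaussian processes), a plain Gaussian process regression step with direct posterior formulas could look like the sketch below; the squared-exponential kernel and the noise level are illustrative assumptions, not the covariance functions derived in the thesis.

```python
# Minimal Gaussian process regression sketch (illustrative kernel and noise level).
import numpy as np

def kernel(a, b, length=1.0, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

x_train = np.array([0.0, 1.0, 2.5, 4.0])
y_train = np.sin(x_train)
x_test = np.linspace(0.0, 5.0, 50)
noise = 1e-2

K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_s = kernel(x_test, x_train)
K_ss = kernel(x_test, x_test)

alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                                   # posterior mean
cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)         # posterior covariance
print(mean[:5], np.sqrt(np.diag(cov))[:5])
```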
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth’s outer core and has rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the SWARM or the CHAMP mission, support high-resolution reconstructions of the global field. From the 19th century on, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks and sediment records from lakes and seas. These indirect records come with (partially very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
The central result of this thesis, including all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back until 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For times more recent, ArchKalmag14k can be used to analyze features of the field under consideration of posterior uncertainties. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open source python package.
The complex hierarchical structure of bone undergoes a lifelong remodeling process, during which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts have to be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis by direct interactions with the surrounding matrix. If the LCN acts as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes actively change their material environment. Characterization methods of materials science are used to achieve the aim of detecting traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, where a bone of the mouse was loaded during its lifetime; 2) a healing model, where a bone of the mouse was cut to induce a healing response; and 3) a disease model, where the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles.
First, tibiae cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed ordered and dense LCN as well as thicker and longer mineral particles. The calcium content was unchanged between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. The specific research question addressed here was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clear, recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
Additionally, the question should be answered whether the LCN topology and the bone matrix material properties adapt to loading. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between the local LCN density and the local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, were stabilized with an external fixator and were allowed to heal for 21 days. Thus, the spatial variations in the LCN topology together with the mineral properties within different tissue types and their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. Calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density, between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone callus showed thinner mineral particles, but a higher degree of alignment in both the mineral particles and the LCN. The cortex showed the highest values for mineral length, thickness and degree of orientation. At the same time, the lacunae number density was 34% lower and the lacunar volume 40% smaller compared to bony callus. The transition zone between cortical and callus regions showed a continuous convergence of bone mineral properties and lacunae shape. Although only a few canaliculi connected the callus and the cortical region, this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types may suggest that osteocytes have an active role in the mineralization processes of healing.
A mouse model for the disease Marfan syndrome, which includes a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Thus, fibrillin-1 seems to play a role in skeletal homeostasis. Therefore, the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome showed longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, the cortical development appeared retarded, which was observed across all measured characteristics, i.e., lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. While the exact mechanism remains unknown, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and the extracellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore be understood not only as a means of removing defects or adapting to external mechanical stimuli, but also as a way of increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
Cosmic rays (CRs) are a ubiquitous and important component of astrophysical environments such as the interstellar medium (ISM) and the intracluster medium (ICM). Their plasma-physical interactions with electromagnetic fields strongly influence their transport properties. Effective models which incorporate the microphysics of CR transport are needed to study the effects of CRs on their surrounding macrophysical media. Developing such models is challenging because of the conceptual, length-scale, and time-scale separation between the microscales of plasma physics and the macroscales of the environment. Hydrodynamical theories of CR transport achieve this by capturing the evolution of the CR population in terms of statistical moments. In the well-established one-moment hydrodynamical model for CR transport, the dynamics of the entire CR population are described by a single statistical quantity, such as the commonly used CR energy density. In this work, I develop a new hydrodynamical two-moment theory for CR transport that expands the well-established hydrodynamical model by including the CR energy flux as a second independent hydrodynamical quantity. I detail how this model accounts for the interaction between CRs and gyroresonant Alfvén waves. The small-scale magnetic fields associated with these Alfvén waves scatter CRs, which fundamentally alters CR transport along large-scale magnetic field lines. This leads to the effects of CR streaming and diffusion, which are both captured within the presented hydrodynamical theory. I use an Eddington-like approximation to close the hydrodynamical equations and investigate the accuracy of this closure relation by comparing it to higher-order approximations of CR transport. In addition, I develop a finite-volume scheme for the new hydrodynamical model and adapt it to the moving-mesh code Arepo. This scheme is applied in a simulation of a CR-driven galactic wind. I investigate how CRs launch the wind and perform a statistical analysis of CR transport properties inside the simulated circumgalactic medium (CGM). I show that the new hydrodynamical model can be used to explain the morphological appearance of a particular type of radio filamentary structure found inside the central molecular zone (CMZ). I argue that these harp-like features are synchrotron-radiating CRs which were injected into braided magnetic field lines by a point-like source such as the stellar wind of a massive star or a pulsar. Lastly, I present the finite-volume code Blinc, which uses adaptive mesh refinement (AMR) techniques to perform simulations of radiation and magnetohydrodynamics (MHD). The mesh of Blinc is block-structured and represented in computer memory using a graph-based approach. I describe the implementation of the mesh graph and how a diffusion process is employed to achieve load balancing in parallel computing environments. Various test problems are used to verify the accuracy and robustness of the employed numerical algorithms.
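To illustrate the structure of such a two-moment model, the following schematic equations are a simplified sketch, not the thesis's exact formulation; the scattering rate \(\sigma\), the reduced speed of light \(\tilde{c}\), and the wave source term \(S_{\mathrm{wave}}\) are placeholders introduced here for illustration. They couple the CR energy density \(\varepsilon_{\mathrm{cr}}\) and the CR energy flux \(f_{\mathrm{cr}}\) along the unit vector \(\boldsymbol{b}\) of the large-scale magnetic field, with an Eddington-like closure relating the CR pressure to the energy density:

\[
\frac{\partial \varepsilon_{\mathrm{cr}}}{\partial t} + \boldsymbol{\nabla}\cdot\left(f_{\mathrm{cr}}\,\boldsymbol{b}\right) = S_{\mathrm{wave}}, \qquad
\frac{1}{\tilde{c}^{2}}\,\frac{\partial f_{\mathrm{cr}}}{\partial t} + \boldsymbol{b}\cdot\boldsymbol{\nabla}P_{\mathrm{cr}} = -\frac{\sigma}{\tilde{c}^{2}}\left[f_{\mathrm{cr}} - v_{\mathrm{A}}\left(\varepsilon_{\mathrm{cr}} + P_{\mathrm{cr}}\right)\right], \qquad
P_{\mathrm{cr}} = \frac{\varepsilon_{\mathrm{cr}}}{3}.
\]

In the limit of strong scattering, the flux equation relaxes to \(f_{\mathrm{cr}} \approx v_{\mathrm{A}}\,(\varepsilon_{\mathrm{cr}} + P_{\mathrm{cr}}) - (\tilde{c}^{2}/\sigma)\,\boldsymbol{b}\cdot\boldsymbol{\nabla}P_{\mathrm{cr}}\), i.e. streaming at the Alfvén speed \(v_{\mathrm{A}}\) plus field-aligned diffusion, which are exactly the two transport effects described above.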
This thesis argues that Hegel's Wissenschaft der Logik attempts to take seriously a conception of absoluteness according to which there can be nothing outside the absolute. This becomes apparent already at the beginning of the Logic: if there can be nothing outside the absolute, then the beginning, too, must not lie outside the absolute. Consequently, the beginning can only be made with the absolute. Positing the beginning as absolute, however, is at the same time a test of the beginning's absoluteness. This test the beginning cannot pass, for it lies in the nature of a beginning to be only a beginning and not the whole, and therefore not the absolute. The beginning is furthest removed from being the whole and must consequently be regarded as the least absolute within the Logic. It is therefore both: a beginning with the absolute and a beginning with the least absolute. The Logic thus contradicts itself already in its beginning. It must free itself from this contradiction, and this liberation drives the movement away from the beginning and generates the progression of the Logic. The initial determination sublates itself and passes over into the determination that follows it. That following determination is in turn posited as absolute, likewise fails to live up to this positing, and sublates itself into its successor. Every determination that follows the beginning runs through this movement of being posited as absolute, failing at it, and sublating itself, until, at the very end of the Logic, this very movement is recognized as the only thing capable of satisfying the claim to absoluteness. For if every determination is subject to this movement, then there is nothing outside this movement, and therefore it must be the absolute that was sought.
On its way to the true meaning of the absolute, the Logic returns again and again to the determination of its beginning in order to recover presuppositions that had to be made in connection with that beginning. For recovering these presuppositions, the following passages are of particular interest: the transition into the logic of essence, the transition into the logic of the concept, and the final chapter. For at the very last as well, in its end, the Logic returns to its beginning. Accordingly, it can be said with Hegel: the first is also the last, and the last is also the first.
Stellar interferometry is the only method in observational astronomy for obtaining the highest-resolution images of astronomical targets. The method is based on combining light from two or more separate telescopes to obtain the complex visibility, which contains information about the brightness distribution of an astronomical source. Applications of stellar interferometry have made significant contributions to exciting research areas of astronomy and astrophysics, including precise measurements of stellar diameters, imaging of stellar surfaces, observations of circumstellar disks around young stellar objects, tests of the predictions of Einstein's general relativity at the Galactic Center, and the direct search for exoplanets, to name a few. One important related technique is aperture masking interferometry, pioneered in the 1960s, which uses a mask with holes at the re-imaged pupil of the telescope; the light from the holes is combined using the principle of stellar interferometry. While this increases the resolution, it comes with a disadvantage: due to the finite size of the holes, the majority of the starlight (typically > 80%) is lost at the mask, limiting the signal-to-noise ratio (SNR) of the output images. This restriction of aperture masking to bright targets can be avoided with pupil remapping interferometry, a technique combining aperture masking interferometry with advances in photonic technologies based on single-mode fibers. Owing to their inherent spatial filtering properties, the single-mode fibers can be placed at the focal plane of the re-imaged pupil, allowing the whole pupil of the telescope to be utilized to produce high-dynamic-range as well as high-resolution images. Thus, pupil remapping interferometry is one of the most promising application areas in the emerging field of astrophotonics.
At the heart of an interferometric facility lies a beam combiner, whose primary function is to combine light to obtain high-contrast fringes. A beam combiner can be as simple as a beam splitter or an anamorphic lens combining light from two apertures (or telescopes), or as complex as a cascade of beam splitters and lenses combining light from more than two apertures. With the rise of astrophotonics, however, interferometric facilities across the globe are increasingly employing photonic technologies, using single-mode fibers or integrated optics (IO) chips as an efficient way to combine light from several apertures. The state-of-the-art instrument GRAVITY at the Very Large Telescope Interferometer (VLTI) uses an IO-based beam combiner reaching visibility accuracies better than 0.25%, roughly 50 times more precise than instruments a few decades ago.
Therefore, in the context of IO-based components for applications in stellar interferometry, this thesis describes the work towards the development of a three-dimensional (3-D) IO device: a monolithic astrophotonic component containing both pupil remappers and a discrete beam combiner (DBC). In this work, the pupil remappers are 3-D single-mode waveguides in a glass substrate that collect light from the re-imaged pupil of the telescope and feed it to a DBC, where the combination takes place. The DBC is a lattice of 3-D single-mode waveguides which interact through evanescent coupling. By observing the output powers of the single-mode waveguides of the DBC, the visibilities are retrieved using a calibrated transfer matrix U of the device.
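To illustrate the retrieval principle, the following is a minimal sketch under assumed names and shapes (the matrix name V2PM, the number of outputs, and the coherence-vector layout are illustrative assumptions, not the calibration scheme of this work): the measured output powers are modelled as a linear map of the input coherence vector, so a calibrated matrix can be pseudo-inverted to recover the complex visibilities.

```python
import numpy as np

# Sketch: retrieve complex visibilities from DBC output powers.
# Illustrative assumptions: 4 inputs, n_out outputs, and a calibrated real-valued
# matrix "V2PM" mapping the coherence vector
# J = [photometries, Re(coherences), Im(coherences)] to the output powers P.

rng = np.random.default_rng(0)
n_in, n_base = 4, 6                    # 4 apertures -> 6 baselines
n_out = 23                             # assumed number of DBC output waveguides
V2PM = rng.normal(size=(n_out, n_in + 2 * n_base))   # stands in for the calibrated matrix

# Simulated "true" coherence vector (positive photometries) and the resulting powers
J_true = np.concatenate([rng.uniform(1.0, 2.0, n_in),
                         rng.normal(scale=0.3, size=2 * n_base)])
P = V2PM @ J_true + rng.normal(scale=1e-3, size=n_out)   # add measurement noise

# Retrieval: least-squares inversion of the calibrated matrix
J_est = np.linalg.pinv(V2PM) @ P

# Normalised complex visibilities per baseline (photometric normalisation sketched
# as the geometric mean flux of the two apertures forming each baseline)
flux = J_est[:n_in]
re, im = J_est[n_in:n_in + n_base], J_est[n_in + n_base:]
pairs = [(i, j) for i in range(n_in) for j in range(i + 1, n_in)]
vis = [(re[k] + 1j * im[k]) / np.sqrt(flux[i] * flux[j])
       for k, (i, j) in enumerate(pairs)]
print(np.abs(vis))
```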
The feasibility of the DBC for retrieving visibilities had already been studied theoretically and experimentally in the literature, but only in laboratory tests with monochromatic light sources. A part of this work therefore extends these studies by investigating the response of a 4-input DBC to a broad-band light source. Hence, the objectives of this thesis are the following: 1) to design an IO device for broad-band light operation such that accurate and precise visibilities can be retrieved experimentally in the astronomical H-band (1.5-1.65 μm), and 2) to validate the DBC as a possible beam combination scheme for future interferometric facilities through on-sky testing at the William Herschel Telescope (WHT).
This work comprised the design of three different 3-D IO devices. One of the most popular methods for fabricating 3-D photonic components in a glass substrate is ultra-fast laser inscription (ULI). The manufacturing of the designed devices was therefore outsourced to Politecnico di Milano as part of an iterative fabrication process using their state-of-the-art ULI facility. The devices were then characterized using a 2-beam Michelson interferometric setup, obtaining both monochromatic and polychromatic visibilities. The retrieved visibilities for all devices were in good agreement with the simulation results of a DBC, which confirms both the repeatability of the ULI process and the stability of the Michelson setup, thus fulfilling the first objective.
The best-performing device was then selected for the pupil remapping of the WHT using a different optical setup consisting of a deformable mirror and a microlens array. The device successfully collected stellar photons from Vega and Altair. The visibilities were retrieved using a previously calibrated transfer matrix U but showed significant deviations from the expected results. Based on the analysis of comparable simulations, it was found that these deviations were primarily caused by the limited SNR of the stellar observations, thus constituting a first step towards fulfilling the second objective.
Fiber-based microfluidics has undergone many innovative developments in recent years, with exciting examples of portable, cost-effective and easy-to-use detection systems already being used in diagnostic and analytical applications. In water samples, Legionella pose a serious risk as human pathogens. Infection occurs through inhalation of aerosols containing Legionella cells and can cause severe, potentially fatal pneumonia. In the case of Legionella contamination of water-bearing systems or Legionella infection, it is essential to find the source of the contamination as quickly as possible to prevent further infections. In drinking, industrial and wastewater monitoring, the culture-based method is still the most commonly used technique for detecting Legionella contamination. New, innovative approaches are needed to overcome the dependence on laboratory analysis, the long analysis times of 10-14 days, and the inaccuracy of measured values reported in colony-forming units (CFU). In all areas of application, for example in public, commercial or private facilities, rapid and precise analysis is required, ideally on site.
In this PhD thesis, all necessary single steps for a rapid DNA-based detection of Legionella were developed and characterized on a fiber-based miniaturized platform. In the first step, a fast, simple and device-independent chemical lysis of the bacteria and extraction of genomic DNA was established. Subsequently, different materials were investigated with respect to their non-specific DNA retention. Glass fiber filters proved to be particularly suitable, as they allow recovery of the DNA sample from the fiber material in combination with dedicated buffers and exhibit low autofluorescence, which was important for fluorescence-based readout.
A fiber-based electrophoresis unit was developed to migrate different oligonucleotides within a fiber matrix by applying an electric field. A particular advantage over lateral flow assays is the targeted movement, even after the fiber is saturated with liquid. For this purpose, the entire process of fiber selection, fiber chip patterning, combination with printed electrodes, and testing of retention and migration of different DNA samples (single-stranded, double-stranded and genomic DNA) was carried out. DNA could be pulled across the fiber chip in an electric field of 24 V/cm within 5 minutes, remained intact, and could be used for subsequent detection assays, e.g. polymerase chain reaction (PCR) or fluorescence in situ hybridization (FISH). Fiber electrophoresis could also be used to separate DNA from other components, e.g. proteins or cell lysates, or to pull DNA through multiple layers of the glass microfiber. In this way, different fragments experienced a moderate, size-dependent separation. Furthermore, this arrangement offers the possibility that different detection reactions could take place in different layers at a later stage. Electric current and potential measurements were recorded to investigate the local distribution of the sample during migration. While an increase in the current signal at high concentrations indicated the presence of DNA samples, initial experiments with methylene-blue-stained DNA showed a temporal sequence of signals, indicating sample migration along the chip.
For the specific detection of Legionella DNA, FISH-based detection with a molecular beacon probe was tested on the glass microfiber. A specific region within the 16S rRNA gene of Legionella spp. served as the target. For this detection, suitable reaction conditions and a readout unit first had to be set up. Subsequently, the sensitivity of the probe was tested with the reverse complementary target sequence and its specificity with several DNA fragments that differed from the target sequence. Compared to other DNA sequences of similar length also found in Legionella pneumophila, only the target DNA was specifically detected on the glass microfiber. If only a single or double base exchange is present, however, the probe can no longer distinguish between target and non-target DNA; this level of specificity can be achieved with other methods, such as melting point determination, as was briefly indicated here. The molecular beacon probe could be dried on the glass microfiber and stored at room temperature for more than three months, after which it was still capable of detecting the target sequence. Finally, the feasibility of fiber-based FISH detection for genomic Legionella DNA was tested. Without further processing, the probe was unable to detect its target sequence in the complex genomic DNA. However, after selection and application of appropriate restriction enzymes, specific detection of Legionella DNA against other aquatic pathogens with similar fragment patterns, such as Acinetobacter haemolyticus, was possible.
This dissertation aims to expand and specify the diagnostic options for the clinical picture of acquired dyslexia in German-speaking persons with dyslexia (PmD).
The literature discusses various language processing models that attempt to explain the cognitive process of written language processing. All considerations, data collections and analyses in this dissertation are based on the theoretical assumptions of the cognitive dual-route model of reading, which distinguishes between lexical-semantic and segmental, sub-lexical processing in reading and can thus represent the mutually independent abilities to read known and unknown words. The cognitively oriented diagnostic instrument DYMO (Dyslexie Modellorientiert), developed as part of this dissertation, is intended to assess the reading abilities of PmD in order to locate the reading impairment as precisely as possible within the model and to provide a basis for planning reading-related therapy. It also takes into account components of the dual-route reading model that have not yet been established in the German-speaking context. These include subcomponents of visual analysis that are responsible for the identification of letters and the coding of letter positions, and subcomponents of the segmental reading route that map the sublexical reading process step by step. The item material of DYMO is controlled for a range of psycholinguistic variables, including variables that could not previously be assessed systematically in dyslexia diagnostics for German-speaking PmD, such as word length and the graphemic complexity of pseudowords.
The first publication underlying this dissertation (Original Article I) deals with the parameters and model components that are decisive for a comprehensive, model-based diagnostic assessment of acquired dyslexia. In addition, considerations on the categorization of error types are presented.
The second publication (Original Article II) presents the diagnostic instrument DYMO. The accompanying manual provides detailed information on the structure and construction of the instrument, on the administration and scoring of the individual subtests, and on the classification of a performance into a performance range. Two case studies of PmD, described in detail, illustrate administration, scoring, interpretation, and the derivation of therapy goals. The results of these case descriptions demonstrate the diagnostic contribution of DYMO and show that explicitly examining the subcomponents of visual analysis and of the segmental reading route, as well as including the variables word length and graphemic complexity, can specify the reading profile and concretize the entry point for therapy.
The third publication (Original Article III) presents a systematic comparative study based on a case series of twelve PmD, contrasting DYMO with another cognitively based diagnostic instrument. It is discussed to what extent DYMO can be a useful addition to the diagnostic process for acquired dyslexias. In addition, mildly and severely impaired PmD are compared in group analyses to examine whether DYMO offers particular added value for mildly impaired PmD. Because of DYMO's more complex item material (for example, due to the control of word length), it was hypothesized that mildly impaired PmD would show more conspicuous reading performances in DYMO subtests than in tasks of the other diagnostic instrument. This hypothesis was partially confirmed. Mildly impaired PmD showed length effects more frequently than severely impaired PmD. Overall, however, the group difference was not as pronounced as expected.
Seventeen PmD were tested with the criterion-referenced, normed and finalized DYMO material. Detailed findings for each individual PmD, together with the resulting therapy implications, show that in particular specifying a segmental reading deficit in the case of severely impaired pseudoword reading can contribute to an extended statement about the locus of impairment within the model. This underlines the high diagnostic value of the DYMO subtests and the relevance of a specific and detailed model-based assessment for explicit, individual therapy planning in acquired dyslexias.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. In combination with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created from a simple sketch at the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design and 3D printing, combined with biomimicry, can lead to new analysis methods in science and new medical devices in medicine.
The Electrical Discharge Machining (EDM) process is commonly applied to shape or mold hard metals that are difficult to work with conventional machinery. A workpiece submerged in an electrolyte is machined while in close vicinity to an electrode. When a high voltage is applied between the workpiece and the electrode, sparks are generated that create cavities on the substrate; the removed material is flushed away by the electrolyte. Such surfaces are usually analysed based on roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that produces craters and ridges on an initially flat substrate. These modelled substrates were then analysed with the curvature analysis method at different processing times. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for use in the design of new cell culture substrates for stem cells.
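A toy version of such a digital impact model can be sketched as follows; this is an illustrative reconstruction under assumed parameters (crater radius, depth, grid size), not the model or the curvature analysis used in this work. Parabolic craters are subtracted from a height map at random positions, and a simple discrete curvature statistic is evaluated as the surface evolves.

```python
import numpy as np

# Toy sketch of an EDM-like impact model (illustrative assumptions throughout):
# random crater impacts remove material from a flat height map; a rough
# curvature statistic of the surface is then estimated by finite differences.

def apply_impact(height, radius=4.0, depth=1.0, rng=None):
    """Subtract one parabolic crater at a random position (in place)."""
    rng = rng or np.random.default_rng()
    n = height.shape[0]
    cx, cy = rng.integers(0, n, size=2)
    y, x = np.ogrid[:n, :n]
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    crater = np.clip(depth * (1.0 - r2 / radius**2), 0.0, None)  # parabolic profile
    height -= crater

def curvature_proxy(height):
    """Rough curvature estimate via the discrete Laplacian of the height map."""
    hxx = np.gradient(np.gradient(height, axis=1), axis=1)
    hyy = np.gradient(np.gradient(height, axis=0), axis=0)
    return hxx + hyy

rng = np.random.default_rng(42)
h = np.zeros((128, 128))
checkpoints = {100, 1000, 10000}
for i in range(1, 10001):
    apply_impact(h, rng=rng)
    if i in checkpoints:
        k = curvature_proxy(h)
        print(f"{i:>5} impacts: curvature std = {k.std():.3f}")
```

If the surface indeed reaches a statistical equilibrium, the curvature statistic should stop changing noticeably between the later checkpoints.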
The Venus flytrap can shut its jaws at an amazing speed. This shutting mechanism may be interesting for use in science and is an example of a so-called mechanically bi-stable system, i.e. a system with two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model called the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is bi-stable. Developing this idea further using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again; this could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers located on their side. The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal wall using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how a device mimicking the tapeworm could be created. In the end, a prototype was created that was able to attach to a pork loin at an underpressure of 20 kPa and to eject its hooks at an underpressure of 50 kPa or above.
These three projects demonstrate how digital tools and biomimicry can be used together to arrive at applicable solutions in science and in medicine.
It is estimated that data scientists spend up to 80% of their time exploring, cleaning, and transforming their data. A major reason for this expenditure is the lack of knowledge about the data being used, which often come from different sources and have heterogeneous structures. As a means of describing various properties of data, metadata can help data scientists understand and prepare their data, saving time for innovative and valuable data analytics. However, metadata do not always exist: some data file formats are not capable of storing them; metadata may have been deleted for privacy reasons; and legacy data may have been produced by systems that were not designed to store and handle metadata. As data are being produced at an unprecedentedly fast pace and stored in diverse formats, manually creating metadata is not only impractical but also error-prone, demanding automatic approaches for metadata detection.
In this thesis, we focus on detecting metadata in CSV files, a type of plain-text file that, similar to spreadsheets, may contain different kinds of content at arbitrary positions. We propose a taxonomy of metadata in CSV files and specifically address the discovery of three kinds of metadata: line and cell types, aggregations, and primary keys and foreign keys.
Data are organized in an ad-hoc manner in CSV files and do not follow the fixed structure assumed by common data processing tools. Detecting the structure of such files is a prerequisite for extracting information from them, which can be addressed by detecting the semantic type, such as header, data, derived, or footnote, of each line or each cell. We propose the supervised-learning approach Strudel to detect the types of lines and cells. CSV files may also include aggregations. An aggregation represents the arithmetic relationship between a numeric cell and a set of other numeric cells. Our proposed AggreCol algorithm is capable of detecting aggregations of five arithmetic functions in CSV files. Note that stylistic features, such as font style and cell background color, do not exist in CSV files. Our proposed algorithms therefore address the respective problems using only content, contextual, and computational features.
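As a simple illustration of the aggregation-detection task, the naive sketch below (not the AggreCol algorithm itself; column names and tolerances are invented for the example) tests whether a numeric column equals the row-wise sum of a combination of other numeric columns.

```python
import itertools
import pandas as pd

def find_sum_aggregations(df, tol=1e-6, max_components=3):
    """Naive check whether one numeric column is the row-wise sum of others.
    Illustration only; real detectors also cover subtraction, products, averages,
    row-wise aggregations, and must tolerate formatting noise in CSV files."""
    numeric = df.select_dtypes("number").columns
    found = []
    for target in numeric:
        others = [c for c in numeric if c != target]
        for k in range(2, max_components + 1):
            for combo in itertools.combinations(others, k):
                if (df[list(combo)].sum(axis=1) - df[target]).abs().max() <= tol:
                    found.append((target, combo))
    return found

df = pd.DataFrame({"q1": [10, 5], "q2": [20, 6], "q3": [30, 10], "total": [60, 21]})
print(find_sum_aggregations(df))   # [('total', ('q1', 'q2', 'q3'))]
```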
Storing a relational table is another common usage of CSV files. Primary keys and foreign keys are important metadata for relational databases, but they are usually not present for database instances dumped as plain-text files. We propose the HoPF algorithm to holistically detect both constraints in relational databases. Our approach is capable of distinguishing true primary and foreign keys from the large number of spurious unique column combinations and inclusion dependencies that can be detected by state-of-the-art data profiling algorithms.
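The following naive sketch (illustrative only, not the HoPF algorithm; table and column names are invented) shows the two raw ingredients such an approach starts from: unique column combinations as primary-key candidates and unary inclusion dependencies as foreign-key candidates.

```python
import itertools
import pandas as pd

def unique_column_combinations(df, max_size=2):
    """All column combinations up to max_size whose value combinations are unique
    per row (primary-key candidates). Naive enumeration for illustration."""
    uccs = []
    for k in range(1, max_size + 1):
        for cols in itertools.combinations(df.columns, k):
            if not df.duplicated(subset=list(cols)).any():
                uccs.append(cols)
    return uccs

def inclusion_dependencies(df_a, df_b):
    """Unary inclusion dependencies: columns of df_a whose value set is contained
    in some column of df_b (foreign-key candidates)."""
    inds = []
    for a in df_a.columns:
        for b in df_b.columns:
            if set(df_a[a].dropna()) <= set(df_b[b].dropna()):
                inds.append((a, b))
    return inds

orders = pd.DataFrame({"order_id": [1, 2, 3], "cust": [7, 8, 7]})
customers = pd.DataFrame({"cust_id": [7, 8, 9], "name": ["A", "B", "C"]})
print(unique_column_combinations(orders))          # PK candidates of orders
print(inclusion_dependencies(orders, customers))   # e.g. ('cust', 'cust_id')
```

Even in this tiny example, spurious candidates appear (e.g. the non-minimal combination of both order columns), which is exactly the pruning problem a holistic detector has to solve.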
Infectious diseases are an increasing threat to biodiversity and human health. Therefore, developing a general understanding of the drivers shaping host-pathogen dynamics is of key importance in both ecological and epidemiological research. Disease dynamics are driven by a variety of interacting processes such as individual host behaviour, spatiotemporal resource availability or pathogen traits like virulence and transmission. External drivers such as global change may modify the system conditions and, thus, the disease dynamics. Despite their importance, many of these drivers are often simplified and aggregated in epidemiological models and the interactions among multiple drivers are neglected.
In my thesis, I investigate disease dynamics using a mechanistic approach that includes both bottom-up effects - from landscape dynamics to individual movement behaviour - and top-down effects - from pathogen virulence to host density and contact rates. To this end, I extended an established spatially explicit individual-based model that simulates epidemiological and ecological processes stochastically to incorporate a dynamic resource landscape that can be shifted away from the timing of host population dynamics (chapter 2). I also added the evolution of pathogen virulence along a theoretical virulence-transmission trade-off (chapter 3). In chapter 2, I focus on bottom-up effects, specifically how a temporal shift of resource availability away from the timing of biological events of the host species - as expected under global change - scales up to host-pathogen interactions and disease dynamics. My results show that the formation of temporary disease hotspots in combination with directed individual movement acted as key drivers for pathogen persistence even under highly unfavourable conditions for the host. Even with drivers like global change further increasing the likelihood of unfavourable interactions between host species and their environment, pathogens can continue to persist with their hosts. In chapter 3, I demonstrate that the top-down effect caused by pathogen-associated mortality on the host population can be mitigated by selection for less virulent pathogen strains when host densities are reduced through mismatches between seasonal resource availability and host life-history events. In chapter 4, I combined parts of both theoretical models into a new model that includes individual host movement decisions and the evolution of pathogen virulence to simulate pathogen outbreaks in realistic landscapes. I was able to match simulated patterns of pathogen spread to observed patterns from long-term outbreak data of classical swine fever in wild boar in Northern Germany. The observed disease course was best explained by a simulated highly virulent strain, whereas sampling schemes and vaccination campaigns could explain differences in the age distribution of infected hosts. My model helps to understand and disentangle how the combination of individual decision-making and the evolution of virulence can act as important drivers of pathogen spread and persistence.
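As background on the trade-off assumption mentioned above, a standard textbook formulation (a generic sketch; the specific functional form used in the thesis is not reproduced here) lets transmission \(\beta(\alpha)\) increase with virulence \(\alpha\) with diminishing returns, so that pathogen fitness, measured by the basic reproduction number, peaks at an intermediate virulence:

\[
R_{0}(\alpha) = \frac{\beta(\alpha)\,S}{\mu + \alpha + \gamma}, \qquad
\frac{\mathrm{d}R_{0}}{\mathrm{d}\alpha}\bigg|_{\alpha^{*}} = 0
\;\Longleftrightarrow\;
\frac{\beta'(\alpha^{*})}{\beta(\alpha^{*})} = \frac{1}{\mu + \alpha^{*} + \gamma},
\]

where \(S\) is the density of susceptible hosts, \(\mu\) the background mortality and \(\gamma\) the recovery rate. In the spatially structured setting of chapter 3, reduced host densities shift this selection towards less virulent strains, which underlies the mitigation effect described above.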
As I show across the chapters of this thesis, the interplay of both bottom-up and top-down processes is a key driver of disease dynamics in spatially structured host populations, as they ultimately shape host densities and contact rates among moving individuals. My findings are an important step towards a paradigm shift in disease ecology away from simplified assumptions towards the inclusion of mechanisms, such as complex multi-trophic interactions, and their feedbacks on pathogen spread and disease persistence. The mechanisms presented here should be at the core of realistic predictive and preventive epidemiological models.
The deciduous needle tree larch (Larix Mill.) covers more than 80% of the Asian boreal forests. Only a few Larix species constitute these vast forests, and these species differ markedly in their ecological traits, most importantly in their ability to grow on and stabilize the underlying permafrost. The pronounced dominance of the summergreen larches makes the Asian boreal forests unique, as the rest of the northern hemisphere's boreal forests are almost exclusively dominated by evergreen needle-leaf forests. Global warming is impacting the whole world but is especially pronounced in the Arctic and boreal regions. Although adapted to extreme climatic conditions, larch forests are sensitive to varying climatic conditions. Owing to their sheer extent, changes in Asian larch forests, such as range shifts or changes in species composition, and the resulting vegetation-climate feedbacks are of global relevance. It is, however, still uncertain whether larch forests will persist under the ongoing warming climate or whether they will be replaced by evergreen forests. It is therefore of great importance to understand how these ecosystems will react to future climate warming and whether they will maintain their dominance. One step towards a better understanding of larch dynamics is to study how the vast dominant forests developed and why they only established in northern Asia. A second step is to study how the species reacted to past changes in climate.
The first objective of this thesis was to review and identify factors promoting Asian larch dominance. I achieved this by synthesizing and comparing reported larch occurrences and influencing components on the northern hemisphere continents in the present and in the past. The second objective was to find a way to directly study past Larix populations in Siberia, and specifically their genetic variation, enabling the study of geographic movements. For this, I established chloroplast enrichment by hybridization capture from sedimentary ancient DNA (sedaDNA) isolated from lake sediment records. The third objective was to use the established method to track past larch populations, their glacial refugia during the Last Glacial Maximum (LGM) around 21,000 years before present (21 ka BP), and their post-glacial migration patterns.
To study factors promoting larch dominance, I compared the present state of larch species ranges, areas of dominance, their bioclimatic niches, and their distribution on different extents and thaw depths of permafrost. The species comparison showed that the bioclimatic niches of the American and Asian species overlap greatly, and that it is only in extremely continental climates that solely the Asian larch species can persist. I revealed that the area of dominance is strongly connected to permafrost extent but less linked to seasonal permafrost thaw depths. Comparisons of the larch paleorecord between the continents suggest differences in recolonization history. Outside of northern Asia and Alaska, glacial refugial populations of larch were confined to southern regions, so recolonization could only occur as migration from south to north. Alaskan larch populations could not establish wide-ranging dominant forests, which could be related to their genetic depletion as a separated refugial population. In Asia, it is still unclear whether the northern refugial populations contributed to and enhanced the postglacial colonization or whether they were replaced by populations invading from the south in the course of climate warming. Asian larch dominance is thus promoted partly by adaptations to extremely continental climates and to growth on continuous permafrost, but could also be connected to differences in the glacial survival and recolonization history of Larix species.
Except for extremely rare macrofossil findings of fossilized cones, traditional methods to study past vegetation are not able to distinguish between larch species or populations. Within the scope of this thesis, I therefore established a method to retrieve genetic information from past larch populations in order to distinguish between species. Using the Larix chloroplast genome as a target, I successfully applied DNA target enrichment by hybridization capture to sedaDNA samples from lake records and showed that it is able to distinguish between larch species. I then used the method on samples from lake records across Siberia dating back up to 50 ka BP. The results allowed me to address the question of glacial survival and the post-glacial recolonization mode of Siberian larch species. The analyzed pattern showed that LGM refugia were almost exclusively constituted by L. gmelinii, even at sites of current L. sibirica distribution. For the included study sites, L. sibirica migrated into its extant northern distribution area only in the Holocene. Consequently, the post-glacial recolonization of L. sibirica was not enhanced by northern glacial refugia. For sites in the extant distribution area of L. gmelinii, the absence of a genetic turnover points to a continuous population rather than an invasion from southern refugia. The results suggest that climate has a strong influence on the distribution of Larix species and that the species may also respond differently to future climate warming. Because the species differ in their ecological characteristics, species distribution is also relevant with respect to further feedbacks between vegetation and climate.
With this thesis, I give an overview of present and past larch occurrences and evaluate which factors promote their dominance. Furthermore, I provide the tools to study past Larix species and give first important insights into the glacial history of Larix populations.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ must also be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
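A minimal sketch of how the transitive closure over pairwise duplicate classifications is typically computed (using a union-find structure; record names here are purely illustrative) also makes the problem visible: once pairs are merged into clusters, further duplicate pairs are implied regardless of their pairwise similarity.

```python
class UnionFind:
    """Union-find (disjoint sets) to compute the transitive closure of duplicate pairs."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Output of a pairwise classifier: pairs judged to be duplicates by similarity
duplicate_pairs = [("r1", "r2"), ("r2", "r3"), ("r4", "r5")]

uf = UnionFind()
for a, b in duplicate_pairs:
    uf.union(a, b)

clusters = {}
for record in {r for pair in duplicate_pairs for r in pair}:
    clusters.setdefault(uf.find(record), set()).add(record)
print(list(clusters.values()))   # [{'r1', 'r2', 'r3'}, {'r4', 'r5'}] in some order
# Note: ⟨r1, r3⟩ is now implied to be a duplicate even if the pairwise similarity
# between r1 and r3 was below the threshold -- the contradiction discussed above.
```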
First, we experimentally evaluate the effect of an increasing data volume on the selection of the threshold used to classify a record pair as duplicate or non-duplicate. Our experiments show that, independently of the pair selection algorithm and the similarity measure used, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grow over time, previously selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
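The sketch below illustrates the underlying idea: a simplified duplicate-count-style adaptive window in the spirit of DCS, not the exact DCS++ algorithm with its transitive-closure bookkeeping; the sorting key and the toy pairwise classifier are invented for the example. Records are sorted by a key, and the window is enlarged only while duplicates keep being found.

```python
def adaptive_sorted_neighborhood(records, key, is_duplicate, w=3):
    """Sorted Neighborhood with a duplicate-count-style adaptive window (simplified sketch).
    records: list of record dicts; key: sorting-key function;
    is_duplicate: pairwise classifier; w: initial window size."""
    records = sorted(records, key=key)
    duplicates = set()
    for i, rec in enumerate(records):
        window_end = min(i + w, len(records))
        j = i + 1
        while j < window_end:
            if is_duplicate(rec, records[j]):
                duplicates.add((i, j))
                # Duplicate found: extend the window to look for more cluster members.
                window_end = min(max(window_end, j + w), len(records))
            j += 1
    return duplicates

people = [{"name": "Jon Smith"}, {"name": "John Smith"}, {"name": "J. Smith"}, {"name": "Mary Poe"}]
dups = adaptive_sorted_neighborhood(
    people,
    key=lambda r: r["name"].replace(".", "").lower(),
    is_duplicate=lambda a, b: a["name"][0] == b["name"][0]
                              and a["name"].split()[-1] == b["name"].split()[-1],
)
print(dups)   # index pairs within the sorted list, e.g. {(0, 1), (0, 2), (1, 2)}
```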
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure is applied to the pairwise classifications to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially regarding the precision of the results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best-performing clustering approach for duplicate detection, although its runtime is longer than that of Markov Clustering due to its subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
Objective: The behaviors of endothelial cells and mesenchymal stem cells are markedly influenced by the mechanical properties of their surrounding microenvironments. Here, electrospun fiber meshes with various mechanical characteristics were developed from polyetheresterurethane (PEEU) copolymers. The goal of this study was to explore how fiber mesh stiffness affects the shape, growth, migration, and angiogenic potential of endothelial cells. Furthermore, the effect of the elastic modulus (E-modulus) of the fiber meshes on the osteogenic potential of human adipose-derived stem cells (hADSCs) was investigated.
Methods: Polyetheresterurethane (PEEU) polymers with various poly(p-dioxanone) (PPDO) to poly(ε-caprolactone) (PCL) weight percentages (40 wt.%, 50 wt.%, 60 wt.%, and 70 wt.%) were synthesized, termed PEEU40, PEEU50, PEEU60, and PEEU70, respectively. Electrospinning was used to prepare the PEEU fiber meshes. The effects of PEEU fiber meshes with varying elasticities on the shape, growth, migration and angiogenic potential of human umbilical vein endothelial cells (HUVECs) were characterized. To determine how the E-modulus of the fiber meshes affects the osteogenic potential of hADSCs, cellular and nuclear morphologies as well as osteogenic differentiation abilities were evaluated.
Results: With increasing stiffness of the PEEU fiber meshes, the aspect ratios of HUVECs cultivated on the PEEU materials increased. HUVECs cultivated on stiffer fiber meshes (4.5 ± 0.8 MPa) displayed a considerably greater proliferation rate and migration velocity, as well as an increased tube formation capability, compared with cells cultivated on softer fiber meshes (2.6 ± 0.8 MPa). Furthermore, hADSCs adhering to the stiffest fiber meshes (PEEU70) had a more elongated shape than those cultivated on softer fiber meshes. The hADSCs grown on the softer PEEU40 fiber meshes showed a reduced nuclear aspect ratio (width to height) compared with those cultivated on the stiffer fiber meshes. Culturing hADSCs on stiffer fibers improved their osteogenic differentiation potential. Compared with cells cultured on PEEU40, osteocalcin expression and alkaline phosphatase (ALP) activity increased by 73 ± 10% and 43 ± 16%, respectively, in cells cultured on PEEU70.
Conclusion: The mechanical characteristics of the substrate are crucial in modulating cell behavior. These findings indicate that adjusting the elasticity of fiber meshes might be a useful method for controlling blood vessel development and regeneration. Furthermore, the mechanical characteristics of PEEU fiber meshes might be modified to control the osteogenic potential of hADSCs.
The current COVID-19 pandemic clearly shows how infectious diseases can spread worldwide. In addition to viral diseases, multi-resistant bacterial pathogens are also spreading globally. Accordingly, there is a great need to identify infected individuals through early detection and to interrupt chains of infection.
Conventional culture-based methods require minimally invasive or invasive samples and take too long for screening purposes. Fast, non-invasive methods are therefore needed.
In classical Greece, physicians relied, among other things, on their sense of smell to differentiate between infections and other diseases. These characteristic odors are volatile organic compounds (VOCs) that are produced during the metabolism of an organism. Animals with a better sense of smell can be trained to distinguish certain pathogens by their odor. However, the use of animals in everyday clinical practice is not feasible, which makes the technical analysis of these VOCs an obvious alternative.
One technical method for distinguishing these VOCs is ion mobility spectrometry coupled with a multi-capillary gas chromatography column (MCC-IMS). This has proven to be a fast, sensitive and reliable method.
It is known that different bacteria produce different VOCs, and thus their own specific odors, as a result of their metabolism. In the first step of this work, it was shown that different bacteria can be differentiated in vitro on the basis of their VOCs after a short incubation time of 90 minutes. Analogous to diagnosis by biochemical test series, a hierarchical classification of the bacteria was possible.
In contrast to bacteria, viruses have no metabolism of their own. Whether virus-infected cells release different VOCs than non-infected cells was examined in cell cultures. It was shown that the VOC fingerprints of cell cultures infected with respiratory syncytial virus (RSV) differ from those of non-infected cells.
Viral infections in an intact organism differ from cell cultures in that, in addition to changes in cellular metabolism, VOCs can also be released by defense mechanisms.
To examine whether infections in the intact organism can likewise be distinguished on the basis of VOCs, the exhaled breath of patients with and without confirmed influenza A infection, as well as of patients with suspected SARS-CoV-2 (severe acute respiratory syndrome coronavirus type 2) infection, was analyzed. Both influenza-infected and SARS-CoV-2-infected patients could be distinguished from each other and from non-infected patients by MCC-IMS analysis of exhaled breath.
In summary, MCC-IMS yields encouraging results for the rapid, non-invasive detection of infections both in vitro and in vivo.
The aim of this work is to develop an Industry 4.0 maturity index for manufacturing companies (SMEs and mid-sized companies) with discrete production. The motivation for this work arose from the hesitation of many companies, especially SMEs and mid-sized companies, in their transformation towards Industry 4.0. A market study showed that 86 percent of the surveyed manufacturing companies had not found an Industry 4.0 maturity model suited to their company with which they could assess their status quo and derive measures towards a higher degree of maturity. The evaluation of existing maturity models revealed deficits regarding their coverage of Industry 4.0, their consideration of the socio-technical dimensions of people, technology and organization, and their treatment of management and corporate culture. Based on current Industry 4.0 technologies and fields of action, a new, modular Industry 4.0 maturity model was developed that rests on a holistic view of all socio-technical dimensions - people, technology and organization - and their interfaces. In addition to the Overall Industry 4.0 Maturity Index (OI4MI), the model determines four further indices for assessing a company's Industry 4.0 maturity. The model was validated at a company and is now available as a template for research building on it.
Antibodies are used in a wide variety of fields, for therapeutic as well as diagnostic and research purposes. Before an antibody can be used, its properties must be characterized with respect to its epitope and its binding behavior towards the paratope. At the same time, depending on the application, the antibody must be validated for its intended use. To this end, bead-based multiplex assay systems were designed, tested and established in this work, with the goal of developing a simple screening method that allows a large number of samples or analytes to be determined simultaneously. Three different approaches were established.
First, a phospho-PKA substrate antibody, which recognizes phosphorylated PKA binding motifs of the form RRxpS, was tested simultaneously against a series of peptides containing point mutations relative to the consensus sequence, in order to investigate the influence of individual amino acids on antibody binding. It could be shown in multiplex that differences in antibody binding behavior depending on the amino acid at the various P-positions were detectable. With the bead-based multiplex approach, binding kinetics could be recorded by measuring concentration series of the antibody and compared with already established methods.
Furthermore, various antibodies that constitute essential components of bead-based assay systems were validated. Several antibodies that specifically recognize THC and CBD were tested, a competitive assay for the detection of THC and CBD in human serum was then established, and the detection limits were determined.
In addition, sera from horses suffering from summer eczema (sweet itch) were to be analyzed for their IgE content. For this purpose, relevant proteins were produced recombinantly, immobilized on beads and incubated with serum in multiplex, in order to make the specific binding of IgE to the allergens measurable. For the overall validation of the assay system, all individual steps were first validated separately and then measured in a multiplexed screening.
The use of bead-based multiplex measurements as a platform technology facilitates the characterization of antibodies as well as their validation for different assay systems.
Data profiling is the extraction of metadata from relational databases. An important class of metadata are multi-column dependencies. They come associated with two computational tasks. The detection problem is to decide whether a dependency of a given type and size holds in a database. The discovery problem instead asks to enumerate all valid dependencies of that type. We investigate the two problems for three types of dependencies: unique column combinations (UCCs), functional dependencies (FDs), and inclusion dependencies (INDs).
We first treat the parameterized complexity of the detection variants. We prove that the detection of UCCs and FDs, respectively, is W[2]-complete when parameterized by the size of the dependency. The detection of INDs is shown to be one of the first natural W[3]-complete problems. We further settle the enumeration complexity of the three discovery problems by presenting parsimonious equivalences with well-known enumeration problems. Namely, the discovery of UCCs is equivalent to the famous transversal hypergraph problem of enumerating the minimal hitting sets of a hypergraph. The discovery of FDs is equivalent to the simultaneous enumeration of the hitting sets of multiple input hypergraphs. Finally, the discovery of INDs is shown to be equivalent to enumerating the satisfying assignments of antimonotone, 3-normalized Boolean formulas.
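A small sketch makes the UCC reduction concrete (illustrative brute force, not the algorithms developed in this thesis; the toy relation is invented): the difference sets of a relation, i.e. the sets of columns on which two records disagree, form a hypergraph whose minimal hitting sets are exactly the minimal unique column combinations.

```python
from itertools import combinations

def difference_sets(rows, columns):
    """Hyperedges: for each record pair, the columns on which the two records differ."""
    edges = set()
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            edges.add(frozenset(c for c in columns if rows[i][c] != rows[j][c]))
    return edges

def minimal_hitting_sets(edges, universe):
    """Brute-force enumeration of minimal hitting sets (exponential; illustration only)."""
    hits = [frozenset(s)
            for k in range(1, len(universe) + 1)
            for s in combinations(universe, k)
            if all(edge & frozenset(s) for edge in edges)]
    return [h for h in hits if not any(other < h for other in hits)]

columns = ["first", "last", "zip"]
rows = [
    {"first": "Ada",  "last": "Lovelace", "zip": "10115"},
    {"first": "Alan", "last": "Turing",   "zip": "10115"},
    {"first": "Ada",  "last": "Turing",   "zip": "14482"},
]
edges = difference_sets(rows, columns)
print(minimal_hitting_sets(edges, columns))  # the minimal UCCs of the toy relation
```

Every hyperedge must be "hit" because a column set that agrees on some record pair cannot uniquely identify records; conversely, hitting all difference sets guarantees uniqueness, which is the equivalence exploited above.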
In the remainder of the thesis, we design and analyze discovery algorithms for unique column combinations. Since this is as hard as the general transversal hypergraph problem, it is an open question whether the UCCs of a database can be computed in output-polynomial time in the worst case. For the analysis, we therefore focus on instances that are structurally close to real-world databases, most notably inputs that have small solutions. The equivalence between UCCs and hitting sets transfers the computational hardness, but also allows us to apply ideas from hypergraph theory to data profiling. We devise a discovery algorithm that runs in polynomial space on arbitrary inputs and achieves polynomial delay whenever the maximum size of any minimal UCC is bounded. Central to our approach is the extension problem for minimal hitting sets, that is, to decide for a set of vertices whether it is contained in any minimal solution. We prove that this is yet another problem that is complete for the complexity class W[3] when parameterized by the size of the set that is to be extended. We also give several conditional lower bounds under popular hardness conjectures such as the Strong Exponential Time Hypothesis (SETH). These lower bounds suggest that the running time of our algorithm for the extension problem is close to optimal.
We further conduct an empirical analysis of our discovery algorithm on real-world databases to confirm that the hitting set perspective on data profiling has merits also in practice. We show that the resulting enumeration times undercut their theoretical worst-case bounds on practical data, and that the memory consumption of our method is much smaller than that of previous solutions. During the analysis we make two observations about the connection between databases and their corresponding hypergraphs. On the one hand, the hypergraph representations containing all relevant information are usually significantly smaller than the original inputs. On the other hand, obtaining those hypergraphs is the actual bottleneck of any practical application. The latter often takes much longer than enumerating the solutions, which is in stark contrast to the fact that the preprocessing is guaranteed to be polynomial while the enumeration may take exponential time.
To make the first observation rigorous, we introduce a maximum-entropy model for non-uniform random hypergraphs and prove that their expected number of minimal hyperedges undergoes a phase transition with respect to the total number of edges. The result also explains why larger databases may have smaller hypergraphs. Motivated by the second observation, we present a new kind of UCC discovery algorithm called Hitting Set Enumeration with Partial Information and Validation (HPIValid). It utilizes the fast enumeration times in practice in order to speed up the computation of the corresponding hypergraph. This way, we sidestep the bottleneck while maintaining the advantages of the hitting set perspective. An exhaustive empirical evaluation shows that HPIValid outperforms the current state of the art in UCC discovery. It is capable of processing databases that were previously out of reach for data profiling.
This work offers insight into the practices of reaching understanding on city tours guided by (formerly) homeless people, tours which, in their self-understanding, aim at creating understanding, tolerance and recognition for people affected by homelessness. First, the discourse on slum tourism is introduced and, given the variety of its manifestations, slumming is defined as an organized encounter with social inequality. The central lines of this discourse and the moral positions woven into them are traced and, from the adopted perspective of the sociology of knowledge, reinterpreted as expressions of an inherently polycontextural practice. Slumming then appears as an organized encounter between forms of life that are foreign to one another to such a degree that immediate understanding seems unlikely and that, precisely for this reason, has to be negotiated on the basis of common-sense interpretations. Against this background, the present work examines how participants and guides reach a practical understanding about the experience of homelessness and what kind of understanding is thereby produced of homeless people, who are subjected to manifold stigmatizing attributions in public discourse. Of particular interest is with respect to which aspects of the experience of homelessness a shared understanding becomes possible and where it reaches its limits. To this end, the conversations on nine city tours with (formerly) homeless guides from different providers in German-speaking countries were transcribed and analyzed using the documentary method. The comparative examination of these practices of reaching understanding also opens up a differentiated perspective on the practices of recognition that are always already woven into processes of understanding. With regard to the moral debate about organized encounters with social inequality, this encourages an ethical perspective centered on questions of mediation.
Essays in labor economics
(2022)
This thesis offers insights into the process of workers' decisions to invest in work-related training. Specifically, the role of personality traits and attitudes is analysed. The aim is to understand whether such traits contribute to an under-investment in training. Importantly, general and specific training are distinguished, where the worker’s productivity increases in many firms in the former and only in the current firm in the latter case. Additionally, this thesis contributes to the evaluation of the German minimum wage introduction in 2015, identifying causal effects on wages and working hours.
Chapters two to four focus on the work-related training decision. First, individuals with an internal locus of control see a direct link between their own actions and their labor market success, while external individuals connect their outcomes to fate, luck, and other people. Consequently, it can be expected that internal individuals expect higher returns to training and are, thus, more willing to participate. The results support this hypothesis, with internal individuals being more likely to participate in general (but not specific) training. Second, training can be viewed either as a risky investment or as an insurance against negative labor income shocks. In both cases, risk attitudes are expected to play a role in the decision process. The data point towards risk-seeking individuals being more likely to participate in general (but not specific) training, and thus, training being viewed on average as a risky investment. Third, job satisfaction influences behavioral decisions in the job context, where dissatisfied workers may react by neglecting their duties, improving the situation or quitting the job. In the first case, dissatisfied workers are expected to invest less in training, while the latter two reactions could lead to higher participation rates amongst dissatisfied workers. The results suggest that on average dissatisfied workers are less likely to invest in training than satisfied workers. However, closer inspections of quit intentions and different sources of dissatisfaction paint less clear pictures, pointing towards the complexity of the job satisfaction construct.
Chapters five and six evaluate the introduction of the minimum wage in Germany in 2015. First, in 2015 an increase in the growth of hourly wages can be identified as a causal effect of the minimum wage introduction. However, at the same time, a reduction in weekly working hours results in an overall unchanged growth in monthly earnings. When considering the effects in 2016, the decrease in weekly working hours disappears, resulting in a significant increase in the growth of monthly earnings due to the minimum wage. Importantly, the analysis suggests that the increase in hourly wages was not sufficient to ensure that all workers received the minimum wage. This points to non-compliance being an issue in the first years after the minimum wage introduction.
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight on two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system’s erosional efficiency, which represents many factors like lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
The ongoing climate change is altering the living conditions for many organisms on this planet at an unprecedented pace. Hence, it is crucial for the survival of species to adapt to these changing conditions. In this dissertation, Silene vulgaris is used as a model organism to understand the adaptation strategies of widely distributed plant species to current climate change. Especially plant species with a wide geographic range are expected to have high phenotypic plasticity or to show genetic differentiation in response to the different climate conditions they grow in. However, they are often underrepresented in research.
In the greenhouse experiment presented in this thesis, I examined the phenotypic responses and plasticity in S. vulgaris to estimate its adaptation potential. Seeds from 25 wild European populations were collected along a latitudinal gradient and grown in a greenhouse under three different precipitation regimes (65 mm, 75 mm, 90 mm) and two different temperature regimes (18°C, 21°C) that resembled a possible climate change scenario for central Europe. Afterwards, different biomass- and fecundity-related plant traits were measured.
The treatments significantly influenced the plants but did not reveal a latitudinal difference in response to the climate treatments for most plant traits. The number of flowers per individual, however, showed a stronger plasticity in northern European populations (e.g., Swedish populations), where numbers decreased more drastically with increased temperature and decreased precipitation.
To gain an even deeper understanding of the adaptation of S. vulgaris to climate change, it is also important to reveal the underlying phylogeny of the sampled populations. Therefore, I analysed their population genetic structure through genome-wide sequencing via ddRAD.
The sequencing revealed three major genetic clusters in the S. vulgaris populations sampled in Europe: one cluster comprised southern European populations, one western European populations, and another central European populations. A subsequent analysis of experimental trait responses among the clusters to the climate-change scenario showed that the genetic clusters significantly differed in biomass-related traits and in the days to flowering. However, half of the traits showed parallel response patterns to the experimental climate-change scenario.
In addition to potential geographic and genetic differences in adaptation to climate change, this dissertation also deals with response differences between the sexes in S. vulgaris. As a gynodioecious species, S. vulgaris has populations consisting of female and hermaphrodite individuals, and the sexes can differ in their morphological traits, which is known as sexual dimorphism. As climate change is becoming an important factor influencing plant morphology, it remains unclear if and how the different sexes of sexually dimorphic species may respond. To examine this question, the sex of each individual plant was determined during the greenhouse experiment and the measured plant traits were analysed accordingly. In general, hermaphrodites had a higher number of flowers but a lower number of leaves than females. With regard to the climate change treatment, I found that hermaphrodites showed a milder negative response to higher temperatures than females in the number of flowers produced and in specific leaf area (SLA).
Synthesis – The significant treatment response in Silene vulgaris, independent of population origin for most traits, suggests a high degree of universal phenotypic plasticity. Also, the three European intraspecific genetic lineages detected showed comparable parallel response patterns in half of the traits, suggesting considerable phenotypic plasticity. Hence, plasticity might represent a possible adaptation strategy of this widely distributed species during ongoing and future climatic changes. The results on sexual dimorphism show that females and hermaphrodites differ mainly in their number of flowers and that females are affected more strongly by the experimental climate-change scenario. These results provide a solid knowledge base on sexual dimorphism in S. vulgaris under climate change, but further research is needed to determine the long-term impact on the breeding system of the species.
In summary, this dissertation provides comprehensive insight into the adaptation mechanisms and their consequences for a widely distributed, gynodioecious plant species and advances our understanding of the impact of anthropogenic climate change on plants.
Polyglot programming allows developers to use multiple programming languages within the same software project. While it is common to use more than one language in certain programming domains, developers also apply polyglot programming for other purposes such as to re-use software written in other languages. Although established approaches to polyglot programming come with significant limitations, for example, in terms of performance and tool support, developers still use them to be able to combine languages.
Polyglot virtual machines (VMs) such as GraalVM provide a new level of polyglot programming, allowing languages to directly interact with each other. This reduces the amount of glue code needed to combine languages, results in better performance, and enables tools such as debuggers to work across languages. However, little research has focused on novel tools that are designed to support developers in building software with polyglot VMs. One reason is that tool-building is often an expensive activity; another is that polyglot VMs are still a moving target, as their use cases and requirements are not yet well understood.
In this thesis, we present an approach that builds on existing self-sustaining programming systems such as Squeak/Smalltalk to enable exploratory programming, a practice for exploring and gathering software requirements, and re-use their extensive tool-building capabilities in the context of polyglot VMs. Based on TruffleSqueak, our implementation for the GraalVM, we further present five case studies that demonstrate how our approach helps tool developers to design and build tools for polyglot programming. We further show that TruffleSqueak can also be used by application developers to build and evolve polyglot applications at run-time and by language and runtime developers to understand the dynamic behavior of GraalVM languages and internals. Since our platform allows all these developers to apply polyglot programming, it can further help to better understand the advantages, use cases, requirements, and challenges of polyglot VMs. Moreover, we demonstrate that our approach can also be applied to other polyglot VMs and that insights gained through it are transferable to other programming systems.
We conclude that our research on tools for polyglot programming is an important step toward making polyglot VMs more approachable for developers in practice. With good tool support, we believe polyglot VMs can make it much more common for developers to take advantage of multiple languages and their ecosystems when building software.
The estimation of financial losses is an integral part of flood risk assessment. The application of existing flood loss models to locations or events different from the ones used to train the models has led to low performance, showing that characteristics of the flood damaging process have not yet been sufficiently well represented. To improve flood loss model transferability, I explore various model structures aiming at incorporating different (inland water) flood types and pathways. The analysis is based on a large survey dataset of approximately 6000 flood-affected households, which addresses several aspects of the flood event: not only the hazard characteristics but also information on the affected building, socioeconomic factors, the household's preparedness level, early warning, and impacts. Moreover, the dataset reports the coincidence of different flood pathways. Whilst flood types are a classification of flood events reflecting their generating process (e.g. fluvial, pluvial), flood pathways represent the route the water takes to reach the receptors (e.g. buildings). In this work, the following flood pathways are considered: levee breaches, river floods, surface water floods, and groundwater floods.
The coincidence of several hazard processes at the same time and place characterises a compound event. In fact, many flood events develop through several pathways, such as the ones addressed in the survey dataset used. Earlier loss models, although developed with one or multiple predictor variables, commonly use loss data from a single flood event which is attributed to a single flood type, disregarding specific flood pathways or the coincidence of multiple pathways. This gap is addressed by this thesis through the following research questions: 1. In which aspects do flood pathways of the same (compound inland) flood event differ? 2. How much do factors which contribute to the overall flood loss in a building differ in various settings, specifically across different flood pathways? 3. How well can Bayesian loss models learn from different settings? 4. Do compound, that is, coinciding flood pathways result in higher losses than a single pathway, and what does the outcome imply for future loss modelling?
Statistical analysis has found that households affected by different flood pathways also show, in general, differing characteristics of the affected building, preparedness, and early warning, besides the hazard characteristics. Forecasting and early warning capabilities and the preparedness of the population are dominated by the general flood type, but characteristics of the hazard at the object-level, the impacts, and the recovery are more related to specific flood pathways, indicating that risk communication and loss models could benefit from the inclusion of flood-pathway-specific information.
For the development of the loss model, several potentially relevant predictors are analysed: water depth, duration, velocity, contamination, early warning lead time, perceived knowledge about self-protection, warning information, warning source, gap between warning and action, emergency measures, implementation of property-level precautionary measures (PLPMs), perceived efficacy of PLPMs, previous flood experience, awareness of flood risk, ownership, building type, number of flats, building quality, building value, house/flat area, building area, cellar, age, household size, number of children, number of elderly residents, income class, socioeconomic status, and insurance against floods. After a variable selection, descriptors of the hazard, building, and preparedness were deemed significant, namely: water depth, contamination, duration, velocity, building area, building quality, cellar, PLPMs, perceived efficacy of PLPMs, emergency measures, insurance, and previous flood experience. The inclusion of the indicators of preparedness is relevant, as they are rarely involved in loss datasets and in loss modelling, although previous studies have shown their potential in reducing losses. In addition, the linear model fit indicates that the explanatory factors are, in several cases, differently relevant across flood pathways.
Next, Bayesian multilevel models were trained, which intrinsically incorporate uncertainties and allow for partial pooling (i.e. different groups of data, such as households affected by different flood pathways, can learn from each other), increasing the statistical power of the model. A new variable selection was performed for this new model approach, reducing the number of predictors from twelve to seven variables but keeping factors of the hazard, building, and preparedness, namely: water depth, contamination, duration, building area, PLPMs, insurance, and previous flood experience. The new model was trained not only across flood pathways but also across regions of Germany, divided according to general socioeconomic factors and insurance policies, and across flood events. The distinction across regions and flood events did not improve loss modelling and led to a large overlap of regression coefficients, with no clear trend or pattern. The distinction of flood pathways showed credibly distinct regression coefficients, leading to a better understanding of flood loss modelling and indicating one potential reason why model transferability has been challenging.
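As an illustration of the partial-pooling idea described above, the following minimal sketch fits a Bayesian multilevel model in which a single predictor coefficient (water depth) varies by flood pathway but is drawn from a shared population distribution. It uses PyMC and synthetic toy data; the predictors, likelihood, and priors of the thesis model differ, so this only shows the pooling structure, not the author's model.

```python
import numpy as np
import pymc as pm

# Synthetic toy data (hypothetical): relative loss per household, an index of
# the affected flood pathway, and the water depth in metres
rng = np.random.default_rng(0)
n, n_pathways = 200, 4
pathway = rng.integers(0, n_pathways, size=n)
water_depth = rng.gamma(2.0, 0.5, size=n)
loss = np.clip(0.1 * water_depth + rng.normal(0.0, 0.05, size=n), 0.0, 1.0)

with pm.Model() as model:
    # Population-level mean and spread of the depth effect (shared across pathways)
    mu_beta = pm.Normal("mu_beta", 0.0, 1.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 1.0)
    # Pathway-specific coefficients drawn from the shared distribution (partial pooling)
    beta = pm.Normal("beta", mu_beta, sigma_beta, shape=n_pathways)
    alpha = pm.Normal("alpha", 0.0, 1.0, shape=n_pathways)
    sigma = pm.HalfNormal("sigma", 0.5)

    mu = alpha[pathway] + beta[pathway] * water_depth
    pm.Normal("obs", mu=mu, sigma=sigma, observed=loss)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=1)
```

Partial pooling lets pathways with few observations borrow strength from the population-level distribution, which is the property exploited when training across flood pathways, regions, and events.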
Finally, new model structures were trained to include the possibility of compound inland floods (i.e. when multiple flood pathways coincide on the same affected asset). The dataset does not allow for verifying in which sequence the flood pathway waves occurred, and the predictor variables reflect only their mixed or combined outcome. Thus, two Bayesian models were trained: 1. a multi-membership model, a structure which learns the regression coefficients for multiple flood pathways at the same time, and 2. a multilevel model wherein each combination of coinciding flood pathways forms its own category. The multi-membership model resulted in credibly different coefficients across flood pathways but did not improve model performance in comparison to the model assuming only a single dominant flood pathway. The model with combined categories signals an increase in impacts after compound floods, but due to the uncertainty in model coefficients and estimates, it is not possible to ascertain such an increase as credible. That is, with the current level of uncertainty in differentiating the flood pathways, the loss estimates are not credibly distinct from individual flood pathways.
To overcome the challenges faced, non-linear or mixed models could be explored in the future. Interactions, moderation, and mediation effects, as well as non-linear effects, should also be further studied. Loss data collection should regularly include preparedness indicators, and either data collection or hydraulic modelling should focus on the distinction of coinciding flood pathways, which could inform loss models and further improve estimates. Flood pathways show distinct (financial) impacts, and their inclusion in loss modelling proves relevant, for it helps clarify the differing contributions of influencing factors to the final loss, improves understanding of the damaging process, and indicates future lines of research.
Extending synchrotron X-ray refraction techniques to the quantitative analysis of metallic materials
(2022)
In this work, two X-ray refraction based imaging methods, namely synchrotron X-ray refraction radiography (SXRR) and synchrotron X-ray refraction computed tomography (SXRCT), are applied to quantitatively analyze cracks and porosity in metallic materials. SXRR and SXRCT make use of the refraction of X-rays at inner surfaces of the material, e.g., the surfaces of cracks and pores, for image contrast. Both methods are, therefore, sensitive to smaller defects than their absorption-based counterparts, X-ray radiography and computed tomography. They can detect defects of nanometric size. So far, the methods have been applied to the analysis of ceramic materials and fiber-reinforced plastics. The analysis of metallic materials requires higher photon energies to achieve sufficient X-ray transmission due to their higher density. This causes smaller refraction angles and, thus, lower image contrast because the refraction index depends on the photon energy. Here, for the first time, a conclusive study is presented exploring the possibility of applying SXRR and SXRCT to metallic materials. It is shown that both methods can be optimized to overcome the reduced contrast due to smaller refraction angles. Hence, the only remaining limitation is the achievable X-ray transmission, which is common to all X-ray imaging methods. Further, a model for the quantitative analysis of the inner surfaces is presented and verified.
For this purpose, four case studies are conducted, each posing a specific challenge to the imaging task. Case study A investigates cracks in a coupon taken from an aluminum weld seam. This case study primarily serves to verify the model for quantitative analysis and prove the sensitivity to sub-resolution features. In case study B, the damage evolution in an aluminum-based particle-reinforced metal-matrix composite is analyzed. Here, the accuracy and repeatability of subsequent SXRR measurements are investigated, showing that measurement errors of less than 3 % can be achieved. Further, case study B marks the first application of SXRR in combination with in-situ tensile loading. Case study C comes from the highly topical field of additive manufacturing. Here, porosity in additively manufactured Ti-Al6-V4 is analyzed with a special interest in the pore morphology. A classification scheme based on SXRR measurements is devised which allows binding defects to be distinguished from keyhole pores even if the defects cannot be spatially resolved. In case study D, SXRCT is applied to the analysis of hydrogen-assisted cracking in steel. Due to the high X-ray attenuation of steel, a comparatively high photon energy of 50 keV is required here. This causes increased noise and lower contrast in the data compared to the other case studies. However, despite the lower data quality, a quantitative analysis of the occurrence of cracks as a function of hydrogen content and applied mechanical load is possible.
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to employer, school or university impacts the place where we live and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
In plant cells, subcellular transport of cargo proteins relies to a large extent on post-Golgi transport pathways, many of which are mediated by clathrin-coated vesicles (CCVs). Vesicle formation is facilitated by different factors like accessory proteins and adaptor protein complexes (APs), the latter serving as a bridge between cargo proteins and the coat protein clathrin. One type of accessory protein is defined by a conserved EPSIN N-TERMINAL HOMOLOGY (ENTH) domain and interacts with APs and clathrin via motifs in the C-terminal part. In Arabidopsis thaliana, there are three closely related ENTH domain proteins (EPSIN1, 2 and 3) and one highly conserved but phylogenetically distant outlier, termed MODIFIED TRANSPORT TO THE VACUOLE1 (MTV1). In the case of the trans-Golgi network (TGN)-located MTV1, clathrin association and a role in vacuolar transport have been shown previously (Sauer et al. 2013). In contrast, only limited functional and localization data were available for EPSIN1 and EPSIN2, and EPSIN3 remained completely uncharacterized prior to this study (Song et al. 2006; Lee et al. 2007). The molecular details of ENTH domain proteins in plants are still unknown. In order to systematically characterize all four ENTH proteins in planta, we first investigated expression and subcellular localization by analysis of stable reporter lines under their endogenous promoters. Although all four genes are ubiquitously expressed, their subcellular distribution differs markedly. EPSIN1 and MTV1 are located at the TGN, whereas EPSIN2 and EPSIN3 are associated with the plasma membrane (PM) and the cell plate. To examine potential functional redundancy, we isolated knockout T-DNA mutant lines and created all higher-order mutant combinations. The clearest evidence for functional redundancy was observed in the epsin1 mtv1 double mutant, which is a dwarf displaying overall growth reduction. These findings are in line with the TGN localization of both MTV1 and EPSIN1. In contrast, loss of EPSIN2 and EPSIN3 does not result in a growth phenotype compared to the wild type; however, a triple knockout of EPSIN1, EPSIN2 and EPSIN3 results in partially sterile plants. We focused mainly on the epsin1 mtv1 double mutant and addressed the functional role of these two genes in clathrin-mediated vesicle transport by comprehensive molecular, biochemical, and genetic analyses. Our results demonstrate that EPSIN1 and MTV1 promote vacuolar transport and secretion of a subset of cargo. However, they do not seem to be involved in endocytosis and recycling. Importantly, employing high-resolution imaging as well as genetic and biochemical experiments probing the relationship with the AP complexes, we found that EPSIN1/AP1 and MTV1/AP4 define two spatially and molecularly distinct subdomains of the TGN. The AP4 complex is essential for MTV1 recruitment to the TGN, whereas EPSIN1 is independent of AP4 but presumably acts in an AP1-dependent framework. Our findings suggest that this ENTH/AP pairing preference is conserved between animals and plants.
Functional traits determine biomass dynamics, coexistence and energetics in plankton food webs
(2022)
Plankton food webs are the basis of marine and limnetic ecosystems. Especially aquatic ecosystems of high biodiversity provide important ecosystem services for humankind as providers of food, coastal protection, climate regulation, and tourism. Understanding the dynamics of biomass and coexistence in these food webs is a first step to understanding the ecosystems. It also lays the foundation for the development of management strategies for the maintenance of the marine and freshwater biodiversity despite anthropogenic influences.
Natural food webs are highly complex, and thus often equally complex methods are needed to analyse and understand them well. Models can help to do so as they depict simplified parts of reality. In the attempt to get a broader understanding of the complex food webs, diverse methods are used to investigate different questions.
In my first project, we compared the energetics of a food chain in two versions of an allometric trophic network model. In particular, we solved the problem of unrealistically high trophic transfer efficiencies (up to 70%) by accounting for both basal respiration and activity respiration, which decreased the trophic transfer efficiency to realistic values of ≤30%. Next, in my second project, I turned to plankton food webs and especially phytoplankton traits. Investigating a long-term data set from Lake Constance, we found evidence for a trade-off between defence and growth rate in this natural phytoplankton community. I continued working with this data set in my third project, focusing on ciliates, the main grazers of phytoplankton in spring. Boosted regression trees revealed that temperature and predators have the highest influence on net growth rates of ciliates. Finally, in my fourth project, we investigated a food web model inspired by ciliates to explore the coexistence of plastic competitors and to study the new concept of maladaptive switching, which revealed some drawbacks of plasticity: faster adaptation led to higher maladaptive switching towards undefended phenotypes, which reduced autotroph biomass and coexistence and increased consumer biomass.
It became obvious that even well-established models should be critically questioned as it is important not to forget reality on the way to a simplistic model. The results showed furthermore that long-term data sets are necessary as they can help to disentangle complex natural processes. Last, one should keep in mind that the interplay between models and experiments/ field data can deliver fruitful insights about our complex world.
Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk with any of these usage types comes from the contact with groundwater or surface water. It causes an uncontrolled dissolution of salt rock, which in the worst case can result in the flooding or collapse of underground facilities. Especially along potash seams, cavernous structures can spread quickly, because potash salts show a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. Therefore, the objective of the present thesis is to describe the evolution of cavernous structures along potash seams in space and time in order to improve hazard mitigation during the utilization of salt deposits.
The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous sponge like structure. With the saturation of the solution increasing, various secondary minerals are formed, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where solution is entering. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called ‘interchange’. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate and mineralogy are systematically investigated, taking also heterogeneous potash seams into account. The results show that basically four different cases can be distinguished, with mixed forms being a frequent occurrence in nature. The classification scheme is based on the dimensionless numbers Péclet and Damköhler, and allows for a first assessment of the hazard potential. In future, the model can be applied to any field case, using measurement data for calibration.
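For orientation on the classification mentioned above: the Péclet number is commonly defined as the ratio of advective to diffusive transport, and the (first) Damköhler number as the ratio of reaction rate to advective transport rate. The short sketch below computes both for hypothetical parameter values; the exact definitions and characteristic scales used in the thesis may differ.

```python
def peclet(velocity, length, diffusivity):
    """Pe = advective / diffusive transport rate (standard definition)."""
    return velocity * length / diffusivity

def damkoehler(reaction_rate, length, velocity):
    """Da (first kind) = reaction rate / advective transport rate."""
    return reaction_rate * length / velocity

# Hypothetical values: 1e-5 m/s flow, 1 m characteristic cavern scale,
# 1e-9 m^2/s diffusivity, 1e-4 1/s effective dissolution rate
print(peclet(1e-5, 1.0, 1e-9))      # ~1e4: advection-dominated transport
print(damkoehler(1e-4, 1.0, 1e-5))  # ~10: reaction fast relative to transport
```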
The presented research work provides a reactive transport model that is able to spatially and temporally characterize the propagation of cavernous structures along potash seams for the first time. Furthermore, it allows to determine thickness and composition of transition zones between cavern center and unaffected salt rock. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding can thus be reduced. The models may also contribute to an improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions regarding the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as integrity or stability loss, as well as of suitable mitigation measures.
Giros Topográficos
(2022)
Giros topográficos explores the symbolic production of space in a series of narrative texts published in Latin America since the turn of the millennium. Taking up the theoretical propositions of the spatial turn and of geocriticism, the study approaches literary topographies from four angles that exceed and transform territorial and national boundaries: dynamics of media hyperconnectivity and accelerated mobility; affective genealogies; urban ecologies; and representations of alterity.
Drawing on the analysis of works by Lina Meruane, Guillermo Fadanelli, Andrés Neuman, Andrea Jeftanovic, Sergio Chejfech and Bernardo Carvalho, among others, the book identifies the flows, ambiguities and tensions projected by the new imagined communities of the twenty-first century. In doing so, the essay seeks to contribute to rethinking the status of Latin American literature in the context of its advanced globalization and the consequent consolidation of translocalized spaces of enunciation.
Global heat adaptation among urban populations and its evolution under different climate futures
(2022)
Heat and increasing ambient temperatures under climate change represent a serious threat to human health in cities. Heat exposure has been studied extensively at a global scale. Studies comparing a defined temperature threshold with future daytime temperatures over a certain period have concluded that the threat to human health will increase. Such findings, however, do not explicitly account for possible changes in future human heat adaptation and might even overestimate heat exposure. Thus, heat adaptation and its development are still unclear. Human heat adaptation refers to the local temperature to which populations are adjusted. It can be inferred from the lowest point of the U- or V-shaped heat-mortality relationship (HMR), the Minimum Mortality Temperature (MMT). While epidemiological studies inform on the MMT at the city scale for case studies, a general model applicable at the global scale to infer temporal change in MMTs has not yet been realised. The conventional approach depends on data availability and robustness, and on access to daily mortality records at the city scale. A thorough analysis, however, must account for future changes in the MMT, as heat adaptation happens partially passively. Human heat adaptation consists of two aspects: (1) the intensity of the heat hazard that is still tolerated by human populations, meaning the heat burden they can bear, and (2) the wealth-induced technological, social and behavioural measures that can be employed to avoid heat exposure. The objective of this thesis is to investigate and quantify human heat adaptation among urban populations at a global scale under the current climate and to project future adaptation under climate change until the end of the century. To date, this has not yet been accomplished. The evaluation of global heat adaptation among urban populations and its evolution under climate change comprises three levels of analysis. First, using the example of Germany, the MMT is calculated at the city level by applying the conventional method. Second, this thesis compiles a data pool of 400 urban MMTs to develop and train a new model capable of estimating MMTs on the basis of physical and socio-economic city characteristics using multivariate non-linear regression. The MMT is successfully described as a function of the current climate, the topography and the socio-economic standard, independently of daily mortality data, for cities around the world. The city-specific MMT estimates represent a measure of human heat adaptation among the urban population. In the third and final analysis, the model to derive human heat adaptation was adjusted to be driven by projected climate and socio-economic variables for the future. This allowed for estimation of the MMT and its change for 3 820 cities worldwide for different combinations of climate trajectories and socio-economic pathways until 2100. Knowledge of the future evolution of heat adaptation is novel, as previous research has mostly addressed heat exposure and its future development. In this work, changes in heat adaptation and exposure were analysed jointly. The result was a wide range of possible health-related outcomes up to 2100, of which two scenarios with the highest socio-economic development but strongly opposing warming levels were highlighted for comparison.
Strong economic growth based on fossil fuel exploitation is associated with a high gain in heat adaptation, but, owing to severe climate change, it may not be able to compensate for the negative health effects of increased heat exposure in 30% to 40% of the cities investigated. Slightly weaker but sustainable growth brings moderate gains in heat adaptation, yet, owing to milder global warming, it results in lower heat exposure and in exposure reductions in 80% to 84% of the cities in terms of frequency (number of days exceeding the MMT) and intensity (magnitude of the MMT exceedance). Choosing a 2 °C-compatible development pathway by 2100 would therefore lower the risk of heat-related mortality at the end of the century. In summary, this thesis makes diverse and multidisciplinary contributions to a deeper understanding of human adaptation to heat under the current and the future climate. It is one of the first studies to carry out a systematic and statistical analysis of urban characteristics that are useful as MMT drivers in order to establish a generalised model of human heat adaptation applicable at the global level. A broad range of possible heat-related health outcomes for various future scenarios was shown for the first time. This work is of relevance for the assessment of heat-health impacts in regions where mortality data are inaccessible or missing. The results are useful for health care planning at the meso and macro levels, and for urban and climate change adaptation planning. Lastly, beyond having met the posed objective, this thesis advances research towards a global future impact assessment of heat on human health by providing an alternative method of MMT estimation that is spatially and temporally flexible in its application.
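As a generic illustration of the second analysis step, the sketch below fits a non-linear regressor that maps city-level characteristics to the MMT and then predicts the MMT of an unseen city without mortality records. The features, the synthetic data, and the random-forest learner are stand-ins chosen for brevity; they are not the predictors or the regression form used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one row per city with an epidemiologically
# derived MMT (deg C); the feature names are illustrative stand-ins only.
rng = np.random.default_rng(42)
n_cities = 400
mean_summer_temp = rng.uniform(15, 35, n_cities)        # deg C
elevation = rng.uniform(0, 2500, n_cities)               # m
gdp_per_capita = rng.uniform(1_000, 60_000, n_cities)    # USD
# Toy relationship: MMT tracks local climate plus a small wealth effect
mmt = (0.8 * mean_summer_temp - 0.001 * elevation
       + 1.5 * np.log10(gdp_per_capita) + rng.normal(0, 1, n_cities))

X = np.column_stack([mean_summer_temp, elevation, gdp_per_capita])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, mmt)

# Predict the MMT for an unseen city (no daily mortality records required)
print(model.predict([[28.0, 50.0, 15_000.0]]))
```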
The importance of carbohydrate structures is enormous due to their ubiquitousness in our lives. The development of so-called glycomaterials is the result of this tremendous significance. These are not exclusively used for research into fundamental biological processes, but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials involving the synthesis of glycoderivatives, -monomers and -polymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation to significantly shorten the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhorshetov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation was developed in this work, which also does not require protective groups. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis. Ribose-containing cytidine was transesterified using the lipase Novozym 435 and microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to isolate the monomer from its counterpart guanosine. After obtaining the nucleoside-based monomers, they were block copolymerized using the RAFT method. Pre-synthesized pHPMA served as macroCTA to yield cytidine- or guanosine-containing block copolymer. These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Biofilms are complex living materials that form as bacteria get embedded in a matrix of self-produced protein and polysaccharide fibres. The formation of a network of extracellular biopolymer fibres contributes to the cohesion of the biofilm by promoting cell-cell attachment and by mediating biofilm-substrate interactions. This sessile mode of bacterial growth has been well studied by microbiologists in order to prevent the detrimental effects of biofilms in medical and industrial settings. Indeed, biofilms are associated with increased antibiotic resistance in bacterial infections, and they can also cause clogging of pipelines or promote bio-corrosion. However, biofilms have also gained interest from biophysics due to their ability to form complex morphological patterns during growth. More recently, the emerging field of engineered living materials has been investigating biofilm mechanical properties at multiple length scales, leveraging the tools of synthetic biology to tune the functions of their constitutive biopolymers.
This doctoral thesis aims at clarifying how the morphogenesis of Escherichia coli (E. coli) biofilms is influenced by their growth dynamics and mechanical properties. To address this question, I used methods from cell mechanics and materials science. I first studied how biological activity in biofilms gives rise to non-uniform growth patterns. In a second study, I investigated how E. coli biofilm morphogenesis and its mechanical properties adapt to an environmental stimulus, namely the water content of their substrate. Finally, I estimated how the mechanical properties of E. coli biofilms are altered when the bacteria express different extracellular biopolymers.
On nutritive hydrogels, micron-sized E. coli cells can build centimetre-large biofilms. During this process, bacterial proliferation and matrix production introduce mechanical stresses in the biofilm, which are released through the formation of macroscopic wrinkles and delaminated buckles. To relate these biological and mechanical phenomena, I used time-lapse fluorescence imaging to track cell and matrix surface densities through the early and late stages of E. coli biofilm growth. Colocalization of high cell and matrix densities at the periphery precedes the onset of mechanical instabilities at this annular region. Early growth, analysed by adding fluorescent microspheres to the bacterial inoculum, is detected at this outer annulus. Only when high rates of matrix production are present in the biofilm centre does overall biofilm spreading initiate along the solid-air interface. By tracking larger fluorescent particles for a long time, I could distinguish several kinematic stages of E. coli biofilm expansion and observed a transition from non-linear to linear velocity profiles, which precedes the emergence of wrinkles at the biofilm periphery. Decomposing particle velocities into their radial and circumferential components revealed a final kinematic stage, in which biofilm movement is mostly directed towards the radial delaminated buckles, which verticalize. The resulting compressive strains computed in these regions were observed to substantially deform the underlying agar substrates. The co-localization of higher cell and matrix densities towards an annular region and the succession of several kinematic stages are thus expected to promote the emergence of mechanical instabilities at the biofilm periphery. These experimental findings are predicted to advance future modelling approaches of biofilm morphogenesis.
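The decomposition of particle velocities into radial and circumferential components mentioned above amounts to projecting each tracked velocity vector onto the local radial direction and its in-plane normal. A minimal numpy sketch with hypothetical positions and velocities (taken relative to the biofilm centre) could look as follows; it only illustrates the projection, not the particle-tracking pipeline used in the thesis.

```python
import numpy as np

# Hypothetical particle positions (relative to the biofilm centre) and velocities
pos = np.array([[120.0, 35.0], [-80.0, 60.0], [10.0, -200.0]])   # micrometres
vel = np.array([[0.5, 0.1], [-0.2, 0.4], [0.0, -0.6]])           # micrometres per hour

r = np.linalg.norm(pos, axis=1, keepdims=True)
e_r = pos / r                                    # radial unit vectors
e_t = np.stack([-e_r[:, 1], e_r[:, 0]], axis=1)  # circumferential (90 deg rotation)

v_radial = np.sum(vel * e_r, axis=1)             # positive = outward expansion
v_circumferential = np.sum(vel * e_t, axis=1)     # in-plane swirl component

print(v_radial, v_circumferential)
```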
E. coli biofilm morphogenesis is further anticipated to depend on external stimuli from the environment. To clarify how the water could be used to tune biofilm material properties, we quantified E. coli biofilm growth, wrinkling dynamics and rigidity as a function of the water content of the nutritive substrates. Time-lapse microscopy and computational image analysis revealed that substrates with high water content promote biofilm spreading kinetics, while substrates with low water content promote biofilm wrinkling. The wrinkles observed on biofilm cross-sections appeared more bent on substrates with high water content, while they tended to be more vertical on substrates with low water content. Both wet and dry biomass, accumulated over 4 days of culture, were larger in biofilms cultured on substrates with high water content, despite extra porosity within the matrix layer. Finally, the micro-indentation analysis revealed that substrates with low water content supported the formation of stiffer biofilms. This study shows that E. coli biofilms respond to the water content of their substrate, which might be used for tuning their material properties in view of further applications.
Biofilm material properties further depend on the composition and structure of the matrix of extracellular proteins and polysaccharides. In particular, E. coli biofilms were suggested to present tissue-like elasticity due to a dense fibre network consisting of amyloid curli and phosphoethanolamine-modified cellulose. To understand the contribution of these components to the emergent mechanical properties of E. coli biofilms, we performed micro-indentation on biofilms grown from bacteria of several strains. Besides showing higher dry masses, larger spreading diameters and slightly reduced water contents, biofilms expressing both main matrix components also presented high rigidities in the range of several hundred kPa, similar to biofilms containing only curli fibres. In contrast, a lack of amyloid curli fibres provides much higher adhesive energies and more viscoelastic fluid-like material behaviour. Therefore, the combination of amyloid curli and phosphoethanolamine-modified cellulose fibres implies the formation of a composite material whereby the amyloid curli fibres provide rigidity to E. coli biofilms, whereas the phosphoethanolamine-modified cellulose rather acts as a glue. These findings motivate further studies involving purified versions of these protein and polysaccharide components to better understand how their interactions benefit biofilm functions.
All three studies depict different aspects of biofilm morphogenesis, which are interrelated. The first work reveals the correlation between non-uniform biological activities and the emergence of mechanical instabilities in the biofilm. The second work acknowledges the adaptive nature of E. coli biofilm morphogenesis and its mechanical properties to an environmental stimulus, namely water. Finally, the last study reveals the complementary role of the individual matrix components in the formation of a stable biofilm material, which not only forms complex morphologies but also functions as a protective shield for the bacteria it contains. Our experimental findings on E. coli biofilm morphogenesis and their mechanical properties can have further implications for fundamental and applied biofilm research fields.
Heimat
(2022)
This study proposes a transareal examination of the autofictional series of the Austrian writer Thomas Bernhard and the Colombian Fernando Vallejo, two authors whose work is characterized by harsh criticism of their countries of origin, their Heimaten, but also by a complex rootedness in them. The interpretive analyses show that in Die Autobiographie and El río del tiempo, Heimat is presented as a construct that encompasses not only happy elements but also negative, dissolving, destructive ones, whereby both authors distance themselves from a traditional conception of Heimat as a necessarily harmonious territory to which the subject feels positively bound. Instead, it is conceived as a dissimilar whole to which the subject necessarily relates in an ambivalent and problematic way. For both authors, literary narration takes shape as an act in which this ambivalence is not simply represented but in which, above all, the forms of hostility that give Heimat its inhospitable character are contested. To this end, both authors draw on two fundamental devices: mimesis and movement. The study shows how, in the works examined, Heimat appears as a space of continuous movement, exchange and interaction, in which mechanisms of oppression operate, but also devices of opposition, practices of intersubjective openness and aspirations towards communal integration.
While estimated numbers of past and future climate migrants are alarming, the growing empirical evidence suggests that the association between adverse climate-related events and migration is not universally positive. This dissertation seeks to advance our understanding of when and how climate migration emerges by analyzing heterogeneous climatic influences on migration in low- and middle-income countries. To this end, it draws on established economic theories of migration, datasets from physical and social sciences, causal inference techniques and approaches from systematic literature review. In three of its five chapters, I estimate causal effects of processes of climate change on inequality and migration in India and Sub-Saharan Africa. By employing interaction terms and by analyzing sub-samples of data, I explore how these relationships differ for various segments of the population. In the remaining two chapters, I present two systematic literature reviews. First, I undertake a comprehensive meta-regression analysis of the econometric climate migration literature to summarize general climate migration patterns and explain the conflicting findings. Second, motivated by the broad range of approaches in the field, I examine the literature from a methodological perspective to provide best practice guidelines for studying climate migration empirically. Overall, the evidence from this dissertation shows that climatic influences on human migration are highly heterogeneous. Whether adverse climate-related impacts materialize in migration depends on the socio-economic characteristics of the individual households, such as wealth, level of education, agricultural dependence or access to adaptation technologies and insurance. For instance, I show that while adverse climatic shocks are generally associated with an increase in migration in rural India, they reduce migration in the agricultural context of Sub-Saharan Africa, where the average wealth levels are much lower so that households largely cannot afford the upfront costs of moving. I find that unlike local climatic shocks which primarily enhance internal migration to cities and hence accelerate urbanization, shocks transmitted via agricultural producer prices increase migration to neighboring countries, likely due to the simultaneous decrease in real income in nearby urban areas. These findings advance our current understanding by showing when and how economic agents respond to climatic events, thus providing explicit contexts and mechanisms of climate change effects on migration in the future. The resulting collection of findings can guide policy interventions to avoid or mitigate any present and future welfare losses from climate change-related migration choices.
Flares are magnetically driven explosions that occur in the atmospheres of all main sequence stars that possess an outer convection zone. Flaring activity is rooted in the magnetic dynamo that operates deep in the stellar interior, propagates through all layers of the atmosphere from the corona to the photosphere, and emits electromagnetic radiation from radio bands to X-ray. Eventually, this radiation, and associated eruptions of energetic particles, are ejected out into interplanetary space, where they impact planetary atmospheres, and dominate the space weather environments of young star-planet systems.
Thanks to the Kepler and the Transit Exoplanet Survey Satellite (TESS) missions, flare observations have become accessible for millions of stars and star-planet systems. The goal of this thesis is to use these flares as multifaceted messengers to understand stellar magnetism across the main sequence, investigate planetary habitability, and explore how close-in planets can affect the host star.
Using space-based observations obtained by the Kepler/K2 mission, I found that flaring activity declines with stellar age, but this decline crucially depends on stellar mass and rotation. I calibrated the age of the stars in my sample using their membership in open clusters from the zero-age main sequence to solar age. This allowed me to reveal the rapid transition from an active, saturated flaring state to a more quiescent, inactive flaring behavior in early M dwarfs at about 600-800 Myr. This result is an important observational constraint on stellar activity evolution that I was able to de-bias using open clusters as an activity-independent age indicator.
The TESS mission quickly superseded Kepler and K2 as the main source of flares in low-mass M dwarfs. Using TESS 2-minute cadence light curves, I developed a new technique for flare localization and discovered, against the commonly held belief, that flares do not occur uniformly across the stellar surface: in fast-rotating, fully convective stars, giant flares are preferentially located at high latitudes. This bears implications both for our understanding of magnetic field emergence in these stars and for the impact on exoplanet atmospheres: a planet that orbits in the equatorial plane of its host may be spared from the destructive effects of these poleward-emitting flares.
AU Mic is an early M dwarf, and the most actively flaring planet host detected to date. Its innermost companion, AU Mic b, is one of the most promising targets for a first observation of flaring star-planet interactions. In these interactions, the planet influences the star, as opposed to space weather, where the planet is always on the receiving side. The effect reflects the properties of the magnetosphere shared by planet and star, as well as the so far inaccessible magnetic properties of planets. In the roughly 50 days of TESS monitoring data of AU Mic, I searched for statistically robust signs of flaring interactions with AU Mic b, that is, flares that occur in excess of the star's intrinsic activity. I found the strongest, yet still marginal, signal in recurring excess flaring in phase with the orbital period of AU Mic b. If it reflects a true signal, I estimate that extending the observing time by a factor of 2-3 will yield a statistically significant detection. Well within the reach of future TESS observations, these additional data may bring us closer to robustly detecting this effect than we have ever been.
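The idea of searching for excess flaring in phase with the planet's orbit can be illustrated by folding flare times on the orbital period and testing the phase distribution against uniformity. The sketch below uses invented flare times and an assumed period of about 8.46 d for AU Mic b; the statistical treatment in the thesis is considerably more careful (for example, it must account for the star's intrinsic flare rate and observing gaps).

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical flare peak times (days) and an assumed orbital period (days)
flare_times = np.array([1.3, 2.9, 4.1, 6.2, 8.5, 9.1, 11.8, 13.4, 15.0, 16.6])
period = 8.46   # roughly the orbital period of AU Mic b (assumed here)

phases = (flare_times % period) / period          # orbital phase in [0, 1)
counts, _ = np.histogram(phases, bins=5, range=(0.0, 1.0))

# Chi-square test against a uniform distribution of flares over phase;
# a small p-value would hint at phase-dependent (excess) flaring.
stat, pvalue = chisquare(counts)
print(counts, pvalue)
```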
This thesis demonstrates the immense scientific value of space-based, long-baseline flare monitoring, and the versatility of flares as a carrier of information about the magnetism of star-planet systems. Many discoveries still lie in wait in the vast archives that Kepler and TESS have produced over the years. Flares are intense spotlights into the magnetic structures in star-planet systems that are otherwise far below our resolution limits. The ongoing TESS mission, and soon PLATO, will further open the door to an in-depth understanding of small-scale and dynamic magnetic fields on low-mass stars, and the space weather environment they effect.
The availability of commercial 3D printers and matching 3D design software has allowed a wide range of users to create physical prototypes – as long as these objects are not larger than hand size. However, when attempting to create larger, "human-scale" objects, such as furniture, not only are these machines too small, but also the commonly used 3D design software is not equipped to design with forces in mind — since forces increase disproportionately with scale.
In this thesis, we present a series of end-to-end fabrication software systems that support users in creating human-scale objects. They achieve this by providing three main functions that regular "small-scale" 3D printing software does not offer: (1) subdivision of the object into small printable components combined with ready-made objects, (2) editing based on predefined elements sturdy enough for larger scale, i.e., trusses, and (3) functionality for analyzing, detecting, and fixing structural weaknesses. The presented software systems also assist the fabrication process based on either 3D printing or steel welding technology.
The presented systems focus on three levels of engineering challenges: (1) fabricating static load-bearing objects, (2) creating mechanisms that involve motion, such as kinematic installations, and finally (3) designing mechanisms with dynamic repetitive movement where power and energy play an important role.
We demonstrate and verify the versatility of our systems by building and testing human-scale prototypes, ranging from furniture pieces and pavilions to animatronic installations and playground equipment. We have also shared our systems with schools, fablabs, and fabrication enthusiasts, who have successfully created human-scale objects that can withstand human-scale forces.
Hydraulically driven fractures play a key role in subsurface energy technologies across several scales. By injecting fluid at high hydraulic pressure into rock with intrinsically low permeability, the in-situ stress field and fracture development patterns can be characterised and the rock permeability can be enhanced. Hydraulic fracturing is a standard commercial procedure in the petroleum industry for enhanced oil and gas production from low-permeability rock reservoirs. However, in the utilization of enhanced geothermal systems (EGS), a major geological concern is the unsolicited generation of earthquakes due to fault reactivation, referred to as induced seismicity, with a magnitude large enough to be felt on the surface or to damage facilities and buildings. Furthermore, reliable interpretation of hydraulic fracturing tests for stress measurement remains a great challenge for these energy technologies. Therefore, this cumulative doctoral thesis investigates the following research questions: (1) How do hydraulic fractures grow in hard rock at various scales? (2) Which parameters control hydraulic fracturing and hydro-mechanical coupling? (3) How can hydraulic fracturing in hard rock be modelled?
In the laboratory-scale study, several laboratory hydraulic fracturing experiments, performed on intact cubic Pocheon granite samples from South Korea with different injection protocols, are investigated numerically using Irazu2D. The goal of the laboratory experiments is to test the concept of cyclic soft stimulation, which may enable sustainable permeability enhancement (Publication 1).
In the borehole-scale study, hydraulic fracturing tests are reported that were performed in boreholes located in central Hungary to determine the in-situ stress for a geological site investigation. At a depth of about 540 m, the recorded pressure-versus-time curves in mica schist with low-dip-angle foliation show an atypical evolution. To explain this observation, a series of discrete element computations using Particle Flow Code 2D is performed (Publication 2).
In the reservoir-scale study, the hydro-mechanical behaviour of fractured crystalline rock during one of the five hydraulic stimulations at the Pohang Enhanced Geothermal site in South Korea is examined. The fluid pressure perturbation at faults several hundred meters in length during hydraulic stimulation is simulated using FracMan (Publication 3).
The doctoral research shows that the resulting hydraulic fracture geometry depends “locally”, i.e. at the length scale of the representative elementary volume (REV) and below (sub-REV), on the geometry and strength of natural fractures, and “globally”, i.e. at super-REV domain volumes, on the far-field stresses. Regarding hydro-mechanical coupling, it is suggested to define separate coupling relationships for the intact rock mass and for natural fractures. Furthermore, the relative importance of parameters affecting the magnitude of the formation breakdown pressure, a parameter characterising hydro-mechanical coupling, is defined. It can also be concluded that there is a clear gap between the capabilities of the simulation software and the complexity of the studied problems. Therefore, the computational time for simulating complex hydraulic fracture geometries must be reduced while maintaining high-fidelity results. This can be achieved either by extending the computational resources via parallelization techniques or by using time-scaling techniques. The ongoing development of the numerical models used focuses on tackling these methodological challenges.
Diabetes is hallmarked by high blood glucose levels, which cause progressive generalised vascular damage, leading to microvascular and macrovascular complications. Diabetes-related complications cause severe and prolonged morbidity and are a major cause of mortality among people with diabetes. Despite increasing attention to risk factors of type 2 diabetes, existing evidence is scarce or inconclusive regarding vascular complications and research investigating both micro- and macrovascular complications is lacking. This thesis aims to contribute to current knowledge by identifying risk factors – mainly related to lifestyle – of vascular complications, addressing methodological limitations of previous literature and providing comparative data between micro- and macrovascular complications.
To address this overall aim, three specific objectives were set. The first was to investigate the effects of diabetes complication burden and lifestyle-related risk factors on the incidence of (further) complications. Studies suggest that diabetes complications are interrelated. However, they have been studied mainly independently of individuals’ complication burden. A five-state time-to-event model was constructed to examine the longitudinal patterns of micro- (kidney disease, neuropathy and retinopathy) and macrovascular complications (myocardial infarction and stroke) and their association with the occurrence of subsequent complications. Applying the same model, the effect of modifiable lifestyle factors, assessed alone and in combination with complication load, on the incidence of diabetes complications was studied. The selected lifestyle factors were body mass index (BMI), waist circumference, smoking status, physical activity, and intake of coffee, red meat, whole grains, and alcohol. Analyses were conducted in a cohort of 1199 participants with incident type 2 diabetes from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam, who were free of vascular complications at diabetes diagnosis. During a median follow-up time of 11.6 years, 96 cases of macrovascular complications (myocardial infarction and stroke) and 383 microvascular complications (kidney disease, neuropathy and retinopathy) were identified. In multivariable-adjusted models, the occurrence of a microvascular complication was associated with a higher incidence of further micro- (Hazard ratio [HR] 1.90; 95% Confidence interval [CI] 0.90, 3.98) and macrovascular complications (HR 4.72; 95% CI 1.25, 17.68), compared with persons without a complication burden. In addition, participants who developed a macrovascular event had a twofold higher risk of future microvascular complications (HR 2.26; 95% CI 1.05, 4.86). The models were adjusted for age, sex, state duration, education, lifestyle, glucose-lowering medication, and pre-existing conditions of hypertension and dyslipidaemia. Smoking was positively associated with macrovascular disease, while an inverse association was observed with higher coffee intake. Whole grain and alcohol intake were inversely associated with microvascular complications, and a U-shaped association was observed for red meat intake. BMI and waist circumference were positively associated with microvascular events. The associations between lifestyle factors and incidence of complications were not modified by concurrent complication burden, except for red meat intake and smoking status, where the associations were attenuated among individuals with a previous complication.
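As a rough illustration of the kind of multivariable-adjusted time-to-event modelling summarized above (not the actual five-state multistate model of the thesis), the sketch below fits a Cox proportional-hazards model to synthetic data with the lifelines package; the covariates, effect sizes, and censoring time are invented for the example.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "bmi": rng.normal(29, 4, n),                      # hypothetical covariates
    "smoker": rng.integers(0, 2, n),
    "coffee_cups": rng.integers(0, 5, n),
})
# Synthetic follow-up times whose hazard increases with BMI and smoking,
# censored at an assumed administrative follow-up of 11.6 years.
risk = 0.05 * (df["bmi"] - 29) + 0.4 * df["smoker"]
df["time"] = rng.exponential(10 * np.exp(-risk))
df["event"] = (df["time"] < 11.6).astype(int)         # 1 = complication observed
df.loc[df["event"] == 0, "time"] = 11.6

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                                   # hazard ratios exp(coef) with 95% CIs
```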
The second objective was to perform an in-depth investigation of the association between BMI and BMI change and risk of micro- and macrovascular complications. There is an ongoing debate on the association between obesity and risk of macrovascular and microvascular outcomes in type 2 diabetes, with studies suggesting a protective effect among people with overweight or obesity. These findings, however, might be limited due to suboptimal control for smoking, pre-existing chronic disease, or short-follow-up. After additional exclusion of persons with cancer history at diabetes onset, the associations between pre-diagnosis BMI and relative annual change between pre- and post-diagnosis BMI and incidence of complications were evaluated in multivariable-adjusted Cox models. The analyses were adjusted for age, sex, education, smoking status and duration, physical activity, alcohol consumption, adherence to the Mediterranean diet, and family history of diabetes and cardiovascular disease (CVD). Among 1083 EPIC-Potsdam participants, 85 macrovascular and 347 microvascular complications were identified during a median follow-up period of 10.8 years. Higher pre-diagnosis BMI was associated with an increased risk of total microvascular complications (HR per 5 kg/m2 1.21; 95% CI 1.07, 1.36), kidney disease (HR 1.39; 95% CI 1.21, 1.60) and neuropathy (HR 1.12; 95% CI 0.96, 1.31); but no association was observed for macrovascular complications (HR 1.05; 95% CI 0.81, 1.36). Effect modification was not evident by sex, smoking status, or age groups. In analyses according to BMI change categories, BMI loss of more than 1% indicated a decreased risk of total microvascular complications (HR 0.62; 95% CI 0.47, 0.80), kidney disease (HR 0.57; 95% CI 0.40, 0.81) and neuropathy (HR 0.73; 95% CI 0.52, 1.03), compared with participants with a stable BMI. No clear association was observed for macrovascular complications (HR 1.04; 95% CI 0.62, 1.74). The impact of BMI gain on diabetes-related vascular disease was less evident. Associations were consistent across strata of age, sex, pre-diagnosis BMI, or medication but appeared stronger among never-smokers than current or former smokers.
The last objective was to evaluate whether individuals with a high-risk profile for diabetes and cardiovascular disease (CVD) also have a greater risk of complications. Within the EPIC-Potsdam study, two accurate prognostic tools were developed, the German Diabetes Risk Score (GDRS) and the CVD Risk Score (CVDRS), which predict the 5-year type 2 diabetes risk and 10-year CVD risk, respectively. Both scores provide a non-clinical and clinical version. Components of the risk scores include age, sex, waist circumference, prevalence of hypertension, family history of diabetes or CVD, lifestyle factors, and clinical factors (only in clinical versions). The association of the risk scores with diabetes complications and their discriminatory performance for complications were assessed. In crude Cox models, both versions of GDRS and CVDRS were positively associated with macrovascular complications and total microvascular complications, kidney disease and neuropathy. Higher GDRS was also associated with an elevated risk of retinopathy. The discrimination of the scores (clinical and non-clinical) was poor for all complications, with the C-index ranging from 0.58 to 0.66 for macrovascular complications and from 0.60 to 0.62 for microvascular complications.
In conclusion, this work illustrates that the risk of complication development among individuals with type 2 diabetes is related to the existing complication load, and attention should be given to regular monitoring for future complications. It underlines the importance of weight management and adherence to healthy lifestyle behaviours, including high intake of whole grains, moderation in red meat and alcohol consumption and avoidance of smoking to prevent major diabetes-associated complications, regardless of complication burden. Risk scores predictive for type 2 diabetes and CVD were related to elevated risks of complications. By optimising several lifestyle and clinical factors, the risk score can be improved and may assist in lowering complication risk.
Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and their hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock at a depth of 410 m at the Äspö Hard Rock Laboratory (Sweden).
I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1).
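The detection stage of such a workflow can be illustrated with a standard STA/LTA trigger, shown below on a synthetic acoustic emission trace using ObsPy. This is only a sketch of the common idea of flagging transient energy bursts, not the full-waveform detector developed in the thesis, and the sampling rate, window lengths, and thresholds are assumptions.

```python
import numpy as np
from obspy.signal.trigger import classic_sta_lta, trigger_onset

fs = 1_000_000                                        # assumed 1 MHz AE sampling rate
trace = np.random.randn(fs) * 0.01                    # one second of synthetic noise
trace[400_000:400_500] += np.hanning(500) * 0.5       # one injected AE burst

# Short-term / long-term average ratio with 0.1 ms and 1 ms windows
cft = classic_sta_lta(trace, int(0.0001 * fs), int(0.001 * fs))
onsets = trigger_onset(cft, 4.0, 1.5)                 # on/off thresholds on the ratio
print(onsets / fs)                                    # trigger on/off times in seconds
```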
An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded on the horizontal components of two broad-band seismometers located a few tens of meters from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding.
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the Äspö mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as depicted by smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).
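For reference, the b-value comparison mentioned here is commonly based on the Aki maximum-likelihood estimator of the Gutenberg-Richter relation, b = log10(e) / (mean(M) - Mc) for events above the completeness magnitude Mc. The snippet below demonstrates the estimator on synthetic magnitude catalogs; the catalogs and completeness magnitude are invented and do not reproduce the thesis data.

```python
import numpy as np

def b_value_mle(magnitudes, m_c):
    """Aki maximum-likelihood b-value of the Gutenberg-Richter relation
    log10 N(>=M) = a - b*M, using only events above the completeness magnitude m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Hypothetical AE magnitude catalogs: the 'cyclic' catalog is generated with a
# steeper (larger) b-value, i.e. relatively fewer large events.
conventional = np.random.default_rng(1).exponential(1 / (2.0 * np.log(10)), 5000) - 4.5
cyclic       = np.random.default_rng(2).exponential(1 / (2.6 * np.log(10)), 5000) - 4.5
print(b_value_mle(conventional, -4.5), b_value_mle(cyclic, -4.5))
```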
Weather extremes pose a persistent threat to society on multiple levels. Besides causing an average of ~37,000 deaths per year, climate-related disasters destroy property and impair economic activity, eroding people's livelihoods and prosperity. As the global temperature rises due to anthropogenic greenhouse gas emissions, the direct impacts of climatic extreme events increase and will intensify further without proper adaptation measures. Additionally, weather extremes do not only have local direct effects: the resulting economic repercussions can propagate upstream or downstream along trade chains, causing indirect effects. One approach to analyze these indirect effects within the complex global supply network is the agent-based model Acclimate. Using and extending this loss-propagation model, I focus in this thesis on three aspects of the relation between weather extremes and economic repercussions.
First, extreme weather events cause direct impacts on local economic performance. I compute daily local direct output loss time series for heat stress, river floods, tropical cyclones, and their consecutive occurrence using (near-future) climate projection ensembles. These regional impacts are estimated based on physical drivers and the local productivity distribution. The direct effects of the aforementioned disaster categories are highly heterogeneous in their regional and temporal distribution. Moreover, their intensities change differently under future warming. Focusing on hurricane-impacted capital, I find that long-term growth losses increase with the heterogeneity of a shock ensemble.
Second, repercussions are distributed sectorally and regionally via economic ripples within the trading network, causing higher-order effects. I use Acclimate to identify three phases of these economic ripples. Furthermore, I compute indirect impacts and analyze overall regional and global changes in production and consumption. Regarding heat stress, global consumer losses double while direct output losses increase by a factor of 1.5 between 2000 and 2039. In my research I identify the effect of economic ripple resonance and introduce it to climate impact research. This effect occurs when the economic ripples of consecutive disasters overlap, which amplifies economic responses such as consumption losses. These loss enhancements are amplified even further with increasing direct output losses, e.g. as caused by the climate crisis.
Transport disruptions can cause economic repercussions as well. To study them, I extend the model Acclimate with geographical transportation routes and expand the decision horizon of economic agents. Using this, I show that policy-induced sudden trade restrictions (e.g. a no-deal Brexit) can significantly reduce the longer-term economic prosperity of affected regions. Analyses of transportation disruptions during typhoon seasons indicate that severely affected regions must reduce production as demand falls during a storm. Substituting suppliers may compensate for fluctuations at the beginning of a storm but fails for prolonged disruptions.
Third, possible coping mechanisms and adaptation strategies arise from the direct and indirect economic responses to weather extremes. Analyzing annual trade changes due to typhoon-induced transport disruptions shows that overall exports rise. This trade resilience increases with higher diversification of network nodes. Further, my research shows that a basic insurance scheme may diminish hurricane-induced long-term growth losses due to faster reconstruction in disaster aftermaths. I find that insurance coverage could be an economically reasonable coping scheme against the higher losses caused by the climate crisis. Indirect effects of weather extremes within the global economic network indicate further adaptation possibilities. For one, diversified linkages reduce the hazard of sharp price increases. In addition, close economic interconnections with regions that do not share the same extreme weather season can be economically beneficial in the medium run. Furthermore, economic ripple resonance effects should be considered when computing costs. Overall, an increase in local adaptation measures reduces economic ripples within the trade network and possible losses elsewhere. In conclusion, adaptation measures are necessary and potential exists, but it seems hardly possible to avoid all direct or indirect losses.
As I show in this thesis, dynamical modeling gives valuable insights into how direct and indirect economic impacts arise from different categories of weather extremes. Further, it highlights the importance of resolving individual extremes and reflecting amplifying effects caused by incomplete recovery or consecutive disasters.
Accurately solving classification problems is nowadays arguably the most relevant machine learning task. Binary classification, which separates two classes only, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and in particular corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence to Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets.
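For orientation, Platt scaling fits a sigmoid P(y=1 | s) = 1 / (1 + exp(A*s + B)) to held-out classifier scores s. The sketch below shows this with scikit-learn, using a plain logistic regression on SVM decision values; it omits the target smoothing of Platt's original formulation and is not the implementation evaluated in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Fit an SVM, then learn a sigmoid mapping from its raw scores to probabilities
# on a held-out calibration split (simplified Platt scaling).
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

svm = LinearSVC().fit(X_tr, y_tr)
scores_cal = svm.decision_function(X_cal).reshape(-1, 1)

platt = LogisticRegression().fit(scores_cal, y_cal)   # learns P(y=1|s) = sigmoid(A*s + B)
probs = platt.predict_proba(scores_cal)[:, 1]
print(probs[:5])
```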
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
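The basic effect of dynamic class information can be sketched naively: given calibrated posterior estimates, restrict them to the allowed subset M and renormalize before taking the argmax. The thesis integrates the restriction into the decomposition-based combination itself; the toy function below only illustrates the idea, and all names are hypothetical.

```python
import numpy as np

def restrict_to_subset(posteriors, class_labels, allowed):
    """Zero out classes outside the allowed subset M, renormalize the calibrated
    posterior estimates, and predict the argmax over the remaining classes."""
    mask = np.isin(class_labels, list(allowed))
    restricted = posteriors * mask
    restricted = restricted / restricted.sum(axis=1, keepdims=True)
    return class_labels[restricted.argmax(axis=1)], restricted

labels = np.array(["A", "B", "C", "D"])
posteriors = np.array([[0.40, 0.35, 0.15, 0.10],
                       [0.10, 0.20, 0.30, 0.40]])
print(restrict_to_subset(posteriors, labels, {"B", "C"})[0])   # -> ['B' 'C']
```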
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
The heterogeneity of today's state-of-the-art computer architectures confronts application developers with an immense degree of complexity that results from two major challenges. First, developers need to acquire profound knowledge about the programming models or the interaction models associated with each type of heterogeneous system resource in order to make efficient use of them. Second, developers must take into account that heterogeneous system resources always need to exchange data with each other in order to work on a problem together. However, this data exchange is always associated with a certain amount of overhead, which is why the amounts of data exchanged should be kept as low as possible.
This thesis proposes three programming abstractions to lessen the burdens imposed by these major challenges with the goal of making heterogeneous system resources accessible to a wider range of application developers. The lib842 compression library provides the first method for accessing the compression and decompression facilities of the NX-842 on-chip compression accelerator available in IBM Power CPUs from user space applications running on Linux. Addressing application development of scale-out GPU workloads, the CloudCL framework makes the resources of GPU clusters more accessible by hiding many aspects of distributed computing while enabling application developers to focus on the aspects of the data parallel programming model associated with GPUs. Furthermore, CloudCL is augmented with transparent data compression facilities based on the lib842 library in order to improve the efficiency of data transfers among cluster nodes. The improved data transfer efficiency provided by the integration of transparent data compression yields performance improvements ranging between 1.11x and 2.07x across four data-intensive scale-out GPU workloads. To investigate the impact of programming abstractions for data placement in NUMA systems, a comprehensive evaluation of the PGASUS framework for NUMA-aware C++ application development is conducted. On a wide range of test systems, the evaluation demonstrates that PGASUS does not only improve the developer experience across all workloads, but that it is also capable of outperforming NUMA-agnostic implementations with average performance improvements of 1.56x.
Based on these programming abstractions, this thesis demonstrates that by providing a sufficient degree of abstraction, the accessibility of heterogeneous system resources can be improved for application developers without occluding performance-critical properties of the underlying hardware.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk, and likewise with increased levels of odd-chain fatty acids (OCFA). While the OCFA also show inverse associations with disease incidence, the direct dietary sources and mode of action of the OCFA remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
The increasing demand for energy in the current technological era and the recent political decisions to phase out nuclear energy have turned humanity's focus to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. These properties have made the efficiency of halide perovskites comparable with that of silicon solar cells for single junctions within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
On the other hand, chalcogenide perovskites are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3, and SrHfS3. The deposition technique of this study proceeds in two stages: in the first stage, the binary compounds are deposited via a solution-processing method; in the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures, whereas some of these materials are reported in the literature to be more favorable in the needle-like non-perovskite configuration.
Lastly, the produced materials are evaluated in two categories: the first concerns the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation, and impurities; in the second, optoelectronic properties such as band gap, conductivity, and surface photovoltage are measured and compared to those of an ideal absorber layer.
The negative impact of crude oil on the environment has led to a necessary transition toward alternative, renewable, and sustainable resources. In this regard, lignocellulosic biomass (LCB) is a promising renewable and sustainable alternative to crude oil for the production of fine chemicals and fuels in a so-called biorefinery process. LCB is composed of polysaccharides (cellulose and hemicellulose), as well as aromatics (lignin). The development of a sustainable and economically advantageous biorefinery depends on the complete and efficient valorization of all components. Therefore, in the new generation of biorefinery, the so-called biorefinery of type III, the LCB feedstocks are selectively deconstructed and catalytically transformed into platform chemicals. For this purpose, the development of highly stable and efficient catalysts is crucial for progress toward viability in biorefinery. Furthermore, a modern and integrated biorefinery relies on process and reactor design, toward more efficient and cost-effective methodologies that minimize waste. In this context, the usage of continuous flow systems has the potential to provide safe, sustainable, and innovative transformations with simple process integration and scalability for biorefinery schemes.
This thesis addresses three main challenges for the future biorefinery: catalyst synthesis, waste feedstock valorization, and usage of continuous flow technology. Firstly, a cheap, scalable, and sustainable approach is presented for the synthesis of an efficient and stable 35 wt.-% Ni catalyst on a highly porous nitrogen-doped carbon support (35Ni/NDC) in pellet shape. Initially, the performance of this catalyst was evaluated for the aqueous-phase hydrogenation of LCB-derived compounds such as glucose, xylose, and vanillin in continuous flow systems. The 35Ni/NDC catalyst exhibited high catalytic performance in the three tested hydrogenation reactions, yielding sorbitol, xylitol, and 2-methoxy-4-methylphenol with yields of 82 mol%, 62 mol%, and 100 mol%, respectively. In addition, the 35Ni/NDC catalyst exhibited remarkable stability over a long time on stream in continuous flow (40 h). Furthermore, the 35Ni/NDC catalyst was combined with commercially available Beta zeolite in a dual-column integrated process for isosorbide production from glucose (yield 83 mol%).
Finally, 35Ni/NDC was applied to the valorization of industrial waste products, namely sodium lignosulfonate (LS) and beech wood sawdust (BWS), in continuous flow systems. The LS depolymerization was conducted by combining solvothermal fragmentation in water/alcohol mixtures (i.e., methanol/water and ethanol/water) with catalytic hydrogenolysis/hydrogenation (SHF). The depolymerization was found to occur thermally in the absence of a catalyst, with the molecular weight tunable via temperature. Furthermore, the SHF generated an optimized cumulative yield of lignin-derived phenolic monomers of 42 mg gLS-1. Similarly, a solvothermal and reductive catalytic fragmentation (SF-RCF) of BWS was conducted using MeOH and MeTHF as solvents. In this case, the optimized total yield of lignin-derived phenolic monomers was 247 mg gKL-1.
Neural conversation models aim to predict appropriate contributions to a (given) conversation by using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011), who investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and research in this area has therefore been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained Self-Attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a transformation from trying to somehow carry on a conversation toward generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing these maxims as demands on conversation models: being fluent, informative, consistent with the given context, coherent, and following a social norm. We then identify different targets (or intervention points) for achieving conversational appropriateness by reviewing recent research in the field.
In this thesis, we investigate the aspect of consistency towards context in greater detail, being one aspect of our interpretation of appropriateness.
During this research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use the data in this thesis to gather new insights into context-augmented dialogue generation.
We further introduce a new way of encoding context within Self-Attention-based neural networks. For that, we elaborate on the issue of space complexity arising from knowledge graphs and propose a concise encoding strategy for structured context, inspired by graph neural networks (Gilmer et al., 2017), to reduce the space complexity of the additional context. We discuss limitations of context augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
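For contrast with the concise encoding, the naive baseline of linearizing knowledge-graph triples into the model input can be sketched in a few lines: every (subject, relation, object) triple becomes tokens prepended to the dialogue history, so the input length, and with it the quadratic Self-Attention cost, grows with the number of triples. The snippet below is a purely hypothetical illustration of that baseline, not the encoding proposed in the thesis.

```python
# Hypothetical triples and dialogue; the special tokens <s>, <r>, <o>,
# <dialogue>, and <turn> are invented markers, not from the thesis.
triples = [
    ("The Matrix", "has_genre", "science fiction"),
    ("The Matrix", "directed_by", "Lana Wachowski"),
    ("speaker_A", "likes", "The Matrix"),
]
dialogue_history = ["Have you seen The Matrix?", "Yes, I loved it!"]

context = " ".join(f"<s> {s} <r> {r} <o> {o}" for s, r, o in triples)
model_input = context + " <dialogue> " + " <turn> ".join(dialogue_history)
print(model_input)   # input length grows linearly with the number of triples
```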
Lastly, we analyzed the potential of reinforcement and transfer learning to improve context-consistency for neural conversation models. We find that current reward functions need to be more precise to enable the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
More than a century ago, the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel's laws, was first reported. In the plant kingdom, three genomic compartments, the nucleus, chloroplast, and mitochondrion, can participate in such phenomena. High-throughput sequencing (HTS) has proven to be a key technology for investigating NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis, and interpretation of such datasets remain challenging owing to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze them.
In the first study, I implemented a novel post-assembly pipeline, called Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized genome model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. Using Illumina Mate Pair and PacBio RSII data, the stoichiometry of the RRPs was determined quantitatively and found to differ by up to three-fold.
In the second study, I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the Oenothera genus and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
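A toy version of the correlation-mapping idea can be written down directly: count nucleotide changes per alignment segment for every accession and correlate the per-segment counts with a numeric phenotype. The sketch below, run on synthetic data, is only meant to illustrate the principle and is not the CM implementation of the thesis.

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_mapping(alignment, reference, phenotype, window=500):
    """Per alignment segment, count nucleotide changes for every accession and
    correlate these counts with a numeric phenotype (toy version; gaps and
    ambiguities are ignored)."""
    results = []
    for start in range(0, alignment.shape[1], window):
        seg = slice(start, min(start + window, alignment.shape[1]))
        changes = (alignment[:, seg] != reference[seg]).sum(axis=1)
        if np.ptp(changes) == 0:                      # skip invariant segments
            continue
        r, p = pearsonr(changes, phenotype)
        results.append((start, r, p))
    return results

rng = np.random.default_rng(0)
ref = rng.choice(list("ACGT"), size=5000)
aln = np.tile(ref, (14, 1))
mut = rng.random(aln.shape) < 0.01                    # sprinkle random substitutions
aln[mut] = rng.choice(list("ACGT"), size=mut.sum())
phenotype = rng.normal(size=14)                       # e.g. a hypothetical competitiveness score
print(correlation_mapping(aln, ref, phenotype)[:3])
```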
For the third study, I programmed a pipeline to investigate an NMI phenomenon known as paramutation in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the phenotype it causes by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a prerequisite for studying mitochondria-associated phenotypes, by identifying genes causing interplastidial competition (CM), and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
The NAC transcription factor (TF) JUNGBRUNNEN1 (JUB1) is an important negative regulator of plant senescence, as well as of gibberellic acid (GA) and brassinosteroid (BR) biosynthesis in Arabidopsis thaliana. Overexpression of JUB1 promotes longevity and enhances tolerance to drought and other abiotic stresses. A similar role of JUB1 has been observed in other plant species, including tomato and banana. Our data show that JUB1 overexpressors (JUB1-OXs) accumulate higher levels of proline than WT plants under control conditions, during the onset of drought stress, and thereafter. We identified that overexpression of JUB1 induces key proline biosynthesis and suppresses key proline degradation genes. Furthermore, bZIP63, the transcription factor involved in proline metabolism, was identified as a novel downstream target of JUB1 by Yeast One-Hybrid (Y1H) analysis and Chromatin immunoprecipitation (ChIP). However, based on Electrophoretic Mobility Shift Assay (EMSA), direct binding of JUB1 to bZIP63 could not be confirmed. Our data indicate that JUB1-OX plants exhibit reduced stomatal conductance under control conditions. However, selective overexpression of JUB1 in guard cells did not improve drought stress tolerance in Arabidopsis. Moreover, the drought-tolerant phenotype of JUB1 overexpressors does not solely depend on the transcriptional control of the DREB2A gene. Thus, our data suggest that JUB1 confers tolerance to drought stress by regulating multiple components. Until today, none of the previous studies on JUB1´s regulatory network focused on identifying protein-protein interactions. We, therefore, performed a yeast two-hybrid screen (Y2H) which identified several protein interactors of JUB1, two of which are the calcium-binding proteins CaM1 and CaM4. Both proteins interact with JUB1 in the nucleus of Arabidopsis protoplasts. Moreover, JUB1 is expressed with CaM1 and CaM4 under the same conditions. Since CaM1.1 and CaM4.1 encode proteins with identical amino acid sequences, all further experiments were performed with constructs involving the CaM4 coding sequence. Our data show that JUB1 harbors multiple CaM-binding sites, which are localized in both the N-terminal and C-terminal regions of the protein. One of the CaM-binding sites, localized in the DNA-binding domain of JUB1, was identified as a functional CaM-binding site since its mutation strongly reduced the binding of CaM4 to JUB1. Furthermore, JUB1 transactivates expression of the stress-related gene DREB2A in mesophyll cells; this effect is significantly reduced when the calcium-binding protein CaM4 is expressed as well. Overexpression of both genes in Arabidopsis results in early senescence observed through lower chlorophyll content and an enhanced expression of senescence-associated genes (SAGs) when compared with single JUB1 overexpressors. Our data also show that JUB1 and CaM4 proteins interact in senescent leaves, which have increased Ca2+ levels when compared to young leaves. Collectively, our data indicate that JUB1 activity towards its downstream targets is fine-tuned by calcium-binding proteins during leaf senescence.
The cumulative dissertation consists of four original articles. These consider isometric muscle actions in healthy humans from a basic physiological perspective (oxygen and blood supply) as well as possibilities for distinguishing between them. It includes a novel approach to measure a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood while adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). This function is mainly of diagnostic interest, as it is captured by measuring the maximal voluntary isometric contraction (MVIC) as a gold standard. For that, a person pulls on or pushes against an insurmountable resistance. However, this underlying pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was addressed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system. The device was designed to measure the Adaptive Force (AF) of the elbow extensor muscles. The AF characterizes the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. At first, it was examined whether these parameters can be reliably assessed by use of the new device (publication 4). Subsequently, the main research question was investigated: is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both research parts contained a sub-question of how the results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always adjusted into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In contrast, their progression over time was partly inverse in type II. The inverse behavior probably depends on the level of deoxygenation since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the found homeostatic steady states seem to be in conflict with the concept of mechanically compressed capillaries and consequently with a restricted blood flow. Anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow might be maintained. HIMA and PIMA did not differ regarding oxygenation and allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight-holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). A changed neuromuscular control might serve as a speculative explanation for these results. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation, irrespective of the performed task (HIMA, PIMA, or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among the AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, the maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by a high test-retest reliability. Despite substantial correlations between force variables, the AFisomax differed significantly from the MVIC and AFmax, which was identical with AFeccmax in almost all cases. Moreover, AFisomax revealed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another possibility for quantifying the AFisomax by use of a handheld device applied in combination with a manual muscle test. This assessment offers a more practical approach for clinical purposes.
The doctoral thesis presented here provides a comprehensive view of laser-based ablation techniques promoted to new fields of operation, including, but not limited to, size, composition, and concentration analyses. It covers various applications of laser ablation techniques over a wide range of sizes, from single molecules all the way to aerosol particles. The research for this thesis started with broadening and deepening the field of application and the fundamental understanding of liquid-phase IR-MALDI. Here, the hybridization of ion mobility spectrometry and microfluidics was realized for the first time by using IR-MALDI as the coupling technique. The setup was used for monitoring the photocatalytic performance of the E-Z isomerization of olefins. Using this hybrid, measurement times were reduced so drastically that such photocatalyst screenings became a matter of minutes rather than hours. With this at hand, triplicate screening measurements could not only be performed within ten minutes, but also with a minimal amount of resources, highlighting the potential of the approach as a green-chemistry alternative to batch-sized reactions. Along with the optimization of the IR-MALDI source for microfluidics came its application to another liquid sample supply method, the hanging drop. This marked one of the first applications of IR-MALDI for the charging of sub-micron particles directly from suspensions via their gas-phase transfer, followed by their characterization with differential mobility analysis. Given the high spectral quality of the data, up to octuply charged particles became experimentally accessible, which laid the foundation for deriving a new charge distribution model for IR-MALDI in that size regime. Moving on to even larger analyte sizes, LIBS and LII were employed as ablation techniques for the solid phase, namely the aerosol particles themselves. Both techniques produce light-emitting events and were used to quantify and classify different aerosols. The unique configuration of stroboscopic imaging, photoacoustics, LII, and LIBS measurements opened new realms for analytical synergies and their potential application in industry. The concept of using low fluences, below 100 J/cm2, and high repetition rates of up to 500 Hz for LIBS makes for an excellent phase-selective LIBS setup. This concept was combined with a new approach to the photoacoustic normalization of LIBS. It was also possible to acquire statistically relevant amounts of data in a matter of seconds, showing the potential of the method as a real-time optimization technique. On the same time axis, but at much lower fluences, LII was used with a similar methodology to quickly quantify and classify airborne particles of different compositions. For the first time, aerosol particles were evaluated regarding their LII susceptibility by using a fluence screening approach.
The careful use of resources and the environment is an essential part of modern mining and of supplying our society with essential raw materials in the future. The present work deals with the development of analytical strategies that meet the technical and practical requirements of the mining process through accurate and rapid on-site analysis and thus contribute to a targeted and sustainable use of mineral deposits. The analyses are based on spectroscopic data obtained by laser-induced breakdown spectroscopy (LIBS) and evaluated by means of multivariate data analysis. LIBS is a promising technique for this task. Its appeal lies in particular in the possibility of measuring field samples on site without sampling or sample preparation, but also in the detectability of all elements of the periodic table and the independence from the state of matter. In combination with multivariate data analysis, rapid data processing is possible, allowing statements about the qualitative elemental composition of the investigated samples. With the aim of determining the distribution of element contents in a deposit, calibration and quantification strategies are evaluated in this work. Exploratory data analysis methods are applied to characterize matrix effects and to classify minerals. The spectroscopic investigations are carried out on soils and rocks as well as on minerals containing copper or rare earth elements, originating from different deposits and agricultural areas.
To develop a calibration strategy, both synthetic samples and field samples from two different agricultural areas were analyzed by LIBS. Using calcium, iron, and magnesium as example analytes, various calibration methods based on univariate and multivariate approaches were evaluated. The quantification strategies are based on the multivariate methods of partial least squares regression (PLSR) and interval PLSR (iPLSR), which take the entire detected spectrum or sub-spectra into account in the analysis. The investigation is based on synthetic and field samples of copper minerals as well as minerals containing rare earth elements. The samples originate from different deposits and exhibit different accompanying matrices. These accompanying matrices were characterized by means of exploratory data analysis. The principal component analysis applied for this purpose groups data on the basis of differences and regularities. This allows statements about similarities and differences of the investigated samples with respect to their origin, chemical composition, or locally induced characteristics. Finally, copper-bearing minerals were classified on the basis of non-negative tensor factorization. This method was used with the aim of assigning unknown samples to classes on the basis of their properties.
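As an illustration of the PLSR-based quantification described above, the sketch below builds a cross-validated PLS regression from synthetic "spectra" to a known element content using scikit-learn; the spectra, line positions, and number of latent components are invented, and the snippet is not the workflow used in this work.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_channels = 60, 2048
concentration = rng.uniform(0.1, 5.0, n_samples)          # "true" element content [wt.-%]
spectra = rng.normal(0, 0.05, (n_samples, n_channels))
spectra[:, 800] += 1.0 * concentration                     # emission line scaling with content
spectra[:, 801] += 0.6 * concentration

pls = PLSRegression(n_components=5)
predicted = cross_val_predict(pls, spectra, concentration, cv=5).ravel()
rmse = np.sqrt(np.mean((predicted - concentration) ** 2))
print(f"cross-validated RMSE: {rmse:.3f} wt.-%")
```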
The combination of LIBS and multivariate data analysis makes it possible to largely dispense with sampling and the corresponding laboratory analysis by performing the analysis on site, and can thus contribute to environmental protection and the conservation of natural resources during the prospection and exploration of new ore veins and deposits. The distribution of element contents in the investigated areas also enables targeted mining and thus an efficient use of mineral raw materials.
This cumulative doctoral thesis deals with high-achieving students, who have again received more attention in German education policy since 2015, for example in the context of support programs, after the focus had initially been placed more strongly on at-risk groups in the wake of the "PISA shock" of 2000. While higher-achieving students are often equated with the "(highly) gifted" in public perception, the thesis goes beyond traditional giftedness research, which conceives of and investigates general intelligence as the basis of students' achievement. Instead, it can rather be located in the field of talent research, which shifts the focus away from general giftedness toward specific predictors and outcomes in individual developmental trajectories. The focus of the thesis is therefore not on intelligence as potential, but on current school achievement, which takes on a double significance as both the result and the starting point of developmental processes in an achievement domain.
The thesis acknowledges the multifaceted nature of the concept of achievement and seeks to create new occasions for discussing this concept and its operationalization in research. To this end, the first part presents a systematic review of the operationalization of high achievement (Article I). Factors on which the operationalizations can differ are identified, and an overview is given of how studies on high achievers published since the year 2000 can be located on these dimensions. It becomes apparent that clear conventions for defining high academic achievement do not yet exist, which means that results from studies dealing with high-achieving students are only comparable to a limited extent. Building on this, the second part of the thesis, comprising two further articles dealing with the achievement development (Article II) and the social integration (Article III) of high-achieving students, pursues the approach of making the variability of results across different operationalizations of high achievement explicit. Among other things, this also facilitates future comparability with other studies. For this purpose, the concept of multiverse analysis is used (Steegen et al., 2016), in which many parallel specifications, each representing a reasonable alternative operationalization, are juxtaposed and compared in terms of their effects (Jansen et al., 2021). Multiverse analysis is conceptually linked to the older research program of critical multiplism (Patry, 2013; Shadish, 1986, 1993), but as a specific method it is currently gaining particular importance in the context of the replication crisis in psychology. The present thesis relies on secondary analyses of large-scale school achievement studies, which have the advantage that a large number of data points (variables and persons) are available for comparing the effects of different operationalizations.
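To make the idea of such a multiverse analysis concrete, the following Python sketch loops over several plausible definitions of "high-achieving" and records the resulting effect estimate for each specification. The data and the operationalizations are invented placeholders, not the actual analyses of Articles II and III.

```python
# Minimal multiverse-analysis sketch (illustrative only; column names such as
# "math_score" or "social_integration" are hypothetical, not from the thesis).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "math_score": rng.normal(500, 100, n),
    "gpa": rng.normal(2.5, 0.8, n),
})
df["social_integration"] = 0.05 * (df["math_score"] - 500) / 100 + rng.normal(0, 1, n)

# Each specification is one plausible way to define "high-achieving".
specifications = {
    "top10_test": df["math_score"] >= df["math_score"].quantile(0.90),
    "top25_test": df["math_score"] >= df["math_score"].quantile(0.75),
    "top10_gpa":  df["gpa"] <= df["gpa"].quantile(0.10),   # lower grade = better
    "top25_gpa":  df["gpa"] <= df["gpa"].quantile(0.25),
}

results = []
for name, is_high in specifications.items():
    # Effect of interest: mean difference in social integration,
    # high-achieving vs. all other students.
    effect = (df.loc[is_high, "social_integration"].mean()
              - df.loc[~is_high, "social_integration"].mean())
    results.append({"specification": name, "effect": effect, "n_high": int(is_high.sum())})

print(pd.DataFrame(results))  # the "multiverse": one effect estimate per specification
```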
In terms of content, Articles II and III take up topics that repeatedly appear in the scientific and public discussion of high achievers and their public perception: Article II first addresses the question of whether high achievers already have a cumulative advantage over their less high-achieving classmates in regular classroom instruction (Matthew effect). The results show that at academic-track schools (Gymnasien) there is no evidence of widening gaps. On the contrary, the gap between the groups narrowed over the course of secondary school, as learning rates were higher among lower-achieving students. Article III, in turn, concerns the social perception of high-achieving students. Here, too, the assumption persists in public discussion that higher achievement may be associated with disadvantages in social integration, which is also reflected in studies on adolescents' gender stereotypes regarding school achievement. In Article III, the potential of multiverse analysis is again used, among other things, to describe the variation of the association across operationalizations of high achievement. Across different operationalizations of high achievement and different facets of social integration, the associations between achievement and social integration turn out to be slightly positive overall. Assumptions of differential effects for boys and girls or for different school subjects are not supported by these analyses.
The dissertation shows that comparing different approaches to operationalizing high achievement, applied within the framework of critical multiplism, can deepen the understanding of phenomena and also has the potential to advance theory development.
Li and B in ascending magmas: an experimental study on their mobility and isotopic fractionation
(2022)
This research study focuses on the behaviour of Li and B during magmatic ascent, and decompression-driven degassing related to volcanic systems. The main objective of this dissertation is to determine whether it is possible to use the diffusion properties of the two trace elements as a tool to trace magmatic ascent rate. With this objective, diffusion-couple and decompression experiments have been performed in order to study Li and B mobility in intra-melt conditions first, and then in an evolving system during decompression-driven degassing.
Synthetic glasses were prepared with a rhyolitic composition and an initial water content of 4.2 wt%, and all experiments were performed using an internally heated pressure vessel in order to ensure precise control of the experimental parameters, such as temperature and pressure.
Diffusion-couple experiments were performed at a fixed pressure of 300 MPa. The temperature was varied in the range of 700-1250 °C, with durations between 0 seconds and 24 hours. The diffusion-couple results show that Li diffusivity is very fast and becomes apparent already at very low temperature. Significant isotopic fractionation occurs due to the faster mobility of 6Li compared to 7Li. Boron diffusion is also accelerated by the presence of water, but the isotopic ratio results are ambiguous, and further investigation would be necessary to properly constrain the isotopic fractionation process of boron in hydrous silicate melts. The isotopic ratio results suggest that boron isotopic fractionation might be affected by the speciation of boron in the silicate melt structure, as 10B and 11B tend to adopt tetrahedral and trigonal coordination, respectively.
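Diffusion-couple profiles of this kind are typically evaluated by fitting an error-function solution of the one-dimensional diffusion equation to the measured concentration profile across the interface. A minimal sketch of such a fit, using synthetic placeholder data rather than the actual SIMS profiles, could look as follows.

```python
# Sketch: extracting a diffusion coefficient from a diffusion-couple profile
# by fitting C(x) = C_mid + (dC/2) * erf(x / (2*sqrt(D*t))).
# The data below are synthetic placeholders, not measured profiles.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

t = 3600.0                                    # run duration in seconds (assumed)
x = np.linspace(-600e-6, 600e-6, 61)          # distance from the couple interface (m)
D_true = 1e-11                                # m^2/s, placeholder value
c = 50 + 25 * erf(x / (2 * np.sqrt(D_true * t)))
c += np.random.default_rng(1).normal(0, 0.5, x.size)   # add measurement noise

def model(x, c_mid, dc_half, D):
    return c_mid + dc_half * erf(x / (2 * np.sqrt(D * t)))

popt, pcov = curve_fit(model, x, c, p0=(50, 25, 1e-12),
                       bounds=(0, [100, 50, 1e-8]))     # keep D positive
print(f"fitted D = {popt[2]:.2e} m^2/s")
```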
Several decompression experiments were performed at 900 °C and 1000 °C, with pressures decreasing from 300 MPa to 71-77 MPa and durations of 30 minutes, two, five and ten hours, in order to trigger water exsolution and the formation of vesicles in the sample. Textural observations and the calculation of the bubble number density confirmed that the bubble size and distribution after decompression are directly proportional to the decompression rate.
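As a rough illustration of how a melt-referenced bubble number density can be obtained from counted bubbles, the following sketch uses placeholder numbers and ignores the stereological corrections applied in practice.

```python
# Sketch: bubble number density (BND) referenced to the melt volume,
# from bubble counts in an analysed sample volume (numbers are placeholders).
n_bubbles = 350                 # bubbles counted in the analysed volume
v_analysed_mm3 = 2.0            # analysed sample volume in mm^3
vesicularity = 0.15             # volume fraction of bubbles in that volume

bnd_melt = n_bubbles / (v_analysed_mm3 * (1.0 - vesicularity))   # bubbles per mm^3 of melt
print(f"BND = {bnd_melt:.1f} mm^-3 (melt-referenced)")
```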
The overall SIMS results for Li and B show that the concentrations of the two trace elements progressively decrease with decreasing decompression rate. This is because, for longer decompression times, the diffusion of Li and B into the bubbles has more time to progress and the melt continuously loses volatiles as the bubbles expand.
For fast decompression, Li and B results show a concentration increase with a δ7Li and δ11B decrease close to the bubble interface, related to the sudden formation of the gas bubble, and the occurrence of a diffusion process in the opposite direction, from the bubble meniscus to the unaltered melt. When the bubble growth becomes dominant and Li and B start to exsolve into the gas phase, the silicate melt close to the bubble gets depleted in Li and B, because of a stronger diffusion of the trace elements into the bubble.
Our data are being applied to different models aiming to combine the dynamics of bubble nucleation and growth with the evolution of trace element concentrations and isotopic ratios. Here, first considerations on these models are presented, together with concluding remarks on this research study. All in all, these remarks constitute a good starting point for further investigations, and the results are a promising base for continuing to study this process; Li and B can indeed show clear dependences on decompression-related magma ascent rates in volcanic systems.
The development of novel programmable materials aiming to control friction in real time holds potential to facilitate innovative lubrication solutions for reducing wear and energy losses. This work describes the integration of light-responsiveness into two lubricating materials, silicone oils and polymer brush surfaces.
The first part focuses on the assessment of 9-anthracene ester-terminated polydimethylsiloxanes (PDMS-A) and, in particular, on the variability of rheological properties and the implications that arise with UV light as an external trigger. The rheometer setup used contains a UV-transparent quartz plate, which enables irradiation and simultaneous measurement of the dynamic moduli. UV-A radiation (354 nm) triggers the cycloaddition reaction between the terminal functionalities of linear PDMS, resulting in chain extension. The newly formed anthracene dimers cleave under UV-C radiation (254 nm) or at elevated temperatures (T > 130 °C). Sequential UV-A radiation and thermal reprogramming over three cycles demonstrate high conversions and reproducible programming of the rheological properties. In contrast, the photochemical back reaction by UV-C is incomplete and can only partially restore the initial rheological properties. The dynamic moduli increase with each cycle of photochemical programming, presumably as a result of a chain segment re-arrangement caused by repeated partial photocleavage and subsequent chain-length-dependent dimerization. In addition, long periods of irradiation cause photooxidative degradation, which damages the photo-responsive functions and consequently reduces the programming range. The absence of oxygen, however, reduces undesired side reactions. Anthracene-functionalized PDMS and native PDMS mix depending on the anthracene ester content and chain length, respectively, and allow fine-tuning of the programmable rheological properties. The work shows the influence of mixing conditions during the photoprogramming step on the rheological properties, indicating that material property gradients induced by light attenuation along the beam have to be considered. Accordingly, thin lubricant films are suggested as a potential application for light-programmable silicone fluids.
The second part compares strategies for the grafting of spiropyran (SP)-containing copolymer brushes from Si wafers and evaluates the light-responsiveness of the surfaces. Preliminary experiments on the kinetics of the thermally initiated RAFT copolymerization of 2-hydroxyethyl acrylate (HEA) and spiropyran acrylate (SPA) in solution show, first, a strong retardation by SP and, second, the dependence of SPA polymerization on light. Surprisingly, the copolymerization of SPA is inhibited in the dark. These findings help improve the synthesis of polar, spiropyran-containing copolymers. The comparison between initiator systems for the grafting-from approach indicates that PET-RAFT is superior to thermally initiated RAFT, suggesting a more efficient initiation of surface-bound CTA by light. Surface-initiated polymerization via PET-RAFT with an initiator system of Eosin Y (EoY) and ascorbic acid (AscA) facilitates copolymer synthesis from HEA and 5-25 mol% SPA. The resulting polymer film, with a thickness of a few nanometers, was detected by atomic force microscopy (AFM) and ellipsometry. Water contact angle (CA) measurements demonstrate photo-switchable surface polarity, which is attributed to the photoisomerization between the non-polar spiropyran and the zwitterionic merocyanine isomer. Furthermore, the obtained spiropyran brushes show potential for further studies on light-programmable properties. In this context, it would be interesting to investigate whether swollen spiropyran-containing polymers change their configuration, and thus their film thickness, under the influence of light. In addition, further experiments using an AFM or microtribometer should evaluate whether light-programmable solvation enables a change in frictional properties between polymer brush surfaces.
Text collections, such as corpora of books, research articles, news, or business documents are an important resource for knowledge discovery. Exploring large document collections by hand is a cumbersome but necessary task to gain new insights and find relevant information. Our digitised society allows us to utilise algorithms to support the information seeking process, for example with the help of retrieval or recommender systems. However, these systems only provide selective views of the data and require some prior knowledge to issue meaningful queries and assess a system's response. The advancements of machine learning allow us to reduce this gap and better assist the information seeking process. For example, instead of sighting countless business documents by hand, journalists and investigators can employ natural language processing techniques, such as named entity recognition. Although this greatly improves the capabilities of a data exploration platform, the wealth of information is still overwhelming. An overview of the entirety of a dataset in the form of a two-dimensional map-like visualisation may help to circumvent this issue. Such overviews enable novel interaction paradigms for users, which are similar to the exploration of digital geographical maps. In particular, they can provide valuable context by indicating how a piece of information fits into the bigger picture. This thesis proposes algorithms that appropriately pre-process heterogeneous documents and compute the layout for datasets of all kinds. Traditionally, given high-dimensional semantic representations of the data, so-called dimensionality reduction algorithms are used to compute a layout of the data on a two-dimensional canvas. In this thesis, we focus on text corpora and go beyond only projecting the inherent semantic structure itself. Therefore, we propose three dimensionality reduction approaches that incorporate additional information into the layout process: (1) a multi-objective dimensionality reduction algorithm to jointly visualise semantic information with inherent network information derived from the underlying data; (2) a comparison of initialisation strategies for different dimensionality reduction algorithms to generate a series of layouts for corpora that grow and evolve over time; (3) and an algorithm that updates existing layouts by incorporating user feedback provided by pointwise drag-and-drop edits. This thesis also contains system prototypes to demonstrate the proposed technologies, including pre-processing and layout of the data and presentation in interactive user interfaces.
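As an illustration of the second approach, initialisation strategies for evolving corpora, the sketch below re-projects a grown corpus while seeding the layout with the previous coordinates. The data are random placeholders, and the thesis compares several such strategies, of which this is only one variant.

```python
# Sketch: keeping successive layouts of a growing corpus visually stable by
# initialising the new projection with the previous layout (illustrative only).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
X_old = rng.normal(size=(100, 50))            # high-dimensional doc vectors at time t
layout_old = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_old)

X_new_docs = rng.normal(size=(20, 50))        # documents added at time t+1
X_new = np.vstack([X_old, X_new_docs])

# Place each new document at the 2D position of its nearest old neighbour,
# then use the combined coordinates to initialise the next layout.
nearest_old = pairwise_distances(X_new_docs, X_old).argmin(axis=1)
init = np.vstack([layout_old, layout_old[nearest_old]]).astype(np.float64)

layout_new = TSNE(n_components=2, init=init, random_state=0).fit_transform(X_new)
```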
In the present thesis I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics as well as of generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films is a challenging endeavor. All the more, the development and implementation of effective experimental tools and comprehensive models are paramount to propel future academic and technological progress.
In all experiments in the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study on the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which, according to the literature, exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation. Higher-order standing spin waves (SSWs) are triggered by coherent and incoherent motion of atoms, in other words phonons, which I quantified with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively. This is done with a combination of multi-temperature, elastic, magneto-elastic, anisotropy, and micro-magnetic modeling.
The collective data from UXRD, to probe the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, to monitor the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group implemented a combination of UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions. Furthermore, I developed a new X-ray diffraction measurement routine which significantly reduces the measurement time of UXRD experiments, by up to an order of magnitude. It is called reciprocal space slicing (RSS) and utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.
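The core geometric step behind RSS, converting the angular shift of a Bragg peak observed on the area detector into a relative lattice-constant change via the differentiated Bragg condition, can be sketched as follows. All numbers are placeholders, not the parameters of the actual setup.

```python
# Sketch: converting the angular shift of a diffraction peak on an area detector
# into a relative lattice-constant change, as exploited by reciprocal space slicing.
import numpy as np

two_theta = np.deg2rad(44.0)        # scattering angle of the probed Bragg peak
pixel_size = 50e-6                  # detector pixel size in m
detector_distance = 1.0             # sample-detector distance in m
peak_shift_pixels = -3.2            # measured centroid shift after excitation

# Shift of the scattering angle 2*theta, and hence of theta:
d_two_theta = peak_shift_pixels * pixel_size / detector_distance
d_theta = d_two_theta / 2.0

# Differentiating Bragg's law (lambda = 2 d sin(theta)) at fixed wavelength
# gives delta_d / d = -cot(theta) * delta_theta:
strain = -d_theta / np.tan(two_theta / 2.0)
print(f"transient strain: {strain * 1e3:.2f} x 10^-3")
```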
Deep geological repositories represent a promising solution for the final disposal of nuclear waste. Due to its low permeability, high sorption capacity and self-sealing potential, Opalinus Clay (OPA) is considered a suitable host rock formation for the long-term storage of nuclear waste in Switzerland and Germany. However, the clay formation is characterized by compositional and structural variabilities including the occurrence of carbonate- and quartz-rich layers, pronounced bedding planes as well as tectonic elements such as pre-existing fault zones and fractures, suggesting heterogeneous rock mass properties.
Characterizing the heterogeneity of host rock properties is therefore essential for safety predictions of future repositories. This includes a detailed understanding of the mechanical and hydraulic properties, deformation behavior and the underlying deformation processes for an improved assessment of the sealing integrity and long-term safety of a deep repository in OPA. Against this background, this thesis presents the results of deformation experiments performed on intact and artificially fractured specimens of the quartz-rich, sandy and clay-rich, shaly facies of OPA. The experiments focus on the influence of mineralogical composition on the deformation behavior as well as the reactivation and sealing properties of pre-existing faults and fractures at different boundary conditions (e.g., pressure, temperature, strain rate).
The anisotropic mechanical properties of the sandy facies of OPA are presented in the first section; they were determined from triaxial deformation experiments using dried and resaturated samples loaded at 0°, 45° and 90° to the bedding plane orientation. A Paterson-type deformation apparatus was used, which allowed investigating how the deformation behavior is influenced by the variation of confining pressure (50 – 100 MPa), temperature (25 – 200 °C), and strain rate (1 × 10⁻³ – 5 × 10⁻⁶ s⁻¹). Constant strain rate experiments revealed brittle to semi-brittle deformation behavior of the sandy facies at the applied conditions. The deformation behavior showed a strong dependence on confining pressure, degree of water saturation, and bedding orientation, whereas the variation of temperature and strain rate had no significant effect on deformation. Furthermore, the sandy facies displays higher strength and stiffness compared to the clay-rich shaly facies deformed at similar conditions by Nüesch (1991). From the obtained results it can be concluded that cataclastic mechanisms dominate the short-term deformation behavior of dried samples from both facies up to elevated pressure (<200 MPa) and temperature (<200 °C) conditions.
The second part presents triaxial deformation tests that were performed to investigate how structural discontinuities affect the deformation behavior of OPA and how the reactivation of preexisting faults is influenced by mineral composition and confining pressure. To this end, dried cylindrical samples of the sandy and shaly facies of OPA were used, which contained a saw-cut fracture oriented at 30° to the long axis. After hydrostatic pre-compaction at 50 MPa, constant strain rate deformation tests were performed at confining pressures of 5, 20 or 35 MPa. With increasing confinement, a gradual transition from brittle, highly localized fault slip including a stress drop at fault reactivation to semi-brittle deformation behavior, characterized by increasing delocalization and non-linear strain hardening without dynamic fault reactivation, can be observed. Brittle localization was limited by the confining pressure at which the fault strength exceeded the matrix yield strength, above which strain partitioning between localized fault slip and distributed matrix deformation occurred. The sandy facies displayed a slightly higher friction coefficient (≈0.48) compared to the shaly facies (≈0.4). In addition, slide-hold-slide tests were conducted, revealing negative or negligible frictional strengthening, which suggests stable creep and long-term weakness of faults in both facies of OPA. The conducted experiments demonstrate that dilatant brittle fault reactivation in OPA may be favored at high overconsolidation ratios and shallow depths, increasing the risk of seismic hazard and the creation of fluid pathways.
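For reference, the friction coefficient of such a saw-cut sample follows from resolving the applied stresses onto the inclined fracture plane. The sketch below uses placeholder stress values, not the measured reactivation stresses.

```python
# Sketch: resolving the axial and confining stresses onto a saw-cut fracture
# inclined at 30 degrees to the sample axis to obtain the friction coefficient
# mu = tau / sigma_n at fault reactivation (stress values are placeholders).
import numpy as np

sigma1 = 120.0          # axial stress at reactivation (MPa), placeholder
sigma3 = 35.0           # confining pressure (MPa)
beta = np.deg2rad(60.0) # angle between fault normal and sigma1 (fault at 30 deg to axis)

sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * np.cos(2 * beta)
tau = 0.5 * (sigma1 - sigma3) * np.sin(2 * beta)
print(f"mu = {tau / sigma_n:.2f}")
```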
The final section illustrates how the sealing capacity of fractures in OPA is affected by mineral composition. Triaxial flow-through experiments using argon gas were performed with dried samples from the sandy and shaly facies of OPA containing a roughened, artificial fracture. Slate, graywacke, quartzite, natural fault gouge, and granite samples were also tested to highlight the influence of normal stress, mineralogy, and diagenesis on the sustainability of fracture transmissivity. With increasing normal stress, a non-linear decrease of fracture transmissivity was observed that resulted in a permanent reduction of transmissivity after stress release. The transmissivity of rocks with a high portion of strong minerals (e.g., quartz) and high unconfined compressive strength was less sensitive to stress changes. In accordance with this, the sandy facies of OPA displayed a higher initial transmissivity that was less sensitive to stress changes compared to the shaly facies. However, the transmissivity of rigid slate was less sensitive to stress changes than that of the sandy facies of OPA, although the slate is characterized by a higher phyllosilicate content. This demonstrates that in addition to mineral composition, other factors such as the degree of metamorphism, cementation, and consolidation have to be considered when evaluating the sealing capacity of phyllosilicate-rich rocks.
The results of this thesis highlighted the role of confining pressure on the failure behavior of intact and artificially fractured OPA. Although the quartz-rich sandy facies may be considered as being more favorable for underground constructions due to its higher shear strength and stiffness than the shaly facies, the results indicate that when fractures develop in the sandy facies, they are more conductive and remain more permeable compared to fractures in the clay-dominated shaly facies at a given stress. The results may provide the basis for constitutive models to predict the integrity and evolution of a future repository. Clearly, the influence of composition and consolidation, e.g., by geological burial and uplift, on the mechanical sealing behavior of OPA highlights the need for a detailed site-specific material characterization for a future repository.
Microplastics (MPs) in the environment are expected to increase in the near future due to the increasing consumption of plastic products and further fragmentation into small pieces. The fate and effects of MPs once released into the freshwater environment are still scarcely studied compared to the marine environment. To understand the possible effects and interactions of MPs in the freshwater environment, planktonic zooplankton organisms are very useful because of their crucial trophic role. Freshwater rotifers in particular are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and the effects of MPs in rotifers under near-natural conditions and to identify processes, such as the aggregation of MPs, the food dilution effect, and increasing MP concentrations, that could influence the final outcome of MPs in the environment. In a near-natural scenario, the interaction of MPs with bacteria and algae and their aggregation, together with particle size and concentration, are considered drivers of ingestion and effects. Aggregation makes smaller MPs more available to rotifers and larger MPs less likely to be ingested. The negative effect caused by the ingestion of MPs was modulated by their size, but also by the quantity and quality of food, which caused variable responses. Rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and decrease population growth and reproductive output. Finally, in a scenario incorporating an entire zooplankton community, MPs were ingested by most individuals depending on their feeding mode, but also on the concentration of MPs, which was found to be essential for their availability. This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and realistic view of the effects of MPs in the ecosystem.
On January 1, 2015, Germany introduced a general statutory minimum wage of €8.50 gross per hour. This thesis analyses the effects of the minimum wage introduction in Germany as well as wage floors in the European context, contributing to national and international research.
The second chapter of this dissertation summarizes the short-run effects of the minimum wage reform found in previous studies.
We show that the introduction of the minimum wage had a positive effect on wages at the bottom of the distribution. Yet, there was still a significant amount of non-compliance shortly after the reform. Additionally, previous evidence points to small negative employment effects mainly driven by a reduction in mini-jobs. Contrary to expectations, though, there were no effects on poverty and general inequality found in the short run. This is mostly due to the fact that working hours were reduced and the increase of hourly wages was therefore not reflected in monthly wages.
The third chapter identifies whether the job losses predicted in ex-ante studies materialized in the short run and, if so, which type of employment was affected the most. To identify the effects, this chapter (as well as chapter four) uses a regional difference-in-differences approach to estimate the effects on regular employment (part- and full-time) and mini-jobs.
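A minimal sketch of such a regional difference-in-differences estimation, written here with the Python statsmodels package on a synthetic panel with hypothetical variable names rather than the actual administrative data, is shown below.

```python
# Sketch of a regional difference-in-differences estimate: outcome changes in
# strongly affected ("high-bite") vs. weakly affected regions, before vs. after 2015.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build a small synthetic panel of regions observed before and after 2015
# (purely illustrative; variable names and the effect size are invented).
rng = np.random.default_rng(0)
regions = [f"r{i}" for i in range(40)]
rows = []
for region in regions:
    high_bite = int(region in regions[:20])          # first half: strongly affected regions
    for year in range(2012, 2019):
        post = int(year >= 2015)
        employment = 100 + rng.normal(0, 1) - 2.0 * high_bite * post  # built-in "true" effect
        rows.append(dict(region=region, year=year, high_bite=high_bite,
                         post=post, employment=employment))
df = pd.DataFrame(rows)

# Region and year fixed effects plus the interaction of interest.
model = smf.ols("employment ~ high_bite:post + C(region) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(result.params["high_bite:post"])               # difference-in-differences estimate
```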
Our results suggest that the minimum wage has slightly reduced overall employment, mainly due to a decline in mini-jobs.
The fourth chapter follows the same methodological approach as the previous one. It is motivated by the fact that women are often overrepresented among low-wage employees. Thus, the primary research question in this chapter is whether the minimum wage has led to a narrowing of the gender wage gap. To answer this, we identify the effects on the wage gap at the 10th and 25th percentiles and at the mean of the underlying gender-specific wage distributions. Our results imply that for eligible employees the gender wage gap at the 10th percentile decreased by 4.6 percentage points between 2014 and 2018 in high-bite regions compared to low-bite regions. We estimate this to be a reduction of 32% compared to 2014. Higher up the distribution – i.e. at the 25th percentile and the mean – the effects are smaller and not as robust.
The fifth chapter keeps the gender-specific emphasis on minimum wage effects. However, in contrast to the rest of the dissertation, it widens the scope to other European Union countries. Following the rationale of the previous chapter, women could potentially benefit particularly from a minimum wage. However, they could also be more prone to suffer from the possibly induced job losses or reductions in working hours. Therefore, this chapter summarizes existing evidence from EU member states dealing with the relationship between wage floors and the gender wage gap. In addition, it provides a systematic summary of studies that examine the impact of minimum wages on employment losses or changes in working hours that particularly affect women. The evidence shows that higher wage floors are often associated with smaller gender wage gaps. With respect to employment, women do not appear to experience greater employment losses than men per se. However, studies show that the minimum wage has a particular impact on part-time workers. Therefore, it cannot be ruled out that the negative correlation between the minimum wage and the gender wage gap is related to the job losses of these lower-paid, often female, part-time workers. This working arrangement should therefore be specially focused on in the context of minimum wages.
This dissertation aimed to identify differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all miRNA quality controls prior to library preparation were successful and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that they reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were within their optimal ranges and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. The differentially expressed and filtered miRNAs were subjected to miRDB for target prediction. Of the four resulting miRNAs, three were downregulated: hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136, while one was upregulated: hsa-miR-550a-3p. miRNA target prediction showed that chronic pain in polyneuropathy might be the result of a combination of miRNA-mediated dysregulations and imbalances of blood flow/pressure and neural activity. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
Since TRPV1 appears to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 during PKA phosphorylation of ARMS, was characterized. For this purpose, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the TRPV1/ARMS interaction rate. While phosphorylation of ARMS T903 does not increase the interaction rate with TRPV1, ARMS S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the TRPV1/ARMS interaction rate, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical therapeutic analgesic alternative to stop ARMS-mediated TRPV1 sensitization.
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the increase of air temperature with decreasing altitude: as the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decrease in ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming – from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of those feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
Furthermore, the increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering their albedo, which in turn enhances melt rates – a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected, and adds up to 58% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback.
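The flavour of such an energy-balance-based melt scheme, and of the melt-albedo feedback it captures, can be conveyed by the following schematic sketch. It is not the actual diurnal Energy Balance Model implemented in PISM, and the coefficients are placeholders.

```python
# Schematic surface-melt estimate from an energy-balance-type parameterisation
# (illustrative only; the coupled dEBM-PISM scheme is more involved).
RHO_W = 1000.0        # density of water, kg m^-3
L_FUSION = 3.34e5     # latent heat of fusion, J kg^-1

def daily_melt(albedo, insolation_w_m2, temp_c, c1=10.0, c2=-30.0):
    """Melt in metres of water equivalent per day.

    Energy available for melt: absorbed shortwave radiation plus a linearised
    longwave/turbulent term c1*T + c2; negative budgets produce no melt.
    """
    energy_flux = (1.0 - albedo) * insolation_w_m2 + c1 * temp_c + c2   # W m^-2
    melt_rate = max(energy_flux, 0.0) / (RHO_W * L_FUSION)              # m w.e. s^-1
    return melt_rate * 86400.0

# A darker (lower-albedo) surface melts more under identical forcing,
# which is the essence of the melt-albedo feedback:
print(daily_melt(albedo=0.8, insolation_w_m2=300.0, temp_c=1.0))
print(daily_melt(albedo=0.6, insolation_w_m2=300.0, temp_c=1.0))
```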
Another process that could influence the resilience of the Greenland Ice Sheet is the warming induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen’s flow law impacts the simulated response to warming. In a flow line setup at fixed climatic mass balance, the uncertainty in flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
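To illustrate why this uncertainty matters, the sketch below pins Glen's flow law, strain rate = A·τⁿ, to a common reference point and shows how the deformation rate away from that reference diverges for different stress exponents. The numbers are generic textbook values, not the calibration used in the thesis.

```python
# Sketch: sensitivity of ice deformation to the stress exponent n in Glen's flow law.
tau_ref = 1.0e5      # reference stress (1 bar, in Pa) where all laws are pinned together
tau = 0.5e5          # stress of interest, Pa
A3 = 2.4e-24         # rate factor for n = 3, Pa^-3 s^-1 (approximate value for temperate ice)
rate_ref = A3 * tau_ref ** 3

for n in (2.0, 3.0, 4.0):
    A_n = rate_ref / tau_ref ** n        # rescale A so all exponents agree at tau_ref
    print(f"n = {n}: strain rate at 0.5 bar = {A_n * tau ** n:.2e} 1/s")
```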
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios – while still staying within the limits of the Paris Agreement – include a temporary overshoot of global warming. I show that an overshoot of 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these feedbacks influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and in the long term.
The current generation of ground-based instruments has rapidly extended the limits of the range accessible to us with very-high-energy (VHE) gamma-rays, and more than a hundred sources have now been detected in the Milky Way. These sources represent only the tip of the iceberg, but their number has reached a level that allows population studies. In this work, a model of the global population of VHE gamma-ray sources based on the most comprehensive census of Galactic sources in this energy regime, the H.E.S.S. Galactic plane survey (HGPS), will be presented. A population synthesis approach was followed in the construction of the model. Particular attention was paid to correcting for the strong observational bias inherent in the sample of detected sources. The methods developed for estimating the model parameters have been validated with extensive Monte Carlo simulations and will be shown to provide unbiased estimates of the model parameters. With these methods, five models for different spatial distributions of sources have been constructed. To test the validity of these models, their predictions for the composition of sources within the sensitivity range of the HGPS are compared with the observed sample. With one exception, similar results are obtained for all spatial distributions, showing that the modelled longitude profile and the source distribution over photon flux are in fair agreement with observation. Regarding the latitude profile and the source distribution over angular extent, it becomes apparent that the model needs to be further adjusted to bring its predictions into agreement with observation. Based on the model, predictions of the global properties of the Galactic population of VHE gamma-ray sources and the prospects of the Cherenkov Telescope Array (CTA) will be presented.
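The population-synthesis logic, drawing synthetic sources from assumed spatial and luminosity distributions and applying a sensitivity cut to mimic the observational bias, can be sketched as follows. All distributions and thresholds are strongly simplified placeholders, not the calibrated HGPS model.

```python
# Sketch of the population-synthesis idea: draw a synthetic source population,
# compute fluxes at the observer, and keep only sources above a toy sensitivity.
import numpy as np

rng = np.random.default_rng(42)
n_sources = 5000

# Galactocentric radii (kpc) and luminosities (arbitrary units) of the synthetic population
radius = rng.gamma(shape=2.0, scale=4.0, size=n_sources)
luminosity = 10 ** rng.normal(loc=34.0, scale=0.7, size=n_sources)

# Distance from the observer, assuming the Sun at 8.5 kpc and random azimuth
phi = rng.uniform(0, 2 * np.pi, n_sources)
d = np.sqrt(radius**2 + 8.5**2 - 2 * radius * 8.5 * np.cos(phi))   # kpc

flux = luminosity / (4 * np.pi * (d * 3.086e21) ** 2)              # per cm^2
detected = flux > 1e-12                                             # toy sensitivity cut

print(f"{detected.sum()} of {n_sources} synthetic sources pass the sensitivity cut")
```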
CTA will significantly increase our knowledge of VHE gamma-ray sources by lowering the threshold for source detection, primarily through a larger detection area compared to current-generation instruments. In ground-based gamma-ray astronomy, the sensitivity of an instrument depends strongly, in addition to the detection area, on the ability to distinguish images of air showers produced by gamma-rays from those produced by cosmic rays, which are a strong background. This means that the number of detectable sources depends on the background rejection algorithm used and therefore may also be increased by improving the performance of such algorithms. In this context, in addition to the population model, this work presents a study on the application of deep-learning techniques to the task of gamma-hadron separation in the analysis of data from ground-based gamma-ray instruments. Based on a systematic survey of different neural-network architectures, it is shown that robust classifiers can be constructed with competitive performance compared to the best existing algorithms. Despite the broad coverage of neural-network architectures discussed, only part of the potential offered by the application of deep-learning techniques to the analysis of gamma-ray data is exploited in the context of this study. Nevertheless, it provides an important basis for further research on this topic.
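For orientation, a gamma-hadron classifier of the kind studied here is, at its core, a binary image classifier. The following PyTorch sketch shows a generic small convolutional network on placeholder camera images; it is not one of the architectures evaluated in the thesis.

```python
# Sketch of a small convolutional network for gamma/hadron separation on camera images.
import torch
import torch.nn as nn

class ShowerClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),            # single logit: gamma vs. hadron
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ShowerClassifier()
images = torch.randn(8, 1, 64, 64)       # batch of resampled camera images (placeholder)
labels = torch.randint(0, 2, (8,)).float()
logits = model(images).squeeze(1)
loss = nn.BCEWithLogitsLoss()(logits, labels)
loss.backward()                          # one (dummy) training step's backward pass
```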
Macrophages play an integral role in the innate immune system. It is critically important for basic research and therapeutic applications to find approaches to potentially modulate their function as the first line of defense. Transient genetic engineering via delivery of synthetic mRNA can serve for such purposes as a robust, reliable and safe technology to modulate macrophage functions. However, a major drawback particularly in the transfection of sensitive immune cells such as macrophages is the immunogenicity of exogenous IVT-mRNAs. Consequently, the direct modulation of human macrophage activity by mRNA-mediated genetic engineering was the aim of this work. The synthetic mRNA can instruct macrophages to synthesize specific target proteins, which can steer macrophage activity in a tailored fashion. Thus, the focus of this dissertation was to identify parameters triggering unwanted immune activation of macrophages, and to find approaches to minimize such effects. When comparing different carrier types as well as mRNA chemistries, the latter had unequivocally a more pronounced impact on activation of human macrophages and monocytes. Exploratory investigations revealed that the choice of nucleoside chemistry, particularly of modified uridine, plays a crucial role for IVT-mRNA-induced immune activation, in a dose-dependent fashion. Additionally, the contribution of the various 5’ cap structures tested was only minor. Moreover, to address the technical aspects of the delivery of multiple genes as often mandatory for advanced gene delivery studies, two different strategies of payload design were investigated, namely “bicistronic” delivery and “monocistronic” co-delivery. The side-by-side comparison of mRNA co-delivery via a bicistronic design (two genes, one mRNA) with a monocistronic design (two genes, two mRNAs) unexpectedly revealed that, despite the intrinsic equimolar nature of the bicistronic approach, it was outperformed by the monocistronic approach in terms of reliable co-expression when quantified on the single cell level. Overall, the incorporation of chemical modifications into IVT-mRNA by using respective building blocks, primarily with the aim to minimize immune activation as exemplified in this thesis, has the potential to facilitate the selection of the proper mRNA chemistry to address specific biological and clinical challenges. The technological aspects of gene delivery evaluated and validated by the quantitative methods allowed us to shed light on crucial process parameters and mRNA design criteria required for reliable co-expression schemes of IVT-mRNA delivery.
Plants can be primed to survive the exposure to a severe heat stress (HS) by prior exposure to a mild HS. The information about the priming stimulus is maintained by the plant for several days. This maintenance of acquired thermotolerance, or HS memory, is genetically separable from the acquisition of thermotolerance itself and several specific regulatory factors have been identified in recent years.
On the molecular level, HS memory correlates with two types of transcriptional memory, type I and type II, that characterize a partially overlapping subset of HS-inducible genes. Type I transcriptional memory or sustained induction refers to the sustained transcriptional induction above non-stressed expression levels of a gene for a prolonged time period after the end of the stress exposure. Type II transcriptional memory refers to an altered transcriptional response of a gene after repeated exposure to a stress of similar duration and intensity. In particular, enhanced re-induction refers to a transcriptional pattern in which a gene is induced to a significantly higher degree after the second stress exposure than after the first.
This thesis describes the functional characterization of a novel positive transcriptional regulator of type I transcriptional memory, the heat shock transcription factor HSFA3, and compares it to HSFA2, a known positive regulator of type I and type II transcriptional memory. It investigates type I transcriptional memory and its dependence on HSFA2 and HSFA3 for the first time on a genome-wide level, and gives insight on the formation of heteromeric HSF complexes in response to HS. This thesis confirms the tight correlation between transcriptional memory and H3K4 hyper-methylation, reported here in a case study that aimed to reduce H3K4 hyper-methylation of the type II transcriptional memory gene APX2 by CRISPR/dCas9-mediated epigenome editing. Finally, this thesis gives insight into the requirements for a heat shock transcription factor to function as a positive regulator of transcriptional memory, both in terms of its expression profile and protein abundance after HS and the contribution of individual functional domains.
In summary, this thesis contributes to a more detailed understanding of the molecular processes underlying transcriptional memory and therefore HS memory, in Arabidopsis thaliana.
The world energy consumption has constantly increased every year due to economic development and population growth. This inevitably causes vast amounts of CO2 emissions, and the CO2 concentration in the atmosphere keeps increasing with economic growth. To reduce CO2 emissions, various methods have been developed, but there are still many bottlenecks to be solved. Solvents that easily absorb CO2, such as monoethanolamine (MEA) and diethanolamine, for example, have limitations including solvent loss, amine degradation, vulnerability to heat, toxicity, and the high cost of regeneration, which is mainly caused by the chemisorption process. Although some of these drawbacks can be compensated through physisorption with zeolites and metal-organic frameworks (MOFs), which display significant adsorption selectivity and capacity even at ambient conditions, limitations for these materials still exist. Zeolites demand relatively high regeneration energy and have limited adsorption kinetics due to their exceptionally narrow pore structure. MOFs have low stability against heat and moisture and high manufacturing costs.
Nanoporous carbons have recently received attention as an attractive functional porous material due to their unique properties. These materials are crucial in many applications of modern science and industry such as water and air purification, catalysis, gas separation, and energy storage/conversion due to their high chemical and thermal stability, and in particular electronic conductivity in combination with high specific surface areas. Nanoporous carbons can be used to adsorb environmental pollutants or small gas molecules such as CO2 and to power electrochemical energy storage devices such as batteries and fuel cells. In all fields, their pore structure or electrical properties can be modified depending on their purposes.
This thesis provides an in-depth look at novel nanoporous carbons from the synthetic and the application point of view. The interplay between pore structure, atomic construction, and the adsorption properties of nanoporous carbon materials is investigated. Novel nanoporous carbon materials are synthesized by using simple precursor molecules containing heteroatoms through a facile templating method. The affinity, and in turn the adsorption capacity, of the carbon materials toward polar gas molecules (CO2 and H2O) is enhanced by the modification of their chemical construction. It is also shown that these properties are important in electrochemical energy storage, here especially for supercapacitors with aqueous electrolytes, which are basically based on the physisorption of ions on carbon surfaces. This shows that nanoporous carbons can be a “functional” material with specific physical or chemical interactions with guest species, just like zeolites and MOFs.
The synthesis of sp2-conjugated materials with high heteroatom content from a mixture of citrazinic acid and melamine, in which heteroatoms are already bonded in specific motifs, is illustrated. By controlling the removal procedure of the salt template and the condensation temperature, the role of the salts in the formation of porosity and as coordination sites for the stabilization of heteroatoms is demonstrated. A high nitrogen content of up to 20 wt.%, oxygen contents of up to 19 wt.%, and a high CO2/N2 selectivity with a maximum CO2 uptake at 273 K of 5.31 mmol g−1 are achieved. In addition, the further controlled thermal condensation of the precursor molecules and the advanced functional properties of the synthesized porous carbons in applications are described. The materials have different porosities and atomic constructions, exhibiting a high nitrogen content of up to 25 wt.%, a high porosity with a specific surface area of more than 1800 m2 g−1, and a high CO2/N2 selectivity of 62.7. The pore structure as well as the surface properties also affect water adsorption, with a remarkably high Qst of over 100 kJ mol−1, even higher than that of well-known adsorbents such as zeolites or CaCl2. Furthermore, the evolution of the pore structure of HAT-CN-derived carbon materials during condensation in vacuum is fundamentally understood, which is essential to maximize the utilization of the porous system; these materials show a significant difference in pore volume, 0.5 cm3 g−1 without vacuum and 0.25 cm3 g−1 with vacuum.
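For context, an ideal CO2/N2 selectivity of the kind quoted above is commonly computed from single-gas uptakes at flue-gas-like partial pressures. The sketch below uses placeholder uptake values, not the measured isotherms.

```python
# Sketch: ideal CO2/N2 selectivity from single-gas uptakes at a 15:85
# CO2/N2 partial-pressure ratio (uptake values are placeholders).
q_co2 = 2.1      # CO2 uptake in mmol/g at 0.15 bar
q_n2 = 0.06      # N2 uptake in mmol/g at 0.85 bar
y_co2, y_n2 = 0.15, 0.85

selectivity = (q_co2 / q_n2) * (y_n2 / y_co2)
print(f"ideal CO2/N2 selectivity: {selectivity:.1f}")
```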
The molecular design of heteroatom-containing porous carbons derived from abundant and simple molecules is introduced in the presented thesis. Abundant precursors that already contain a high amount of nitrogen or oxygen are beneficial for achieving enhanced interactions with adsorptives. The physical and chemical properties of these heteroatom-doped porous carbons are governed mainly by two parameters, namely the porosity arising from the pore structure and the polarity arising from the atomic composition of the surface. In other words, controlling the porosity as well as the polarity of the carbon materials is studied to understand interactions with different guest species, which provides fundamental knowledge for their utilization in various applications.
Proteins play an essential role in a multitude of processes. Understanding these functions requires elucidating their structure and their binding behavior toward other molecules such as proteins, peptides, carbohydrates, or small molecules. In the first part of this work, the wild type and the point mutant N126W of a carbohydrate-binding protein from the heat-stable bacterium C. thermocellum were investigated; this protein is part of a complex that can recognize, bind, and degrade carbohydrates such as cellulose. For this purpose, the protein was produced in E. coli bacteria and purified by metal-chelate and size-exclusion chromatography. The isotope-labeled proteins were studied by nuclear magnetic resonance (NMR) spectroscopy. H/D exchange experiments revealed easily and poorly accessible sites in the protein for possible ligand interactions. Subsequently, an interaction of both proteins with cellulose fragments was detected. These fragments interact via intermolecular forces with the side chains of aromatic amino acids and via hydrogen bonds with other residues. Furthermore, the calcium binding site was analyzed, and it could be shown that it is occupied by a calcium ion after protein production, that this ion can be removed with the chelating agent EDTA, and that the site can be reversibly re-occupied. Finally, two methods (grafting from and grafting to) were used in an attempt to couple the protein to a temperature-responsive polymer (poly-N-isopropylacrylamide) in order to influence properties such as solubility or stability. It was found that while the grafting-from method (the polymer grows directly from the protein) led to partial unfolding and destabilization of the protein, with the grafting-to method (the polymer is synthesized separately and then coupled to the protein) the protein retained its stability and only a few polymer chains were attached. The second part of this work dealt with the interaction of two LIM domains of the protein paxillin with the cytoplasmic domains of the peptides integrin-β1 and integrin-β3. These play an important role in cell migration, interacting with a multitude of other proteins to form focal adhesions (multiprotein complexes). During the production of the integrin-β3 peptide, size-exclusion chromatography and mass spectrometry revealed a degradation in which various amino acid groups were cleaved off. This could be prevented by adding the serine protease inhibitor AEBSF. Subsequently, the direct interaction of the proteins was investigated by NMR. It was shown that integrin-β1 and integrin-β3 bind to the same position, namely the flexible loop of the LIM3 domain of paxillin. The dissociation constants showed that integrin-β1 binds to paxillin with an approximately tenfold higher affinity than integrin-β3. While the binding site of paxillin on integrin-β1 lies in the middle of the peptide, for integrin-β3 the C-terminus is essential. Therefore, the three C-terminal amino acids were removed and the binding studies were repeated, showing that the affinity was thereby almost completely abolished. Finally, the flexible loop of the LIM3 domain was mutated to two other amino acid sequences in order to abolish binding on the paxillin side. However, both circular dichroism spectroscopy and NMR spectroscopy showed that the mutations led to partial unfolding of the domain, so these mutants could not be identified as suitable candidates for these studies.
Nation, migration, narration
(2022)
In France and Germany, immigration has become a central issue over the past decades. It is in this context that rap emerged. Rap enjoys enormous popularity among populations with an immigrant background. Nevertheless, the rappers engage no less with their French or German identity.
The aim of this work is to explain this apparent contradiction: how can people with an immigrant background, who express unease in the face of a racism they consider omnipresent, feel fully French or German?
The work is divided into the following chapters: context of the study, methodology, and theories (I); analysis of the different forms of national identity through the lens of the corpus (II); a three-stage chronological analysis of the relationship to society in the rappers' texts (III-V); case studies of Kery James in France and Samy Deluxe in Germany (VI).
The post-antiretroviral therapy era has transformed HIV into a chronic disease, and non-HIV comorbidities (e.g., cardiovascular and mental diseases) are more prevalent in people living with HIV (PLWH). Aside from traditional risk factors, the sources of these non-HIV comorbidities include HIV infection itself, inflammation, distorted immune activation, the burden of chronic diseases, and an unhealthy lifestyle such as sedentarism. Exercise is known for its beneficial effects on mental and physical health, which is why it is recommended to prevent and treat various cardiovascular and mental diseases in the general population. This cumulative thesis aimed to understand the relation between exercise and non-HIV comorbidities in German PLWH. Four studies were conducted to 1) understand the effects of exercise on cardiorespiratory fitness and muscle strength in PLWH through a systematic review and meta-analyses and 2) determine the likelihood of German PLWH developing non-HIV comorbidities in a cross-sectional study. The meta-analytic examination indicates that cardiorespiratory fitness (VO2max SMD = 0.61 ml·kg-1·min-1, 95% CI: 0.35-0.88, z = 4.47, p < 0.001, I² = 50%) and strength (notably lower-body strength, by 16.8 kg, 95% CI: 13-20.6, p < 0.001) of PLWH improve after an exercise intervention in comparison to a control group. Cross-sectional data suggest that exercise has a positive effect on the mental health of German PLWH (fewer anxiety and depressive symptoms) and protects against the development of anxiety (PR: 0.57, 95% CI: 0.36-0.91, p = 0.01) and depression (PR: 0.62, 95% CI: 0.41-0.94, p = 0.01). Likewise, exercise duration is related to a lower likelihood of reporting heart arrhythmias (PR: 0.20, 95% CI: 0.10-0.60, p < 0.01) and exercise frequency to a lower likelihood of reporting diabetes mellitus (PR: 0.40, 95% CI: 0.10-1, p < 0.01) in German PLWH. A preliminary recommendation for German PLWH who want to engage in exercise is to exercise ≥ 1 time per week, at an intensity of 5 METs per session or > 103 MET·min·day-1, for a duration of ≥ 150 minutes per week. Nevertheless, further research is needed to understand the exercise dose-response and its protective effect against cardiovascular diseases, anxiety, and depression in German PLWH.
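As a brief illustration of the effect-size metrics reported above (an addition for context, not part of the thesis): the following minimal Python sketch shows how a standardized mean difference (Hedges' g) and a prevalence ratio with an approximate 95% confidence interval can be computed from summary statistics; all input numbers are invented placeholders.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) between two groups."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    return d * j

def prevalence_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Prevalence ratio with an approximate 95% CI (log-normal approximation)."""
    pr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se_log = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
    lower = math.exp(math.log(pr) - 1.96 * se_log)
    upper = math.exp(math.log(pr) + 1.96 * se_log)
    return pr, (lower, upper)

# Invented example values, purely for illustration
print(hedges_g(m1=32.1, sd1=5.0, n1=25, m2=29.0, sd2=5.4, n2=25))
print(prevalence_ratio(cases_exp=12, n_exp=150, cases_unexp=30, n_unexp=200))
```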
In recent years, organic solar cells (OSCs) have reached high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives had long been the centerpiece of the acceptor materials used in organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. Nevertheless, the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs remain topics of debate.
This thesis focuses on understanding reduced-Langevin recombination in relation to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason for the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by the study of a newly synthesized NFA Y-series derivative that exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses non-geminate recombination toward a non-Langevin system. In the third work, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices: the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance-dependent Photovoltage (RPV), as well as steady-state techniques such as Bias-Assisted Charge Extraction (BACE), Temperature-Dependent Space Charge Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
Together, the outcomes of this thesis draw a complex picture of the multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of thick NFA devices through adjustment of the amount of solvent additive in the active blend solution. The work also highlights potentially critical gaps in the current experimental understanding of fundamental charge interactions and recombination dynamics.
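For context on the term "reduced-Langevin recombination" (an explanatory addition, not part of the abstract): in the Langevin picture, the bimolecular recombination coefficient of free electrons and holes is set by their drift into each other's Coulomb capture radius, and blends are often characterized by how far their measured coefficient falls below this limit.

```latex
% Langevin recombination coefficient and reduction factor (standard textbook expressions)
\begin{align}
  k_{\mathrm{L}} &= \frac{q}{\varepsilon_0 \varepsilon_r}\,(\mu_n + \mu_p), \\
  R &= \gamma\, k_{\mathrm{L}} \left(np - n_i^2\right),
  \qquad \gamma = \frac{k_{\mathrm{exp}}}{k_{\mathrm{L}}} \le 1,
\end{align}
```

Here μ_n and μ_p are the electron and hole mobilities, ε_0 ε_r is the permittivity of the blend, and γ is the Langevin reduction factor; a strongly "non-Langevin" system has γ ≪ 1, which directly benefits the fill factor of thick devices.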
The key to reducing the energy required for specific transformations in a selective manner is the use of a catalyst, a small molecular platform that determines which type of energy is used. The field of photocatalysis exploits light energy to transform one type of molecule into others that are more valuable and useful.
However, many challenges arise in this field: the catalysts employed are usually based on metal derivatives, whose abundance is limited, which cannot be recycled, and which are expensive. Therefore, carbon nitride materials are used in this work to expand the horizons of photocatalysis.
Carbon nitrides are organic materials that can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods and shaped to enable new types of processes.
Indeed, they enabled a new photocatalytic synthetic strategy, the dichloromethylation of enones by dichloromethyl radicals generated in situ from chloroform, a novel route to building blocks for the production of active pharmaceutical compounds.
Then, the ductility of these materials allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, demonstrating the great potential of this flexible technology in photocatalysis.
Afterwards, their ability to store charges was exploited in the reduction of organic substrates under dark conditions, providing new insights into multisite proton-coupled electron transfer processes.
Furthermore, the combination of carbon nitrides with flavins allowed the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials that may help to enable further scientific discoveries and to build a more sustainable future.
Due to the major role of greenhouse gas emissions in global climate change, the development of non-fossil energy technologies is essential. Deep geothermal energy represents such an alternative, which offers promising properties such as a high base load capability and a large untapped potential. The present work addresses barite precipitation within geothermal systems and the associated reduction in rock permeability, which is a major obstacle to maintaining high efficiency. In this context, hydro-geochemical models are essential to quantify and predict the effects of precipitation on the efficiency of a system.
The objective of the present work is to quantify the induced injectivity loss using numerical and analytical reactive transport simulations. For the calculations, the fractured-porous reservoirs of the German geothermal regions North German Basin (NGB) and Upper Rhine Graben (URG) are considered.
Similar depth-dependent precipitation potentials could be determined for both investigated regions (2.8-20.2 g/m³ of fluid). However, the reservoir simulations indicate that the injectivity loss due to barite deposition in the NGB is significant (1.8%-6.4% per year) and the longevity of the system is affected as a result; this is especially true for deeper reservoirs (3000 m). In contrast, simulations of URG sites indicate a minor role of barite (< 0.1%-1.2% injectivity loss per year). The key differences between the investigated regions are the reservoir thicknesses and the presence of fractures in the rock, as well as the ionic strength of the fluids. The URG generally has fractured-porous reservoirs with much greater thicknesses, resulting in a wider distribution of precipitates in the subsurface. Furthermore, ionic strengths are higher in the NGB, which accelerates barite precipitation and causes it to be concentrated more strongly around the wellbore. The more concentrated the precipitates are around the wellbore, the higher the injectivity loss.
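As an illustrative aside (not taken from the thesis): reactive transport models typically translate the precipitated mineral volume into a permeability reduction through a porosity-permeability relation; a frequently used choice is a Kozeny-Carman-type law, which also illustrates why precipitates concentrated near the wellbore reduce injectivity disproportionately. Whether this exact relation is used in the present work is an assumption made only for illustration.

```latex
% Kozeny-Carman-type porosity-permeability coupling (generic, illustrative form)
\begin{equation}
  \frac{k}{k_0} = \left(\frac{\phi}{\phi_0}\right)^{3}
                  \left(\frac{1-\phi_0}{1-\phi}\right)^{2},
  \qquad \phi = \phi_0 - \frac{V_{\mathrm{barite}}}{V_{\mathrm{bulk}}},
\end{equation}
```

Here k_0 and φ_0 are the initial permeability and porosity, and V_barite/V_bulk is the precipitated barite volume per bulk rock volume.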
In this work, a workflow was developed within which numerical and analytical models can be used to estimate and quantify the risk of barite precipitation within the reservoir of geothermal systems. A key element is a newly developed analytical scaling score that provides a reliable estimate of induced injectivity loss. The key advantage of the presented approach compared to fully coupled reservoir simulations is its simplicity, which makes it more accessible to plant operators and decision makers. Thus, in particular, the scaling score can find wide application within geothermal energy, e.g., in the search for potential plant sites and the estimation of long-term efficiency.
Organic solar cells offer an efficient and cost-effective alternative for solar energy harvesting. This type of photovoltaic cell typically consists of a blend of two organic semiconductors, an electron-donating polymer and a low-molecular-weight electron acceptor, which together form what is known as a bulk heterojunction (BHJ) morphology. Traditionally, fullerene-based acceptors have been used for this purpose. In recent years, the development of new acceptor molecules, so-called non-fullerene acceptors (NFAs), has breathed new life into organic solar cell research, enabling record efficiencies close to 19%. Today, NFA-based solar cells are approaching their inorganic competitors in terms of photocurrent generation, but they lag behind in terms of open-circuit voltage (V_OC). Interestingly, the V_OC of these cells benefits from small offsets of orbital energies at the donor-NFA interface, although large energy offsets were previously considered critical for efficient charge carrier generation. In addition, there are several other electronic and structural features that distinguish NFAs from fullerenes.
My thesis focuses on understanding the interplay between the unique attributes of NFAs and the physical processes occurring in solar cells. By combining various experimental techniques with drift-diffusion simulations, the generation of free charge carriers as well as their recombination in state-of-the-art NFA-based solar cells is characterized. For this purpose, solar cells based on the donor polymer PM6 and the NFA Y6 have been investigated. The generation of free charge carriers in PM6:Y6 is efficient and independent of electric field and excitation energy. Temperature-dependent measurements show a very low activation energy for photocurrent generation (about 6 meV), indicating barrierless charge carrier separation. Theoretical modeling suggests that Y6 molecules have large quadrupole moments, leading to band bending at the donor-acceptor interface and thereby reducing the electrostatic Coulomb dissociation barrier. In this regard, this work identifies poor extraction of free charges in competition with nongeminate recombination as a dominant loss process in PM6:Y6 devices. Subsequently, the spectral characteristics of PM6:Y6 solar cells were investigated with respect to the dominant process of charge carrier recombination. It was found that the photon emission under open-circuit conditions can be almost entirely attributed to the occupation and recombination of Y6 singlet excitons. Nevertheless, the recombination pathway via the singlet state contributes only 1% to the total recombination, which is dominated by the charge transfer state (CT-state) at the donor-acceptor interface. Further V_OC gains can therefore only be expected if the density and/or recombination rate of these CT-states can be significantly reduced. Finally, the role of energetic disorder in NFA solar cells is investigated by comparing Y6 with a structurally related derivative, named N4. Layer morphology studies combined with temperature-dependent charge transport experiments show significantly lower structural and energetic disorder in the case of the PM6:Y6 blend. For both PM6:Y6 and PM6:N4, disorder determines the maximum achievable V_OC, with PM6:Y6 benefiting from improved morphological order. Overall, the obtained findings point to avenues for the realization of NFA-based solar cells with even smaller V_OC losses. Further reduction of nongeminate recombination and energetic disorder should result in organic solar cells with efficiencies above 20% in the future.
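To connect the emission measurements described above with voltage losses (an explanatory addition, not part of the abstract): in the reciprocity framework commonly applied to solar cells, the non-radiative open-circuit-voltage loss follows directly from the external electroluminescence quantum efficiency.

```latex
% Reciprocity-based expression for the non-radiative V_OC loss
% (standard in the photovoltaics literature; shown here for illustration only)
\begin{equation}
  V_{OC} = V_{OC}^{\mathrm{rad}} - \frac{k_B T}{q}
           \ln\!\left(\frac{1}{\mathrm{EQE}_{\mathrm{EL}}}\right),
\end{equation}
```

Here V_OC^rad is the open-circuit voltage in the radiative limit and EQE_EL the external electroluminescence quantum efficiency; a larger share of emissive singlet recombination raises EQE_EL and thus lowers the non-radiative loss.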
The Arctic is changing rapidly and permafrost is thawing. Ice-rich permafrost in particular, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which amplify climate warming.
An assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is therefore urgently needed. My research revolves around the question of how permafrost thaw affects OM storage. More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of the OM of frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that the biomarker proxies are useful for assessing the source of the OM and for distinguishing between OM derived from terrestrial higher plants, aquatic plants, and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. Greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had established themselves in the recently thawed sediments, but not yet in the still-frozen sediments.
My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of and processes in deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this number will increase rapidly over the next decades, accompanied by continued urbanisation of cities located in mountain valleys. One manifestation of this ongoing socio-economic change in mountain societies is a rise in settlement areas and transportation infrastructure, while increased power demand fuels the construction of hydropower plants along rivers in the high-mountain regions of the world. However, the physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One potential implication of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events have caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were induced by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events are still scarce. Projections of cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase in the water volume stored in meltwater lakes as well as the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal's second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. For this, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier-mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to release GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet the effects of monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake's rapid growth in surface area as an indicator of a pending outburst; a metric that has been applied in regional GLOF assessments worldwide.
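To make the hierarchical modelling approach described above more concrete (a minimal sketch under invented assumptions, not the actual model or data of the thesis), a two-level Bayesian logistic regression with region-specific intercepts can be written in Python with PyMC roughly as follows; the predictor set, priors, and all numbers are placeholders.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)

# Toy data (invented): 500 lakes in 5 regions, 3 standardized predictors
# (e.g. lake area, catchment area, glacier-mass balance); y = 1 if a GLOF was documented
n_lakes, n_regions, n_predictors = 500, 5, 3
region_idx = rng.integers(0, n_regions, size=n_lakes)
x = rng.normal(size=(n_lakes, n_predictors))
y = rng.binomial(1, 0.02, size=n_lakes)  # GLOFs are rare events

with pm.Model() as glof_model:
    # Level 2: region-specific intercepts drawn from a common distribution
    mu_a = pm.Normal("mu_a", mu=0.0, sigma=1.5)
    sigma_a = pm.HalfNormal("sigma_a", sigma=1.0)
    a_region = pm.Normal("a_region", mu=mu_a, sigma=sigma_a, shape=n_regions)

    # Level 1: weakly informative priors on the predictor weights
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=n_predictors)

    # Logistic link: probability that a given lake released a GLOF
    logit_p = a_region[region_idx] + pm.math.dot(x, beta)
    pm.Bernoulli("glof", logit_p=logit_p, observed=y)

    idata = pm.sample(draws=2000, tune=2000, target_accept=0.9)
```

The posterior distributions of beta then indicate how credible each predictor is, while the spread of the region-level intercepts captures spatial variability in susceptibility.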
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological (1,000 m³ s-1) to cataclysmic outburst floods (600,000 m³ s-1), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s-1 in the upper Seti Khola, which attenuated to 500 m³ s-1 when arriving in Pokhara's suburbs some 15 km downstream.
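As a point of orientation (an illustrative aside, not from the abstract): steady-state one-dimensional models such as HEC-RAS balance energy between cross-sections and parameterise friction with Manning's roughness, so a reach-averaged peak discharge can be sanity-checked with Manning's equation; the symbols below are generic and not specific to the thesis.

```latex
% Manning's equation for steady, uniform open-channel flow (SI units)
\begin{equation}
  Q = \frac{1}{n}\, A\, R^{2/3}\, S^{1/2},
\end{equation}
```

Here Q is the discharge (m³ s-1), n Manning's roughness coefficient, A the flow cross-sectional area, R the hydraulic radius, and S the energy (or channel-bed) slope.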
Two-dimensional flow simulations in ANUGA with peak discharges that are orders of magnitude higher show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative evidence for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola – producing floods with peak discharges of >50,000 m³ s-1.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s-1, show that the relative inundation hazard is highest in Pokhara's north-western suburbs. There, the potential effects of hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Yet, along this reach, informal settlements and gravel-mining activities lie close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally by a factor of three to twenty in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outburst at a regional scale, and into the flow dynamics of flood waves released by past events at a local scale, which can aid future hazard assessments across scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements may provide valuable input to anticipatory assessments of multiple risks in the Pokhara valley.