It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach holds that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in greater facilitation in the processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrases vs. pronouns as the embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity in neither the number feature nor the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as a cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to generate a detectable GeV-scale cascade emission, which is missing in the observations. The suppression of this secondary cascade implies either the deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam due to the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to significantly deflect the pair beams, reducing the flux of the secondary cascade below the observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam losing energy to the instability before inverse Compton cooling sets in. This dissertation consists of two studies on the role of the instability in the evolution of blazar-induced beams.
Firstly, we investigated the effect of a sub-fG IGMF on the beam energy loss by the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we know of no viable process that could explain the observed absence of GeV-scale cascade emission; hence, this range of field strengths can be excluded.
Secondly, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angles significantly without any significant energy loss, thus confirming a recent feedback study based on a simplified one-dimensional beam distribution. However, the narrowing diffusion feedback on beam particles with Lorentz factors below 10^6 might become relevant even though it is initially negligible. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state, in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This new intrinsic scattering of the cascade can result in time delays of around ten years, thus potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse Compton cooling and simulating the beam-plasma system at different points in the IGM.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators seeking to increase engagement and reach larger audiences. Traditional user-engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy value of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
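The thesis abstract does not spell out the architecture in code; as an illustration of the general idea, here is a minimal NumPy sketch of GCN-style message passing over a toy user-tweet engagement graph. The graph, feature sizes, class count, and untrained random weights are all invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy engagement graph: nodes 0-3 are users, nodes 4-6 are tweets.
# Node features stand in for pre-trained text embeddings / user profiles.
n_nodes, n_feat, n_classes = 7, 8, 4          # classes: like/retweet/reply/quote
X = rng.normal(size=(n_nodes, n_feat))        # node feature matrix
edges = [(0, 4), (1, 4), (1, 5), (2, 5), (3, 6), (2, 6)]

# Symmetrically normalised adjacency with self-loops (GCN-style propagation).
A = np.eye(n_nodes)
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

# One message-passing layer followed by a second propagation + linear readout.
W1 = rng.normal(size=(n_feat, 16))
W2 = rng.normal(size=(16, n_classes))
H = np.maximum(A_norm @ X @ W1, 0.0)          # ReLU(A_norm X W1)
logits = A_norm @ H @ W2

# Softmax over the four interaction types for each node.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs.shape)  # one distribution over interaction types per node
```

In a trained model the weights would be learned from labeled interactions; the sketch only shows how node features and graph structure are combined in one forward pass.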
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of both the features of a node and an aggregation of the features of a node’s neighborhood to classify social bot accounts. Our final results indicate a 6% improvement in the area-under-the-curve score in the final predictions through the use of GNNs.
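Combining a node's own features with an aggregation of its neighborhood's features is the core of GraphSAGE-style models; whether the thesis uses exactly this variant is not stated in the abstract. A minimal sketch, with an invented toy follower graph and an untrained linear scorer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy follower graph: account 0 is a suspected bot, 1-4 are other accounts.
n, d = 5, 6
X = rng.normal(size=(n, d))                   # per-account features
neighbours = {0: [1, 2, 3], 1: [0], 2: [0, 4], 3: [0], 4: [2]}

def sage_embed(X, neighbours):
    """Concatenate each node's own features with the mean of its
    neighbours' features (GraphSAGE-style mean aggregation)."""
    agg = np.stack([X[neighbours[i]].mean(axis=0) for i in range(len(X))])
    return np.concatenate([X, agg], axis=1)   # shape (n, 2d)

Z = sage_embed(X, neighbours)
w = rng.normal(size=Z.shape[1])               # untrained linear scorer
bot_score = 1.0 / (1.0 + np.exp(-Z @ w))      # sigmoid -> bot probability
print(Z.shape, bot_score.shape)
```

A trained classifier would learn the scoring weights (and typically the aggregation weights) from labeled bot/human accounts; the point here is only the self-plus-neighborhood feature combination.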
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulations, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study includes both a logistical-organizational effort and data analysis. As for the former, three GNSS ground stations were installed in remote locations in northwestern Argentina, where no third-party data were available, to acquire observations.
The methodological development for the observation of the climate variables of soil moisture and water vapour is independent and relies on different approaches. The soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be operationally employed. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. On the other hand, water-vapour monitoring with GNSS observations is a well-established technique that is utilized operationally. Hence, the scope of this study is to conduct a meteorological analysis by examining the along-the-zenith air-moisture levels and introducing indices related to the azimuthal gradient.
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
Background: Societies worldwide have become more diverse yet continue to be inequitable. Understanding how youth growing up in these societies are socialized and consequently develop racial knowledge has important implications not only for their well-being but also for building more just societies. Importantly, there is a lack of research on these topics in Germany and Europe in general.
Aim and Method: The overarching aim of the dissertation is to investigate 1) where and how ethnic-racial socialization (ERS) happens in inequitable societies and 2) how it relates to youth’s development of racial knowledge, which comprises racial beliefs (e.g., prejudice, attitudes), behaviors (e.g., actions preserving or disrupting inequities), and identities (e.g., inclusive, cultural). Guided by developmental, cultural, and ecological theories of socialization and development, I first explored how family, as a crucial socialization context, contributes to the preservation or disruption of racism and xenophobia in inequitable societies through its influence on children’s racial beliefs and behaviors. I conducted a literature review and developed a conceptual model bridging research on ethnic-racial socialization and intergroup relations (Study 1). After documenting the lack of research on socialization and development of racial knowledge within and beyond family contexts outside of the U.S., I conducted a qualitative study to explore ERS in Germany through the lens of racially marginalized youth (Study 2). Then, I conducted two quantitative studies to explore the separate and interacting relations of multiple (i.e., family, school) socialization contexts for the development of racial beliefs and behaviors (Study 3), and identities (Studies 3, 4) in Germany. Participants of Study 2 were 26 young adults (aged between 19 and 32) of Turkish, Kurdish, East, and Southeast Asian heritage living across different cities in Germany. Study 3 was conducted with 503 eighth graders of immigrant and non-immigrant descent (Mage = 13.67) in Berlin; Study 4 included 311 early to mid-adolescents of immigrant descent (Mage = 13.85) in North Rhine-Westphalia with diverse cultural backgrounds.
Results and Conclusion: The findings revealed that privileged or marginalized positions of families in relation to their ethnic-racial and religious background in society entail differential experiences and are thus an important determining factor for the content/process of socialization and the development of youth’s racial knowledge. Until recently, ERS research mostly focused on investigating how racially marginalized families have been sources of support for their children in resisting racism and how racially privileged families contribute to the transmission of information upholding racism (Study 1). ERS for racially marginalized youth in Germany centered heritage culture, discrimination, and resistance strategies to racism; yet the resistance strategies transmitted to youth mostly helped them survive racism (e.g., by working hard), thereby upholding it, rather than liberating them from racism by disrupting it (e.g., through self-advocacy; Study 2). Furthermore, when families and schools foster heritage and intercultural learning, both contexts may separately promote stronger identification with heritage culture and German identities, and more prosocial intentions towards disadvantaged groups (i.e., refugees) among youth (Studies 3, 4). However, equal treatment in the school context led to mixed results: equal treatment was either unrelated to inclusive identity, or positively related to German and negatively related to heritage culture identities (Studies 3, 4). Additionally, youth receiving messages highlighting strained and preferential intergroup relations at home while attending schools promoting assimilation may develop a stronger heritage culture identity (Study 4). In conclusion, ERS happened across various social contexts (i.e., family, community centers, school, neighborhood, peer).
ERS promoting heritage and intercultural learning in at least one social context (family or school) might foster youth’s racial knowledge, manifesting in stronger belonging to multiple cultures and in prosocial intentions toward disadvantaged groups. However, there is a need for ERS aimed at increasing awareness of discrimination across youth’s social contexts and at teaching youth resistance strategies for liberation from racism.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism), while alleviating the need for measurement of nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a certain metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii have been developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general finding pointed at a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve accuracy of predictions of protein abundances compared to in vitro values for turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single feature models and yielded good prediction results without relying on experimental transcriptomic data. 
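In the literature this thesis builds on, in vivo apparent catalytic rates (k_app) are commonly estimated by dividing the flux through a reaction by the abundance of its catalyzing enzyme, condition by condition, with the maximum across conditions serving as a proxy for the in vivo turnover number. A sketch of that arithmetic with made-up numbers (the conditions and values are not from the thesis):

```python
# Apparent catalytic rates: k_app = v / E per condition; k_app,max across
# conditions serves as a proxy for the in vivo turnover number.
fluxes = {            # mmol gDW^-1 h^-1 for one reaction, per growth condition
    "glucose":   1.8,
    "acetate":   0.6,
    "anaerobic": 1.1,
}
enzyme = {            # matched enzyme abundances, mmol gDW^-1 (from proteomics)
    "glucose":   4.0e-4,
    "acetate":   3.0e-4,
    "anaerobic": 5.0e-4,
}

k_app = {c: fluxes[c] / enzyme[c] for c in fluxes}       # units: h^-1
k_app_max = max(k_app.values())                          # in vivo k_cat proxy

for c, k in sorted(k_app.items()):
    print(f"{c:10s} k_app = {k:8.1f} h^-1")
print(f"k_app,max = {k_app_max:.1f} h^-1")
```

These per-reaction values are what a protein-constrained model would then use to translate a predicted flux into a minimum required enzyme amount.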
The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework to the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
The Arctic is a hotspot of ongoing global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than the global average. This amplified warming of the Arctic and the associated rapid changes of its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions within the Arctic climate. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget in association with air mass advection, which impact the seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Even though the mechanisms described above by which storms/cyclones impact Arctic sea ice are in principle known, there is a lack of statistical quantification of these effects. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. In order to further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled utilizing various data sets, including atmospheric and oceanic reanalysis data as well as a coupled model simulation and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones significantly impact SIC in the Atlantic Arctic Ocean from autumn to spring, while there are mostly no significant impacts in summer. The strength and the sign (SIC-decreasing or SIC-increasing) of the cyclone impacts strongly depend on the considered daily time scale and the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter. A typical pattern consisting of an initial dominance of dynamic sea-ice changes followed by enhanced thermodynamic ice growth after the cyclone passage was found. This enhanced ice growth after the cyclone passage most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes are strongly varying from region to region and month to month. The strongest trends in cyclone impacts on SIC are found in autumn in the Barents and Kara Seas. Here, the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following the cyclone passage have particularly weakened in the Barents Sea in autumn. As a consequence, previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decrease in mean sea-ice concentration and thickness) and near-surface air temperature are most important for changed cyclone impacts on SIC, while changes in cyclone properties (i.e. intensity) do not play a significant role.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore how such systems can use PMem. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. Using the example of stream processing engines, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations. This allows our prototype to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups, using the example of Compute Express Link (CXL).
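Viper's actual design is considerably more involved, but the hybrid split it embodies (persistent data tier, volatile index tier) can be caricatured in a few lines. In this toy sketch, an append-only file stands in for the PMem-resident data log and a Python dict for the DRAM-resident offset index; all names and record formats are invented for illustration.

```python
import os
import struct
import tempfile

class HybridKV:
    """Toy hybrid key-value store: an append-only file stands in for the
    persistent (PMem-like) data log, a plain dict for the volatile
    (DRAM-like) offset index; the index would be rebuilt on recovery."""

    def __init__(self, path):
        self.index = {}                       # volatile: key -> file offset
        self.log = open(path, "ab+")          # persistent: append-only records

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        # Record layout: [key length][value length][key bytes][value bytes].
        self.log.write(struct.pack("II", len(key), len(value)) + key + value)
        self.log.flush()                      # persistence point
        self.index[key] = offset              # index update is volatile

    def get(self, key: bytes) -> bytes:
        self.log.seek(self.index[key])
        klen, vlen = struct.unpack("II", self.log.read(8))
        return self.log.read(klen + vlen)[klen:]

path = os.path.join(tempfile.mkdtemp(), "kv.log")
kv = HybridKV(path)
kv.put(b"user:1", b"alice")
kv.put(b"user:2", b"bob")
print(kv.get(b"user:1"))  # b'alice'
```

The design choice this illustrates is that only the data records need to be durable; the lookup structure can live in fast volatile memory because it is reconstructible by scanning the log.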
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
The experience of premenstrual syndrome (PMS) affects up to 90% of individuals with an active menstrual cycle and involves a spectrum of aversive physiological and psychological symptoms in the days leading up to menstruation (Tschudin et al., 2010). Despite its high prevalence, the precise origins of PMS remain elusive, with influences ranging from hormonal fluctuations to cognitive, social, and cultural factors (Hunter, 2007; Matsumoto et al., 2013).
Biologically, hormonal fluctuations, particularly in gonadal steroids, are commonly believed to be implicated in PMS, with varying susceptibility to these fluctuations across individuals and cycles considered the central factor (Rapkin & Akopians, 2012). Allopregnanolone (ALLO), a neuroactive steroid and progesterone metabolite, has emerged as a potential link to PMS symptoms (Hantsoo & Epperson, 2020). ALLO is a positive allosteric modulator of the GABAA receptor, influencing inhibitory communication (Rupprecht, 2003; Andréen et al., 2006). Different susceptibility to ALLO fluctuations throughout the cycle may lead to reduced GABAergic signal transmission during the luteal phase of the menstrual cycle.
The GABAergic system's broad influence affects a number of physiological systems; one consistent finding is a reduction in vagally mediated heart rate variability (vmHRV) during the luteal phase (Schmalenberger et al., 2019). This reduction in vmHRV is more pronounced in individuals with high PMS symptoms (Baker et al., 2008; Matsumoto et al., 2007). Fear conditioning studies have shown inconsistent associations with cycle phases, suggesting a complex interplay between physiological parameters and PMS-related symptoms (Carpenter et al., 2022; Epperson et al., 2007; Milad et al., 2006).
The neurovisceral integration model posits that vmHRV reflects the capacity of the central autonomic network (CAN), which is responsible for regulatory processes on behavioral, cognitive, and autonomic levels (Thayer & Lane, 2000, 2009). Fear learning, mediated within the CAN, is suggested to be indicative of vmHRV's capacity for successful
regulation (Battaglia & Thayer, 2022). Given the GABAergic mediation of central inhibitory functional connectivity in the CAN, which may be affected by ALLO fluctuations, this thesis proposes that fluctuating CAN activity in the luteal phase contributes to diverse aversive symptoms in PMS.
A research program was designed to empirically test these propositions. Study 1 investigated fear discrimination during different menstrual cycle phases and its interaction with vmHRV, revealing nuanced effects on acoustic startle response and skin conductance response. While there was heightened fear discrimination in acoustic startle responses in participants in the luteal phase, there was an interaction between menstrual cycle phase and vmHRV in skin conductance responses. In this measure, heightened fear discrimination during the luteal phase was only visible in individuals with high resting vmHRV; those with low vmHRV showed reduced fear discrimination and higher overall responses.
Despite the high prevalence of PMS among menstruating people, very few tools are available to reliably assess its symptoms in the German-speaking area. Study 2 aimed to close this gap by translating and validating a German version of the short form of the Premenstrual Assessment Form (Allen et al., 1991), providing a reliable tool for future investigations in the German-speaking research area.
Study 3 employed a diary study paradigm to explore daily associations between vmHRV and PMS symptoms. The results showed clear simultaneous fluctuations between the two constructs with a peak in PMS and a low point in vmHRV a few days before menstruation onset. The association between vmHRV and PMS was driven by psychological PMS symptoms.
Based on the theoretical considerations regarding the neurovisceral perspective on PMS, another interesting construct to consider is attentional control, as it is closely related to functions of the CAN. Study 4 delved into attentional control and vmHRV differences between menstrual cycle phases, demonstrating an interaction between cycle phase and PMS symptoms. In a pilot study, we found reduced vmHRV and attentional control during the luteal phase only in participants who reported strong PMS.
While Studies 1-4 provided evidence for the mechanisms underlying PMS, Studies 5 and 6 investigated short- and long-term intervention protocols to ameliorate PMS symptomatology. Study 5 explored the potential of heart rate variability biofeedback (HRVB) in alleviating PMS symptoms and a number of other outcome measures. In a waitlist-control design, participants underwent a 4-week smartphone-based HRVB intervention. The results revealed positive effects on PMS, with larger effect sizes on psychological symptoms, as well as on depressive symptoms, anxiety/stress and attentional control.
Finally, Study 6 examined the acute effects of HRVB on attentional control. The study found a positive impact, but only in highly stressed individuals.
The thesis, based on this comprehensive research program, expands our understanding of PMS as an outcome of CAN fluctuations mediated by GABAA receptor reactivity. The results largely support the model. These findings not only deepen our understanding of PMS but also offer potential avenues for therapeutic interventions. The promising results of smartphone-based HRVB training suggest a non-pharmacological approach to managing PMS symptoms, although further research is needed to confirm its efficacy.
In conclusion, this thesis illuminates the complex web of factors contributing to PMS, providing valuable insights into its etiological underpinnings and potential interventions. By elucidating the relationships between hormonal fluctuations, CAN activity, and psychological responses, this research contributes to more effective treatments for individuals grappling with the challenges of PMS. The findings hold promise for improving the quality of life for those affected by this prevalent and often debilitating condition.
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as (object-verb-subject) OVS and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences in Study 1 and German passives in Study 2. Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions, language-internal and -external factors were investigated as potential predictors for children’s sentence comprehension and processing.
Throughout the literature, there are contradicting findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. Performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used in the passive study, with respect to subject-(finite) verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors in children’s sentence comprehension were considered. The parents’ education, the number of older siblings and language experience variables were derived from a language background questionnaire completed by parents. Scores of receptive vocabulary and grammar, visual and short-term memory and reasoning ability were measured by means of standardized tests. It was shown that greater German language experience in bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
Due to their sessile lifestyle, plants are constantly exposed to pathogens and possess a multi-layered immune system that prevents infection. The first layer of immunity, called pattern-triggered immunity (PTI), enables plants to recognise highly conserved molecules that are present in pathogens, resulting in immunity against non-adapted pathogens. Adapted pathogens interfere with PTI; however, the second layer of plant immunity can recognise these virulence factors, resulting in a constant evolutionary battle between plant and pathogen. Xanthomonas campestris pv. vesicatoria (Xcv) is the causal agent of bacterial leaf spot disease in tomato and pepper plants. Like many Gram-negative bacteria, Xcv possesses a type-III secretion system, which it uses to translocate type-III effectors (T3E) into plant cells. Xcv has over 30 T3Es that interfere with the immune response of the host and are important for successful infection. One such effector is the Xanthomonas outer protein M (XopM), which shows no similarity to any other known protein. Characterisation of XopM and its role in virulence was the focus of this work.
While screening a tobacco cDNA library for potential host target proteins, the vesicle-associated membrane protein (VAMP)-associated protein 1-2 like (VAP12) was identified. The interaction between XopM and VAP12 was confirmed in the model species Nicotiana benthamiana and Arabidopsis as well as in tomato, a Xcv host. As plants possess multiple VAP proteins, it was determined that the interaction of XopM and VAP is isoform specific.
It could be confirmed that the major sperm protein (MSP) domain of NtVAP12 is sufficient for binding XopM and that binding can be disrupted by substituting one amino acid (T47) within this domain. Most VAP interactors have at least one FFAT (two phenylalanines [FF] in an acidic tract)-related motif. Screening the amino acid sequence of XopM revealed that it has two FFAT-related motifs. Substitution of the second residue of each FFAT motif (Y61/F91) disrupts NtVAP12 binding, suggesting that these motifs cooperatively mediate this interaction. Structural modelling using AlphaFold indicated that the unstructured N-terminus of XopM binds NtVAP12 at its MSP domain, which was further confirmed by the generation of truncated XopM variants.
Infection of pepper leaves with a XopM-deficient Xcv strain did not result in a reduction of virulence in comparison to the Xcv wildtype, showing that the function of XopM during infection is redundant. Virus-induced gene silencing of NbVAP12 in N. benthamiana plants also did not affect Xcv virulence, which further indicated that the interaction with VAP12 is non-essential for Xcv virulence. Despite such findings, ectopic expression of wildtype XopM and XopMY61A/F91A in transgenic Arabidopsis seedlings enhanced the growth of a non-pathogenic Pseudomonas syringae pv. tomato (Pst) DC3000 strain. XopM was found to interfere with the PTI response, allowing Pst growth independent of its binding to VAP. Furthermore, transiently expressed XopM could suppress reactive oxygen species (ROS; one of the earliest PTI responses) production in N. benthamiana leaves. The FFAT double mutant XopMY61A/F91A as well as the C-terminal truncation variant XopM106-519 could still suppress the ROS response, while the N-terminal variant XopM1-105 did not. Suppression of ROS production is therefore independent of VAP binding. In addition, tagging the C-terminal variant of XopM with a nuclear localisation signal (NLS; NLS-XopM106-519) resulted in significantly higher ROS production than the membrane-localising XopM106-519 variant, indicating that XopM-induced ROS suppression is localisation dependent.
To further characterise XopM, mass spectrometry techniques were used to identify post-translational modifications (PTM) and potential interaction partners. PTM analysis revealed that XopM contains up to 21 phosphorylation sites, which could influence VAP binding. Furthermore, proteins of the Rab family were identified as potential plant protein interaction partners. Rab proteins serve a multitude of functions including vesicle trafficking and have been previously identified as T3E host targets. Taking this into account, a model of virulence of XopM was proposed, with XopM anchoring itself to VAP proteins to potentially access plasma membrane associated proteins. XopM possibly interferes with vesicle trafficking, which in turn suppresses ROS production through an unknown mechanism.
In this work it was shown that XopM targets VAP proteins. The data collected suggests that this T3E uses VAP12 to anchor itself into the right place to carry out its function. While more work is needed to determine how XopM contributes to virulence of Xcv, this study sheds light onto how adapted pathogens overcome the immune response of their hosts. It is hoped that such knowledge will contribute to the development of crops resistant to Xcv in the future.
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proof-of-concepts to indispensable tools for decision-making at the global, national and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address such gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, consistent with the limits of coarse hydrodynamic models reported by others. The substitution of downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model without the need for expensive modelling or expertise.
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Of the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
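The two edge-of-inundation biases named above can be made concrete with a toy example; the terrain and depth values, and the simplified aggregation routine (averaging wet-cell water surface elevations only), are invented for illustration and are not the study's exact setup.

```python
import numpy as np

# Two fine cells at the edge of inundation, merged into one coarse cell.
ground = np.array([1.0, 6.0])   # hypothetical terrain elevation (m); cell 2 is high ground
depth  = np.array([2.0, 0.0])   # hypothetical water depth (m); cell 2 is dry
wse    = ground + depth         # water surface elevation, meaningful only where wet

true_wet_fraction = (depth > 0).mean()   # 0.5 of the patch is actually wet
true_volume = depth.sum()                # 2.0 (per unit cell area)

# Route 1: aggregate the DEPTH grid. Coarse depth 1.0 m > 0, so the whole
# coarse cell is flagged wet -> inundated AREA is positively biased.
coarse_depth = depth.mean()

# Route 2: aggregate the WSE grid (averaging wet cells only, one common
# choice). Coarse WSE 3.0 m sits below the coarse terrain 3.5 m, so the
# coarse cell comes out dry -> flood VOLUME is negatively biased.
coarse_wse = wse[depth > 0].mean()
coarse_ground = ground.mean()
coarse_depth_from_wse = max(coarse_wse - coarse_ground, 0.0)
```

Away from the inundation edge (all cells wet or all dry), both routines are exact, which is why the bias concentrates in edge regions.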
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen’s inequality, a well-known 1906 mathematical proof, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen’s proof in this new context, the results show that the typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as “hot spots of risk” in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen’s inequality to explain the overestimates reported elsewhere and offers advice to modellers on minimizing such artifacts.
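The Jensen's-inequality mechanism can be demonstrated in a few lines; the damage function and depth distribution below are hypothetical stand-ins, not the dissertation's actual curves or data.

```python
import numpy as np

# Hypothetical concave depth-damage curve (a stand-in, not the study's
# actual function): damage rises steeply at shallow depths, capped at 1.
def damage(depth):
    return np.minimum(np.sqrt(np.maximum(depth, 0.0)) / 2.0, 1.0)

rng = np.random.default_rng(42)
# Hypothetical per-building water depths in metres.
depths = rng.exponential(scale=1.0, size=100_000)

per_asset = damage(depths).mean()   # apply the function per asset, then average
aggregated = damage(depths.mean())  # apply it once to the aggregated (mean) depth

# Jensen's inequality for concave f: E[f(X)] <= f(E[X]), so applying a
# per-asset function to aggregated exposure overestimates mean damage.
print(per_asset, aggregated)
```

The gap between the two estimates grows with the spread of depths within an aggregation unit, which is one reason the bias is spatially heterogeneous.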
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale-transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly owing to their species-specific electric organ discharges (EODs). These discharges are produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism driving species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and across ontogeny. I first produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared gene expression patterns between electric organs and skeletal muscles in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3 and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g. KCNJ2, KLF5, KCNK6 and KCNQ5, possibly contributes to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene regulating EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question reviews fundamental model prerequisites for the representation of permafrost soils within land surface modeling. It includes a first-of-its-kind comparison of LSMs in CMIP6 to reveal their differences and shortcomings in key permafrost physics parameters. Overall, each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system. Choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to cover the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis seeks to assess its influence on simulated soil temperatures. Comparing this method to observational datasets reveals a significant reduction in the cold bias that was present in the control run. In most regions, the Sturm run exhibits a substantial decrease in the cold bias. However, there is a distinctive overshoot with a warm bias observed in mountainous areas. The Sturm experiment effectively addressed the overestimation of permafrost extent in the control run, albeit resulting in a substantial reduction in permafrost extent over mountainous areas. ALT results remain relatively consistent compared to the control run. These outcomes align with our initial hypothesis, which anticipated that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
Overcoming natural biomass limitations in gram-negative bacteria through synthetic carbon fixation
(2024)
The carbon demands of an ever-increasing human population and the concomitant rise in net carbon emissions require CO2-sequestering approaches for the production of carbon-containing molecules. Microbial production of carbon-containing products from plant-based sugars could replace current fossil-based production. However, this form of sugar-based microbial production directly competes with human food supply and natural ecosystems. Instead, one-carbon feedstocks derived from CO2 and renewable energy have been proposed as an alternative. The one-carbon molecule formate is a stable, readily soluble and safe-to-store energetic mediator that can be electrochemically generated from CO2 and (excess off-peak) renewable electricity. Formate-based microbial production could represent a promising approach for a circular carbon economy. However, easy-to-engineer and efficient formate-utilizing microbes are lacking. Multiple synthetic metabolic pathways have been designed for better-than-nature carbon fixation. Among them, the reductive glycine pathway was proposed as the most efficient pathway for aerobic formate assimilation. While some of these pathways have been successfully engineered in microbial hosts, these synthetic strains have so far not exceeded the performance of natural strains. In this work, I engineered and optimized two different synthetic formate assimilation pathways in gram-negative bacteria to exceed the limits of a natural carbon fixation pathway, the Calvin cycle.
The first chapter solidified Cupriavidus necator as a promising formatotrophic host to produce value-added chemicals. The formate tolerance of C. necator was assessed and a production pathway for crotonate established in a modularized fashion. Last, bioprocess optimization was leveraged to produce crotonate from formate at a titer of 148 mg/L.
In the second chapter, I chromosomally integrated and optimized the synthetic reductive glycine pathway in C. necator using a transposon-mediated selection approach. The insertion methodology allowed selection for condition-specific tailored pathway expression as improved pathway performance led to better growth. I then showed my engineered strains to exceed the biomass yields of the Calvin cycle utilizing wildtype C. necator on formate. This demonstrated for the first time the superiority of a synthetic formate assimilation pathway and by extension of synthetic carbon fixation efforts as a whole.
In chapter 3, I engineered a segment of a synthetic carbon fixation cycle in Escherichia coli. The GED cycle was proposed as a Calvin cycle alternative that does not perform a wasteful oxygenation reaction and is more energy efficient. The pathway's simple architecture and reasonable driving force made it a promising candidate for enhanced carbon fixation. I created a deletion strain that coupled growth to carboxylation via the GED pathway segment. The CO2 dependence of the engineered strain and 13C-tracer analysis confirmed operation of the pathway in vivo.
In the final chapter, I present my efforts to implement the GED cycle also in C. necator, which might be a better-suited host, as it is accustomed to formatotrophic and hydrogenotrophic growth. To provide the carboxylation substrate in vivo, I engineered C. necator to utilize xylose as a carbon source and created a selection strain for carboxylase activity. I verified the activity of the key enzyme, the carboxylase, in the decarboxylative direction. Although CO2-dependent growth of the strain was not obtained, I showed that all enzymes required for operation of the GED cycle are active in vivo in C. necator.
I then evaluate my success in engineering a linear and a cyclical one-carbon fixation pathway in two different microbial hosts. The linear reductive glycine pathway presents itself as a much simpler metabolic solution for formate-dependent growth than the sophisticated establishment of hard-to-balance carbon fixation cycles. Last, I highlight advantages and disadvantages of C. necator as an upcoming microbial benchmark organism for synthetic metabolism efforts and give an outlook on its potential for the future of C1-based manufacturing.
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater. Here, tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience the focus lies on the breakthrough curve (BTC), which is the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins that diffuse freely inside axons of neurons. Their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions. This is accompanied by a plateau in the MSD for initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, where advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting and long immobilisation durations persists also for the case with advection. We find a new clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD.
This regime emerges for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we physically explain with the variance of mobile durations. Finally, we generalize the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ~ t^(-1-mu) with 0<mu<1 and diverging mean immobilisation durations. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters for plotting the displacement distributions and the MSD. We find Gaussian normal diffusion at short times and long-time power-law decay of mobile mass accompanied by anomalous diffusion at long times. The long-time diffusion is subdiffusive in the advection-free setting, while it is either subdiffusive for 0<mu<1/2 or superdiffusive for 1/2<mu<1 when advection is present. In the long-time limit we show equivalence of our model to a bi-fractional diffusion equation.
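The basic two-state telegraph dynamics described above can be sketched with a minimal Monte Carlo simulation; the parameters below are illustrative and not fitted to the tau protein or biosensor experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the exponential (two-state telegraph) MIM with illustrative,
# unfitted parameters: tracers alternate between a mobile state (free
# diffusion with coefficient D) and an immobile (bound) state, with
# exponentially distributed residence times in each state.
D = 1.0
tau_mobile, tau_immobile = 1.0, 100.0  # long trapping vs. short mobile phases
T = 10.0                               # observation time
N = 20_000                             # number of tracers, all initially mobile

def total_mobile_time(T):
    """Accumulated time spent in the mobile state up to time T."""
    t, mobile_t, mobile = 0.0, 0.0, True
    while t < T:
        dur = rng.exponential(tau_mobile if mobile else tau_immobile)
        dur = min(dur, T - t)
        if mobile:
            mobile_t += dur
        t += dur
        mobile = not mobile
    return mobile_t

# Given its accumulated mobile time m, a tracer's displacement is exactly
# Gaussian: x(T) ~ Normal(0, 2*D*m). Mixing over the random m produces the
# heavier-than-Gaussian displacement statistics.
mob = np.array([total_mobile_time(T) for _ in range(N)])
x = rng.normal(0.0, np.sqrt(2.0 * D * mob))

msd = np.mean(x**2)                              # ~ 2*D*E[mobile time]
excess_kurtosis = np.mean(x**4) / msd**2 - 3.0   # > 0: non-Gaussian tails
```

With trapping times much longer than mobile times, most tracers immobilise once and stay trapped for the rest of the observation window, so the displacement distribution becomes a Gaussian mixed over an approximately exponential mobile-time distribution, i.e. close to a Laplace distribution.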
Galaxy morphology is a fossil record of how galaxies formed and evolved and can be regarded as a function of the dynamical state of a galaxy. It encodes the physical processes that dominate its evolutionary history, and is strongly aligned with physical properties like stellar mass, star formation rate and local environment. At distances of ∼50 and 60 kpc, the Magellanic Clouds represent the nearest interacting pair of dwarf irregular galaxies to the Milky Way, rendering them an important test bed for galaxy morphology in the context of galaxy interactions and the effect of the local environment in which they reside. The Large Magellanic Cloud is classified as the prototype for Magellanic Spiral galaxies, with one prominent spiral arm, an offset bar and an inclined rotating disc, while the Small Magellanic Cloud is classified as a dwarf Irregular galaxy and is known for its unstructured shape and large depth across the line of sight. Resolved stellar populations are powerful probes of a wide range of astrophysical phenomena; the proximity of the Magellanic Clouds allows us to resolve their stellar populations into individual stars that share coherent chemical and age distributions. The coherent properties of resolved stellar populations enable us to analyse them as a function of position within the Magellanic Clouds, offering a picture of the growth of the galaxies’ substructures over time and yielding a comprehensive view of their morphology. Furthermore, investigating the kinematics of the Magellanic Clouds offers valuable insights into their dynamics and evolutionary history. By studying the motions and velocities of stars within these galaxies, we can trace their past interactions, with the Milky Way or with each other, and unravel the complex interplay of forces that have influenced the Magellanic Clouds’ formation and evolution.
In Chapter 2, the VISTA survey of the Magellanic Clouds was employed to generate unprecedented high-resolution morphological maps of the Magellanic Clouds in the near-infrared. Utilising colour-magnitude diagrams and theoretical evolutionary models to segregate stellar populations, this approach enabled a comprehensive age tomography of the galaxies. It revealed previously uncharacterised features in their central regions at spatial resolutions of 0.13 kpc (Large Magellanic Cloud) and 0.16 kpc (Small Magellanic Cloud), and the findings showcased the impact of tidal interactions on their inner regions. Notably, the study highlighted the enhanced coherent structures in the Large Magellanic Cloud, shedding light on the significant role of the recent Magellanic Clouds’ interaction 200 Myr ago in shaping many of the fine structures. The Small Magellanic Cloud revealed asymmetry in younger populations and irregularities in intermediate-age ones, pointing towards the influence of past tidal interactions.
In Chapter 3, an examination of the outskirts of the Magellanic Clouds led to the identification of new substructures through the use of near-infrared photometry from the VISTA Hemisphere Survey and multi-dimensional phase-space information from Gaia. The distances and proper motions of these substructures were investigated. This analysis revealed the impact of past Magellanic Clouds’ interactions and the influence of the Milky Way’s tidal field on the morphology and kinematics of the Magellanic Clouds. A bi-modal distance distribution was identified within the luminosity function of the red clump stars in the Small Magellanic Cloud, notably in its eastern regions, with the foreground substructure being attributed to the Magellanic Clouds’ interaction around 200 Myr ago. Furthermore, associations with the Counter Bridge and Old Bridge were uncovered through the detection of background and foreground structures in various regions of the SMC.
In Chapter 4, a detailed kinematic analysis of the Small Magellanic Cloud was conducted using spectra from the European Southern Observatory Science Archive Facility. The study reveals distinct kinematics in the Wing and bar regions, attributed to interactions with the Large Magellanic Cloud and variations in star formation history. Notably, velocity disparities are observed in the bar’s young main sequence stars, aligning with specific star-forming episodes, and suggesting potential galactic stretching or tidal stripping, as corroborated by proper motion studies.
The reliance on fossil fuels has resulted in an abnormal increase in the concentration of greenhouse gases, contributing to the global climate crisis. In response, a rapid transition to renewable energy sources has begun, with lithium-ion batteries in particular playing a crucial role in the green energy transformation. However, concerns regarding the availability and geopolitical implications of lithium have prompted the exploration of alternative rechargeable battery systems, such as sodium-ion batteries. Sodium is significantly more abundant and more homogeneously distributed in the crust and seawater, making it easier and less expensive to extract than lithium. However, sodium-ion batteries are not yet sufficiently advanced to take the place of lithium-ion batteries, in part because the behavior of their components is not yet fully understood. Specifically, sodium exhibits a more metallic character and a larger ionic radius, resulting in an ion storage mechanism different from that utilized in lithium-ion batteries. Innovations in synthetic methods, post-treatments, and interface engineering clearly demonstrate the significance of developing high-performance carbonaceous anode materials for sodium-ion batteries. The objective of this dissertation is to present a systematic approach for fabricating efficient, high-performance, and sustainable carbonaceous anode materials for sodium-ion batteries. This involves a comprehensive investigation of different chemical environments and post-modification techniques.
This dissertation focuses on three main objectives. Firstly, it explores the significance of post-synthetic methods in designing interfaces. A conformal carbon nitride coating is deposited through chemical vapor deposition on a carbon electrode as an artificial solid-electrolyte interface layer, resulting in improved electrochemical performance. The interaction between the carbon nitride artificial interface and the carbon electrode enhances initial Coulombic efficiency, rate performance, and total capacity. Secondly, a novel process for preparing sulfur-rich carbon as a high-performing anode material for sodium-ion batteries is presented. The method uses an oligo-3,4-ethylenedioxythiophene precursor for a high-sulfur-content hard carbon anode to investigate the effect of the sulfur heteroatom on the electrochemical sodium storage mechanism. By optimizing the condensation temperature, a significant transformation in the materials’ nanostructure is achieved, leading to improved electrochemical performance. The use of in-operando small-angle X-ray scattering provides valuable insights into the interaction between micropores and sodium ions during the electrochemical processes. Lastly, the development of high-capacity hard carbon, derived from 5-hydroxymethyl furfural, is examined. This carbon material exhibits exceptional performance at both low and high current densities. Extensive electrochemical and physicochemical characterizations shed light on the sodium storage mechanism concerning the chemical environment, establishing the material’s stability and potential applications in sodium-ion batteries.
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing, taking place in the brain (cognitive processor). On the contrary, embodied cognition approaches define cognition as functionally linked to perception and action, in the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt, depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis in three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain’s Asymmetric Frequency Tuning hypothesis, which relies on hemispheric asymmetries for processing spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals support the brain’s asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never accomplish independent motoric exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs reported by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orientation during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting based on the type of assessment support different mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Second, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, the development, and the flexibility of our attitude to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, here I propose a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. The new perspective advances the current approaches to SNAs, both at the conceptual and at the methodological levels.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
Human-induced climate change is impacting the global water cycle by, e.g., causing changes in precipitation patterns, evapotranspiration dynamics, cryosphere shrinkage, and complex streamflow trends. These changes, coupled with the increased frequency and severity of extreme hydrometeorological events like floods, droughts, and heatwaves, contribute to hydroclimatic disasters, posing significant implications for local and global infrastructure, human health, and overall productivity.
In the tropical Andes, climate change is evident through warming trends, glacier retreats, and shifts in precipitation patterns, leading to altered risks of floods and droughts, e.g., in the upper Amazon River basin. Projections for the region indicate rising temperatures, potential glacier disappearance or substantial shrinkage, and altered streamflow patterns, highlighting challenges in water availability due to these expected changes and growing human water demand. The evolving trends in hydroclimatic conditions in the tropical Andes present significant challenges to socioeconomic and environmental systems, emphasizing the need for a comprehensive understanding to guide effective adaptation policies and strategies in response to the impacts of climate change in the region.
The main objective of this thesis is to investigate current hydrological dynamics in the tropical Andes of Peru and Ecuador and their responses to climate change. Given the scarcity of hydrometeorological data in the region, this objective was accomplished through a comprehensive data preparation and analysis in combination with hydrological modeling using the Soil and Water Assessment Tool (SWAT) eco-hydrological model. In this context, the initial steps involved assessing, identifying, and/or generating more reliable climate input data to address data limitations.
The thesis introduces RAIN4PE, a high-resolution precipitation dataset for Peru and Ecuador, developed by merging satellite, reanalysis, and ground-based data with surface elevation through the random forest method. Further adjustments of precipitation estimates were made for catchments influenced by fog/cloud water input on the eastern side of the Andes using streamflow data and applying the method of reverse hydrology. RAIN4PE surpasses other global and local precipitation datasets, showcasing superior reliability and accuracy in representing precipitation patterns and simulating hydrological processes across the tropical Andes. This establishes it as the optimal precipitation product for hydrometeorological applications in the region.
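The merging idea described above can be illustrated with a minimal sketch: a random forest learns the relationship between gauge precipitation and co-located predictors (satellite estimate, reanalysis estimate, elevation), and the fitted model then produces merged estimates. All variable names and the synthetic data are illustrative assumptions, not the actual RAIN4PE pipeline.

```python
# Illustrative random-forest merging of precipitation estimates
# (synthetic data; hypothetical predictor set, not the RAIN4PE workflow).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Predictors sampled at gauge locations.
satellite = rng.gamma(2.0, 3.0, n)            # satellite precipitation estimate
reanalysis = satellite + rng.normal(0, 1.0, n)  # reanalysis estimate
elevation = rng.uniform(0, 5000, n)           # surface elevation (m)

# Synthetic "ground truth" gauge precipitation with an elevation-dependent bias.
gauge = 0.7 * satellite + 0.2 * reanalysis + 0.001 * elevation + rng.normal(0, 0.5, n)

X = np.column_stack([satellite, reanalysis, elevation])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, gauge)

# In practice the fitted model would be applied on the full grid to obtain
# the merged precipitation field; here we just predict at the gauge sites.
merged = model.predict(X)
```

In the actual dataset, an additional reverse-hydrology adjustment corrects catchments with fog/cloud water input, a step this sketch does not attempt to reproduce.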
Due to the significant biases and limitations of global climate models (GCMs) in representing key atmospheric variables over the tropical Andes, this study developed regionally adapted GCM simulations specifically tailored for Peru and Ecuador. These simulations are known as the BASD-CMIP6-PE dataset, and they were derived using reliable, high-resolution datasets like RAIN4PE as reference data. The BASD-CMIP6-PE dataset shows notable improvements over raw GCM simulations, reflecting enhanced representations of observed climate properties and accurate simulation of streamflow, including high and low flow indices. This renders it suitable for assessing regional climate change impacts on agriculture, water resources, and hydrological extremes.
In addition to generating more accurate climatic input data, a reliable hydrological model is essential for simulating watershed hydrological processes. To tackle this challenge, the thesis presents an innovative multiobjective calibration framework integrating remote sensing vegetation data, baseflow index, discharge goodness-of-fit metrics, and flow duration curve signatures. In contrast to traditional calibration strategies relying solely on discharge goodness-of-fit metrics, this approach enhances the simulation of vegetation, streamflow, and the partitioning of flow into surface runoff and baseflow in a typical Andean catchment. The refined hydrological model calibration strategy was applied to conduct reliable simulations and understand current and future hydrological trajectories in the tropical Andes.
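The shape of such a multiobjective calibration score can be sketched as follows. The components (Nash-Sutcliffe efficiency, a baseflow-index error, and flow-duration-curve signature errors) are named in the text; the specific weighting, quantiles, and function names below are illustrative assumptions, not the framework's actual formulation.

```python
# Hypothetical aggregate calibration score combining a discharge
# goodness-of-fit metric, a baseflow-index error, and FDC signatures.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fdc_error(obs, sim, quantiles=(0.05, 0.5, 0.95)):
    """Mean relative error of flow-duration-curve signatures."""
    qo = np.quantile(obs, quantiles)
    qs = np.quantile(sim, quantiles)
    return float(np.mean(np.abs(qs - qo) / qo))

def calibration_score(obs, sim, bfi_obs, bfi_sim, weights=(0.5, 0.25, 0.25)):
    """Weighted aggregate (higher is better); weights are illustrative."""
    w1, w2, w3 = weights
    return w1 * nse(obs, sim) - w2 * abs(bfi_obs - bfi_sim) - w3 * fdc_error(obs, sim)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 50.0, 365)                 # synthetic daily streamflow
sim = obs * (1 + rng.normal(0, 0.1, 365))       # an imperfect simulation
score = calibration_score(obs, sim, bfi_obs=0.6, bfi_sim=0.55)
```

A calibration algorithm would then search the model's parameter space for the simulation maximizing such a score (or treat the components as separate objectives on a Pareto front).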
By establishing a region-suitable and thoroughly tested hydrological model with high-resolution and reliable precipitation input data from RAIN4PE, this study provides new insights into the spatiotemporal distribution of water balance components in Peru and transboundary catchments. Key findings include an estimate of Peru's total renewable freshwater resource (a total river runoff of 62,399 m³/s), with the Peruvian Amazon basin contributing 97.7%. Within this basin, the Amazon-Andes transition region emerges as a pivotal hotspot for water yield (precipitation minus evapotranspiration), characterized by abundant rainfall and lower atmospheric water demand/evapotranspiration. This finding underlines its paramount role in influencing the hydrological variability of the entire Amazon basin.
Subsurface hydrological pathways, particularly baseflow from aquifers, strongly influence water yield in lowland and Andean catchments, sustaining streamflow, especially during the extended dry season. Water yield demonstrates an elevation- and latitude-dependent increase in the Pacific Basin (catchments draining into the Pacific Ocean), while it follows an unimodal curve in the Peruvian Amazon Basin, peaking in the Amazon-Andes transition region. This observation indicates an intricate relationship between water yield and elevation.
In Amazon lowland rivers, particularly the Ucayali River, floodplains play a significant role in shaping streamflow seasonality by attenuating and delaying peak flows for up to two months during periods of high discharge. This observation underscores the critical importance of incorporating floodplain dynamics into hydrological simulations and river management strategies for accurate modeling and effective water resource management.
Hydrological responses vary across different land use types in high Andean catchments. Pasture areas exhibit the highest water yield, while agricultural areas and mountain forests show lower yields, emphasizing the importance of puna (high-altitude) ecosystems, such as pastures, páramos, and bofedales, in regulating natural storage.
Projected future hydrological trajectories were analyzed by driving the hydrological model with regionalized GCM simulations provided by the BASD-CMIP6-PE dataset. The analysis considered sustainable (low warming, SSP1-2.6) and fossil fuel-based development (high-end warming, SSP5-8.5) scenarios for the mid (2035-2065) and end (2065-2095) of the century. The projected changes in water yield and streamflow across the tropical Andes exhibit distinct regional and seasonal variations, particularly amplified under a high-end warming scenario towards the end of the century. Projections suggest year-round increases in water yield and streamflow in the Andean regions and decreases in the Amazon lowlands, with exceptions such as the northern Amazon expecting increases during wet seasons. Despite these regional differences, the upper Amazon River's streamflow is projected to remain relatively stable throughout the 21st century. Additionally, projections anticipate a decrease in low flows in the Amazon lowlands and an increased risk of high flows (floods) in the Andean and northern Amazon catchments.
This thesis significantly contributes to enhancing climatic data generation, overcoming regional limitations that previously impeded hydrometeorological research, and creating new opportunities. It plays a crucial role in advancing hydrological model calibration, improving the representation of internal hydrological processes, and achieving accurate results for the right reasons. Novel insights into current hydrological dynamics in the tropical Andes are fundamental for improving water resource management. The anticipated intensified changes in water flows and hydrological extreme patterns under a high-end warming scenario highlight the urgency of implementing emissions mitigation and adaptation measures to address the heightened impacts on water resources.
In fact, the new datasets (RAIN4PE and BASD-CMIP6-PE) have already been utilized by researchers and experts in regional and local-scale projects and catchments in Peru and Ecuador. For instance, they have been applied in river catchments such as Mantaro, Piura, and San Pedro to analyze local historical and future developments in climate and water resources.
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by a pth power of a weighted p-norm, p>1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p=2, or locally finite graphs without a potential part.
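The general shape of the inequality can be sketched as follows; the notation is illustrative, and the precise weighted, locally summable setting is as in the thesis.

```latex
% Schematic Hardy inequality on a graph: for p > 1, symmetric edge
% weights b, a potential c, and all finitely supported functions f,
\[
  \frac{1}{2}\sum_{x,y} b(x,y)\,\lvert f(x)-f(y)\rvert^{p}
  \;+\; \sum_{x} c(x)\,\lvert f(x)\rvert^{p}
  \;\ge\; \sum_{x} w(x)\,\lvert f(x)\rvert^{p}.
\]
% The left-hand side is the energy functional (divergence part plus
% potential part); a weight w for which this holds is a Hardy weight.
```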
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization or trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases. Therefore, plant-based diets rich in health-promoting plant secondary metabolites are desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated and the influence of cultivation conditions was assessed with the aim of improving the nutritional quality of halophytes for human consumption. To this end, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The halophyte species were selected according to their salt tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid content. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for saline indoor farming of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and the light regime in indoor farming and greenhouse cultivation was studied. It was shown that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls. Furthermore, glucosinolate and nitrate contents were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. It was shown that UVB light affects the growth, phenotype and metabolite profile of halophytes and that the UVB response is species-specific. Furthermore, a modulation of carotenoid content in S. europaea could be achieved to enhance health-promoting properties and thus improve nutritional quality. This was shown to be dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated. Here it was revealed that carotenoid accumulation is related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes as alternative vegetables produced in a saline indoor farming system for future diets that could contribute to ensuring food security in the future. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into the system. Since the nutritional properties have been shown to be influenced by salt, light regime and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely the extraction to functional languages using the Coq extraction plugin and the extraction to Clight code using the CertiCoq plugin. It is noted that the implementation of CertiCoq is verified, whereas this is not the case for the Coq extraction plugin. Consequently, there is a correctness guarantee for the generated Clight code which does not hold for the code generated by the Coq extraction plugin. Furthermore, the differences between user space and kernel space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user space drivers both with the Coq extraction plugin and CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type-safety of user space drivers. Furthermore, it is shown that the code being synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, resulting in the runtimes becoming compatible components of a Linux kernel module. Furthermore, justifications for the transformations are provided and possible further extensions to both plugins and solutions to failing garbage collection calls in kernel space are discussed.
The third part presents a proof of concept device driver for the Linux Kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq. Furthermore, some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the guarantee of correctness to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC speaker functionality with the original PC Speaker driver, pointing out the weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications and the open questions raised.
The last part lists all used sources, separated into scientific literature, documentations or reference manuals and artifacts, i.e. source code.
This work analyzed functional and regulatory aspects of the so far little characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density and EPSINOID2-GFP was specifically located in non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, it was clearly shown that loss of EPSINOID2 leads to an increase in root hair density through analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full deletion mutant. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and -translational regulation through different mechanisms. One involves the little characterized miRNA844-3p. Interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming it as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation. Treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif, associated with degradation through a ubiquitin/proteasome-dependent pathway, was identified in the EPSINOID2 sequence. In line with a tight dose regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
Astrophysical shocks, driven by explosive events such as supernovae, efficiently accelerate charged particles to relativistic energies. The majority of these shocks occur in collisionless plasmas where the energy transfer is dominated by particle-wave interactions. Strong nonrelativistic shocks found in supernova remnants are plausible sites of galactic cosmic ray production, and the observed emission indicates the presence of nonthermal electrons. To participate in the primary mechanism of energy gain - Diffusive Shock Acceleration - electrons must have a highly suprathermal energy, implying a need for very efficient pre-acceleration. This poorly understood aspect of the shock acceleration theory is known as the electron injection problem. Studying electron-scale phenomena requires the use of fully kinetic particle-in-cell (PIC) simulations, which describe collisionless plasma from first principles.
Most published studies consider a homogeneous upstream medium, but turbulence is ubiquitous in astrophysical environments and is typically driven at magnetohydrodynamic scales, cascading down to kinetic scales. For the first time, I investigate how preexisting turbulence affects electron acceleration at nonrelativistic shocks using the fully kinetic approach. To accomplish this, I developed a novel simulation framework that allows the study of shocks propagating in turbulent media. It involves simulating slabs of turbulent plasma separately, which are then continuously inserted into a shock simulation. This demands matching of the plasma slabs at the interface. A new procedure of matching electromagnetic fields and currents prevents numerical transients, and the plasma evolves self-consistently. The versatility of this framework has the potential to render simulations more consistent with turbulent systems in various astrophysical environments.
In this Thesis, I present the results of 2D3V PIC simulations of high-Mach-number nonrelativistic shocks with preexisting compressive turbulence in an electron-ion plasma. The chosen amplitudes of the density fluctuations (≲ 15%) concord with in situ measurements in the heliosphere and the local interstellar medium. I explored how these fluctuations impact the dynamics of upstream electrons, the driving of the plasma instabilities, electron heating and acceleration. My results indicate that while the presence of the turbulence enhances variations in the upstream magnetic field, their levels remain too low to influence the behavior of electrons at perpendicular shocks significantly. However, the situation is different at oblique shocks. The external magnetic field inclined at an angle between 50° ≲ θ_Bn ≲ 75° relative to the shock normal allows the escape of fast electrons toward the upstream region. An extended electron foreshock region is formed, where these particles drive various instabilities. Results of an oblique shock with θ_Bn = 60° propagating in preexisting compressive turbulence show that the foreshock becomes significantly shorter, and the shock-reflected electrons have higher temperatures. Furthermore, the energy spectrum of downstream electrons shows a well-pronounced nonthermal tail that follows a power law with an index up to -2.3.
The methods and results presented in this Thesis could serve as a starting point for more realistic modeling of interactions between shocks and turbulence in plasmas from first principles.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operation conditions. Optimization-based solutions perform an exhaustive search in the adaptation space and can thus provide quality guarantees. However, these solutions render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and an expressivity of individual rules that supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures. The architecture is evaluated by assigning utility values to fragments; the pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluating the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized on the basis of meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime to balance the quality-cost trade-off.
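The incremental idea can be illustrated with a toy sketch: if the overall utility is a sum over matched fragments, a change only requires re-evaluating the fragments that contain the changed component. This is hypothetical illustration code, not the Venus implementation; the utility function and data model are invented for the example:

```python
def fragment_utility(fragment):
    # Hypothetical utility: reward healthy components, penalize failed ones.
    return sum(1.0 if c["state"] == "ok" else -1.0 for c in fragment)

class Architecture:
    def __init__(self, components):
        self.components = components   # name -> {"state": ...}
        self.fragments = {}            # pattern-match id -> component names
        self.utility = 0.0

    def match(self, fid, names):
        """Register a graph-pattern match as an architectural fragment."""
        self.fragments[fid] = names
        self.utility += fragment_utility([self.components[n] for n in names])

    def apply_change(self, name, state):
        """Incremental update: re-evaluate only fragments touching `name`,
        instead of the complete architecture."""
        affected = [fid for fid, ns in self.fragments.items() if name in ns]
        for fid in affected:
            self.utility -= fragment_utility(
                [self.components[n] for n in self.fragments[fid]])
        self.components[name]["state"] = state
        for fid in affected:
            self.utility += fragment_utility(
                [self.components[n] for n in self.fragments[fid]])
        return self.utility
```

The incremental result always agrees with a full re-evaluation, but the work per change is bounded by the number of affected fragments rather than the architecture size.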
The twofold solution of this thesis is integrated in an adaptation engine that leverages state- and event-based principles for incremental execution and is therefore scalable for large and dynamic software architectures of growing size and complexity. The utility elicitation challenge is addressed by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space for software self-adaptation.
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study of the synthesis, structure, and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in the silicon pores on electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon and power factor values comparable to or exceeding other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism follows a Meyer-Neldel compensation rule. The analysis of the hybrids' data using the power law of the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a 68% percolation threshold. The thermal conductivities of the hybrids are increased compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi, P3HT, and bulk Si.
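For context, the Meyer-Neldel compensation rule states that the prefactor of a thermally activated conductivity grows exponentially with the activation energy (a schematic form in common notation, not necessarily the notation of the thesis):

\[ \sigma(T) = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right), \qquad \sigma_0 = \sigma_{00} \exp\!\left(\frac{E_a}{k_B T_{MN}}\right), \]

where $T_{MN}$ is the isokinetic (Meyer-Neldel) temperature at which the Arrhenius lines of samples with different $E_a$ intersect.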
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to the Afrotheria, one of the four major clades of placental mammals. Sengis currently comprise twenty extant species, all of which are endemic to the African continent. They can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are exclusively found in forest habitats, the different soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering the answering of basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh-tissue samples for DNA extraction. The broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library preparation protocols nowadays offer the opportunity to access DNA from museum specimens collected over the past centuries and stored in natural history museums throughout the world. Thus, the difficulties in fresh-sample acquisition for molecular biological studies can be overcome by applying museomics, the research field that emerged from these laboratory developments.
This thesis uses fresh-tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of the macroscelidean evolutionary history. Chapter 4 of this thesis focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh-tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore focuses on the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis. Soft-furred sengis in particular are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, for example, speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative, Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive, multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus, some of which could possibly be associated with previously described subspecies; at least one was formerly unknown. It underscores the necessity of a revision of the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions through climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh-tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, two of which were assembled de novo. An extensive dataset of more than 8000 protein-coding one-to-one orthologs allows the temporal time frame of sengi evolution found in Chapter 4 to be further refined and confirmed. This study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of possibly outstanding evolutionary importance and links them to the potential phenotypic traits affected. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis to that of other small African mammals does not reveal a sengi-specific pattern.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP is not scalable, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we are flexible to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of the data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
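In schematic form, the core assignment problem can be stated as an ILP (a simplified sketch with invented symbols, omitting the modification, reallocation, and robustness extensions developed in the thesis):

\begin{align*}
\min\ & \sum_{n \in N} \sum_{f \in F} s_f\, x_{f,n} \\
\text{s.t.}\ & \textstyle\sum_{n \in N} y_{q,n} = 1 && \forall q \in Q \\
& y_{q,n} \le x_{f,n} && \forall q \in Q,\ f \in F_q,\ n \in N \\
& \textstyle\sum_{q \in Q} c_q\, y_{q,n} \le \tfrac{1}{|N|} \textstyle\sum_{q \in Q} c_q && \forall n \in N \\
& x_{f,n},\, y_{q,n} \in \{0,1\}
\end{align*}

Here $x_{f,n}$ indicates that fragment $f$ of size $s_f$ is stored on node $n$, $y_{q,n}$ indicates that query $q$ with cost $c_q$ is executed on node $n$, and $F_q$ is the set of fragments accessed by $q$. The objective minimizes the total amount of allocated data, the second constraint ensures every fragment a query reads is present on its node, and the last inequality splits the workload evenly among the replicas.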
Data preparation stands as a cornerstone in the landscape of data science workflows, commanding a significant portion—approximately 80%—of a data scientist's time. The extensive time consumption in data preparation is primarily attributed to the intricate challenge faced by data scientists in devising tailored solutions for downstream tasks. This complexity is further magnified by the inadequate availability of metadata, the often ad-hoc nature of preparation tasks, and the necessity for data scientists to grapple with a diverse range of sophisticated tools, each presenting its unique intricacies and demands for proficiency.
Previous research in data management has traditionally concentrated on preparing the content within the columns and rows of a relational table, addressing tasks such as string disambiguation, date standardization, or numeric value normalization, commonly referred to as data cleaning. This focus assumes a perfectly structured input table. Consequently, the mentioned data cleaning tasks can be effectively applied only after the table has been successfully loaded into the respective data cleaning environment, typically in the later stages of the data processing pipeline.
While current data cleaning tools are well-suited for relational tables, extensive data repositories frequently contain data stored in plain text files, such as CSV files, due to their adaptable standard. Consequently, these files often exhibit tables with a flexible layout of rows and columns, lacking a relational structure. This flexibility often results in data being distributed across cells in arbitrary positions, typically guided by user-specified formatting guidelines.
Effectively extracting and leveraging these tables in subsequent processing stages necessitates accurate parsing. This thesis emphasizes what we define as the “structure” of a data file—the fundamental characters within a file essential for parsing and comprehending its content. Concentrating on the initial stages of the data preprocessing pipeline, this thesis addresses two crucial aspects: comprehending the structural layout of a table within a raw data file and automatically identifying and rectifying any structural issues that might hinder its parsing. Although these issues may not directly impact the table's content, they pose significant challenges in parsing the table within the file.
Our initial contribution comprises an extensive survey of commercially available data preparation tools. This survey thoroughly examines their distinctive features, the features they lack, and the need for preliminary data processing despite these tools. The primary goal is to elucidate the current state of the art in data preparation systems while identifying areas for enhancement. Furthermore, the survey explores the challenges encountered in data preprocessing, emphasizing opportunities for future research and improvement.
Next, we propose a novel data preparation pipeline designed for detecting and correcting structural errors. The aim of this pipeline is to assist users at the initial preprocessing stage by ensuring the correct loading of their data into their preferred systems. Our approach begins by introducing SURAGH, an unsupervised system that utilizes a pattern-based method to identify dominant patterns within a file, independent of external information, such as data types, row structures, or schemata. By identifying deviations from the dominant pattern, it detects ill-formed rows. Subsequently, our structure correction system, TASHEEH, gathers the identified ill-formed rows along with dominant patterns and employs a novel pattern transformation algebra to automatically rectify errors. Our pipeline serves as an end-to-end solution, transforming a structurally broken CSV file into a well-formatted one, usually suitable for seamless loading.
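The dominant-pattern idea can be sketched in a few lines: abstract each raw line into a pattern string, take the most frequent pattern as dominant, and flag rows that deviate from it. This is a hypothetical toy reimplementation of the concept, not SURAGH's actual pattern language or algorithm:

```python
from collections import Counter

def row_pattern(row):
    """Abstract a raw line into a pattern: runs of digits -> 'D',
    runs of letters -> 'A'; all other characters are kept literally."""
    out = []
    for ch in row:
        if ch.isdigit():
            if not out or out[-1] != "D":
                out.append("D")
        elif ch.isalpha():
            if not out or out[-1] != "A":
                out.append("A")
        else:
            out.append(ch)
    return "".join(out)

def find_ill_formed(rows):
    """Flag rows deviating from the dominant (most frequent) pattern."""
    patterns = [row_pattern(r) for r in rows]
    dominant, _ = Counter(patterns).most_common(1)[0]
    return [i for i, p in enumerate(patterns) if p != dominant]

rows = ["1,alpha,2.5", "2,beta,3.1", "# comment line", "3,gamma,4.0"]
print(find_ill_formed(rows))  # [2]
```

A correction step in the spirit of TASHEEH would then transform each flagged row toward the dominant pattern instead of merely discarding it.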
Finally, we introduce MORPHER, a user-friendly GUI integrating the functionalities of both SURAGH and TASHEEH. This interface empowers users to access the pipeline's features through visual elements. Our extensive experiments demonstrate the effectiveness of our data preparation systems, requiring no user involvement. Both SURAGH and TASHEEH outperform existing state-of-the-art methods significantly in both precision and recall.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga C. reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, it is still inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other production organisms. Other microalgal species might be better suited for high-level protein expression but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are maintained episomally as autonomously replicating plasmids in the nucleus at a high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate the parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Being able to efficiently generate transgenic lines, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of the expressed proteins into the culture medium, simplifying the purification and harvest of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum and found only a minor increase in YFP accumulation.
We employed these findings to express complex viral antigens from the hepatitis B virus and the hepatitis C virus in P. purpureum to demonstrate its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and reached their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully scaled up the biomass production of the transgenic lines and thereby provided enough material for immunization trials in mice, which were performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens, and, additionally, the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a promising new production organism for biopharmaceuticals in the microalgal field.
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference material) were characterized outside of a polymer matrix for the first time. All of them contained fatty acids in addition to the main fatty acid specified by the manufacturer. Furthermore, they were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affects carotenoid levels in lettuce, but not flavonoids, caffeic acid derivatives, or chlorophylls. Specifically, carotenoid levels were higher in lettuce grown under polytunnels without antifogging additives than under those with them. This was linked to their effect on the light regime and suggested to be related to the function of carotenoids in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, which are closely related. The carotenoid and flavonoid contents of lettuce grown under polytunnels shifted in opposite directions, with higher carotenoid and lower flavonoid levels. At the individual level, the flavonoids detected in lettuce did not differ; the lettuce carotenoids, however, adapted specifically depending on the time of cultivation. The flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, the carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme of carotenoid biosynthesis (PSY) and of a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding the regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead
to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to be more active in driving the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age
class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends lead to new emerging players that threaten existing industrial-aged companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-aged automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically-induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. Estimation of the crustal temperature field affects conclusions on all temperature-dependent processes ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
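The transient-versus-steady-state distinction can be made explicit with a one-dimensional advection-diffusion form of the heat equation (a schematic sketch, neglecting radiogenic heating):

\[ \frac{\partial T}{\partial t} + v_z \frac{\partial T}{\partial z} = \kappa \frac{\partial^2 T}{\partial z^2}, \]

where $v_z$ is the vertical advection velocity induced by extension and $\kappa$ the thermal diffusivity. The steady-state assumption sets $\partial T / \partial t = 0$, which is justified only when advection is slow compared to diffusion, i.e., when the Péclet number $\mathrm{Pe} = v_z L / \kappa$ is small for a characteristic length $L$. This is precisely what fails for fast, narrow rifts, where ongoing deformation outpaces heat conduction.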
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting using the example of fault-propagation folds in the Andean fold-and-thrust belt. The evolution of the velocity fields from the geodynamic models is compared with that from trishear models of the same structure. While the latter use only geometric and kinematic constraints on the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both model types work equally well for early, and thus relatively simple, stages of folding and faulting, while results differ for more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed, where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend farther north has led to the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT with the implementation of earthquake-like behavior using the rate-and-state friction framework. Although earthquakes happen on comparatively short time scales, the seismic cycle interacts in many ways with geodynamic processes operating over much longer time spans. Among other factors, the crustal state of stress, the presence of fluids, and changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for realistically setting up the structures and processes involved, which is important for obtaining a meaningful estimate of earthquake hazards.
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
Sigmund Freud, the founder of psychoanalysis, began his intellectual life with the Jewish Bible and ended it with it as well. He began by reading the Philippson Bible together with his father, Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant point of reference for Freud and shaped his Jewish identity. This is demonstrated by analysing family documents, his religious instruction, and references to the Bible in Freud's writings and correspondence.
Resolving the evolutionary history of two hippotragin antelopes using archival and ancient DNA
(2024)
African antelopes are iconic but surprisingly understudied in terms of their genetics, especially when it comes to their evolutionary history and genetic diversity. The age of genomics provides an opportunity to investigate evolution using whole nuclear genomes. Decreasing sequencing costs enable the recovery of multiple loci per genome, giving more power to single-specimen analyses and providing higher-resolution insights into species and populations that can help guide conservation efforts. This age of genomics has only recently begun for African antelopes. Many African bovids have a declining population trend and hence are often endangered. Consequently, contemporary samples from the wild are often hard to collect. In these cases, ex situ samples from contemporary captive populations, or in the form of archival or ancient DNA (aDNA) from historical museum or archaeological/paleontological specimens, present a great research opportunity, with the latter two even offering a window onto the past. However, the recovery of aDNA is still considered challenging in regions with prevailing climatic conditions deemed adverse for DNA preservation, such as the African continent. This raises the question of whether DNA recovery from fossils as old as the early Holocene is possible in these regions.
This thesis focuses on investigating the evolutionary history and genetic diversity of two species: the addax (Addax nasomaculatus) and the blue antelope (Hippotragus leucophaeus). The addax is critically endangered and might even already be extinct in the wild, while the blue antelope became extinct around 1800 AD, making it the first large African mammal species to go extinct in historical times. Together, the addax and the blue antelope can inform us about current and past extinction events, and the knowledge gained can help guide conservation efforts for threatened species. The three studies used ex situ samples and present the first nuclear whole-genome data for both species. The addax study used historical museum specimens and a contemporary sample from a captive population. The two studies on the blue antelope used mainly historical museum specimens but also fossils, and resulted in the recovery of the oldest paleogenome from Africa at that time.
The aim of the first study was to assess the genetic diversity and the evolutionary history of the addax. It found that the historical wild addax population showed only limited phylogeographic structuring, indicating that the addax formed a highly mobile, panmictic population, and suggesting that the current European captive population might be missing the majority of the historical mitochondrial diversity. It also found the nuclear and mitochondrial diversity of the addax to be rather low compared to other wild ungulate species. Suggestions on how to best save the remaining genetic diversity are presented. The European zoo population was shown to exhibit no or only minor levels of inbreeding, indicating good prospects for the restoration of the species in the wild. The trajectory of the addax's effective population size indicated a major bottleneck in the late Pleistocene and a low effective population size well before the recent human impact that has rendered the species critically endangered today.
The second study set out to investigate the identities of historical blue antelope specimens using aDNA techniques. Results showed that six out of ten investigated specimens were misidentified, demonstrating the blue antelope to be one of the scarcest mammal species in historical natural history collections, with almost no bone reference material. The preliminary analysis of the mitochondrial genomes suggested a low diversity and hence low population size at the time of the European colonization of southern Africa.
Study three presents the results of the analyses of two blue antelope nuclear genomes, one ~200 years old and another dating to the early Holocene, 9,800–9,300 cal years BP. A fossil-calibrated phylogeny dated the divergence time of the three historically extant Hippotragus species to ~2.86 Ma and demonstrated the blue and the sable antelope (H. niger) to be sister species. In addition, ancient gene flow from the roan (H. equinus) into the blue antelope was detected. A comparison with the roan and the sable antelope indicated that the blue antelope had a much lower nuclear diversity, suggesting a low population size since at least the early Holocene. This concurs with findings from the fossil record that show a considerable decline in abundance after the Pleistocene–Holocene transition. Moreover, it suggests that the blue antelope persisted throughout the Holocene regardless of a low population size, indicating that human impact in the colonial era was a major factor in the blue antelope’s extinction.
This thesis uses aDNA analyses to provide deeper insights into the evolutionary history and genetic diversity of the addax and the blue antelope. Human impact was likely the main driver of extinction in the blue antelope, and is likely the main factor threatening the addax today. This thesis demonstrates the value of ex situ samples for science and conservation, and suggests including genetic data in conservation assessments of species. It further demonstrates the beneficial use of aDNA for the taxonomic identification of historically important specimens in natural history collections. Finally, the successful retrieval of a paleogenome from the early Holocene of Africa using shotgun sequencing shows that DNA retrieval from samples of that age is possible in regions generally deemed unfavorable for DNA preservation, opening up new research opportunities. All three studies enhance our knowledge of African antelopes, contributing to the general understanding of African large mammal evolution and to the conservation of these and similarly threatened species.
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming 2-4 times faster than the global average, resulting in a strong feedback on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline consequently migrates northwards into tundra areas. The consequences of these ecosystem transitions are complex: on the one hand, boreal forests store large amounts of the global terrestrial carbon and act as a carbon sink, drawing carbon dioxide out of the global carbon cycle, which suggests enhanced carbon uptake with increased tree cover. On the other hand, with the establishment of trees, the albedo of the tundra decreases, leading to enhanced soil warming. Meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Most land plants live in close symbiosis with bacterial and fungal communities, which sustain their growth in nutrient-poor habitats. However, the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore indispensable when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the effect of warming on boreal ecosystems; it is also the basis for agricultural strategies to supply society with sufficient food in a future warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). Research findings across these projects provide comprehensive new insights into the relationships of soil microorganisms to the surrounding vegetation. First, this was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S), amplifying the ITS1 region, was applied to samples from five boreal and arctic lakes. The obtained data showed that the establishment of fungal communities is affected by warming, as the functional ecological groups shift: the dominance of yeasts and saprotrophs during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. Overall species richness also fluctuated. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), yielding results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed to a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
The second objective was to trace changes in fungus-plant interactions over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II). Further, shotgun sequencing data was compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, though a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation. Yeasts and lichens were dominant during the Late Glacial with tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhiza and parasite abundance. In addition, we highlighted that the establishment of Pinaceae depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species, also on long-term scales.
The third objective of the thesis was to assess soil community development along a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and the soil microbial community dynamics were compared to ecosystem turnover. In addition, podzolization processes on basaltic bedrock were reconstructed (Manuscript III). The recovered soil microbiome was further compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon, and is thus comparable across multiple geographic locations, or whether community establishment is driven by abiotic soil properties and thus by the bedrock of the area. We showed that the development of soil communities is to a great extent driven by vegetation changes and temperature variation, while time plays only a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the microbiome at species level was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences between herbaceous taxa of the Late Glacial and taxa of the Holocene were revealed.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial time scales, as this furthers our understanding of long-term ecosystem and soil development processes, and thus of plant establishment. Further, I trace the long-term processes leading to podzolization, which supports the development of applied carbon-capture strategies under future global warming.
Prediction is often regarded as a central and domain-general aspect of cognition. This proposal extends to language, where predictive processing might enable the comprehension of rapidly unfolding input by anticipating upcoming words or their semantic features. To make these predictions, the brain needs to form a representation of the predictive patterns in the environment. Predictive processing theories suggest a continuous learning process that is driven by prediction errors, but much is still to be learned about this mechanism in language comprehension. This thesis therefore combined three electroencephalography (EEG) experiments to explore the relationship between prediction and implicit learning at the level of meaning.
Results from Study 1 support the assumption that the brain constantly infers and updates probabilistic representations of the semantic context, potentially across multiple levels of complexity. N400 and P600 brain potentials could be predicted by semantic surprise based on a probabilistic estimate of previous exposure and on a more complex probability representation, respectively.
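The notion of semantic surprise based on previous exposure can be illustrated with a toy surprisal computation. The words, counts, and smoothing scheme below are invented for illustration and are not the probabilistic models actually fitted in the study.

```python
import math
from collections import Counter

# Toy illustration (not the thesis's models): estimate word probabilities
# from previous exposure counts and express semantic surprise as
# surprisal, -log2 p. In the account sketched above, larger surprisal
# corresponds to a larger N400. Words and counts are made up.

exposure = Counter({"coffee": 8, "tea": 4, "ink": 1})

def surprisal(word: str, counts: Counter, alpha: float = 1.0) -> float:
    """Surprisal in bits under an add-alpha-smoothed relative-frequency estimate."""
    total = sum(counts.values()) + alpha * len(counts)
    p = (counts[word] + alpha) / total
    return -math.log2(p)

# An expected continuation carries less surprisal than an unexpected one
assert surprisal("coffee", exposure) < surprisal("ink", exposure)
print(round(surprisal("ink", exposure), 2))  # → 3.0
```

An error-based learner would then use this quantity, rather than the raw input, as the signal that drives the update of its predictive representation.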
Subsequent work investigated the influence of prediction errors on the update of semantic predictions during sentence comprehension. In line with error-based learning, unexpected sentence continuations in Study 2 (characterized by large N400 amplitudes) were associated with increased implicit memory compared to expected continuations. Further, Study 3 indicates that prediction errors not only strengthen the representation of the unexpected word, but also update specific predictions made from the respective sentence context. The study additionally provides initial evidence that the amount of unpredicted information as reflected in N400 amplitudes drives this update of predictions, irrespective of the strength of the original incorrect prediction.
Together, these results support a central assumption of predictive processing theories: a probabilistic predictive representation at the level of meaning that is updated by prediction errors. They further suggest the N400 ERP component as a possible learning signal. The results also emphasize the need for further research into the role of the late positive ERP components in error-based learning. The continuous error-based adaptation described in this thesis allows the brain to improve its predictive representation with the aim of making better predictions in the future.
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
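The idea of ground-truth-based synthetic data generation can be sketched as sampling from a known structural equation model. The DAG, edge weights, and checks below are illustrative stand-ins, not the generator developed in the thesis.

```python
import numpy as np

# Minimal sketch of ground-truth-based synthetic data generation: sample
# observations from a known linear structural equation model over a
# hand-written DAG, so that a causal discovery method can later be
# scored against the true edges. DAG, weights, and noise scales are
# chosen purely for illustration.

rng = np.random.default_rng(0)

def sample_sem(n: int) -> np.ndarray:
    """Draw n samples from the ground-truth DAG X -> Y -> Z."""
    x = rng.normal(size=n)
    y = 0.8 * x + rng.normal(scale=0.5, size=n)
    z = 1.2 * y + rng.normal(scale=0.5, size=n)
    return np.column_stack([x, y, z])

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

data = sample_sem(5000)
# Properties a correct discovery method should recover from this data:
corr_xz = np.corrcoef(data[:, 0], data[:, 2])[0, 1]    # X and Z are dependent
pc = partial_corr(data[:, 0], data[:, 2], data[:, 1])  # but X ⟂ Z given Y
print(corr_xz > 0.3, abs(pc) < 0.1)
```

Because the generating DAG is known, the accuracy of any discovery method, including under deliberately violated assumptions, can be measured directly against it.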
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest-neighbor methods and prove its statistical validity and power in mixed discrete-continuous data, as well as its asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation on synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of both CI testing and causal discovery, particularly in settings with low sample sizes.
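The flavor of nearest-neighbor, permutation-based CI testing can be sketched as follows. This toy local-permutation test over a one-dimensional conditioning variable is a heavily simplified illustration, not the test contributed in the thesis.

```python
import numpy as np

# Heavily simplified sketch of nearest-neighbor, permutation-based CI
# testing (NOT the thesis's test): to test X ⟂ Y | Z, shuffle X only
# among samples with similar Z, which roughly preserves the X-Z
# relationship, and compare the observed |corr(X, Y)| against this
# local-permutation null distribution.

rng = np.random.default_rng(1)

def local_permutation_pvalue(x, y, z, k=10, n_perm=200):
    order = np.argsort(z)               # for 1D z, neighbors are adjacent ranks
    stat = abs(np.corrcoef(x, y)[0, 1])
    exceed = 0
    for _ in range(n_perm):
        x_perm = x.copy()
        for start in range(0, len(x), k):   # shuffle within blocks of k z-neighbors
            block = order[start:start + k]
            x_perm[block] = x_perm[rng.permutation(block)]
        exceed += abs(np.corrcoef(x_perm, y)[0, 1]) >= stat
    return (1 + exceed) / (1 + n_perm)

# Ground truth: Z is a common cause of X and Y, so X ⟂ Y | Z holds;
# adding a direct X -> Y link breaks the conditional independence.
z = rng.normal(size=500)
x = z + 0.5 * rng.normal(size=500)
y = z + 0.5 * rng.normal(size=500)
y_dep = y + 0.8 * x                     # direct X -> Y link: CI violated
print(local_permutation_pvalue(x, y_dep, z) < 0.05)  # → True
```

For the (x, y, z) triple, where the conditional independence holds, the p-value is typically large. A practical test would use a proper multivariate neighbor search for Z and a sharper statistic such as conditional mutual information, as correlation only detects linear dependence.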
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce CO2 emissions, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks. 1) Cost: still relatively high, owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode, which hinders widespread availability, especially in low-income countries. 2) Efficiency: the theoretical limit is around 29%, yet most commercially available silicon-based solar cells reach only 18-22%. 3) Temperature sensitivity: efficiency decreases as temperature increases, reducing output. 4) Resource constraints: silicon feedstock is not available in every country, creating supply-chain challenges.
Perovskite solar cells emerged in 2011 and have matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With efficiencies of 26%, high absorption coefficients, solution processability, and a tunable band gap, they attracted the attention of the solar cell community and represented a hope for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block hindering perovskite solar cells' path to market. Lead is a heavy and bioavailable element, which makes lead-based perovskite solar cells an environmentally unfriendly technology. As a result, scientists have tried to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable element due to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, which makes them candidates for the highest possible efficiency of a single-junction solar cell, around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is irreproducible, especially from lab to lab. This modest performance can be attributed to three causes. 1) Oxidation of tin(II) to tin(IV), which occurs through exposure to oxygen or water, or even, as was discovered recently, through the effect of the solvent. 2) Fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and increases the crystallization pace. 3) Energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which devastates the photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskites. Finding a stable solvent might prove to be the decisive factor for the stability of tin-based perovskites. Starting from a database of over 2,000 solvents, we narrowed the selection down to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form the perovskite. Finally, we show that solar cells can be manufactured with a novel solvent system and that they outperform those produced using DMSO. Our results offer guidance for the search for novel solvents, or mixtures of solvents, for manufacturing stable tin-based perovskites.
The precise control of perovskite precursor crystallization within a thin film is of utmost importance for optimizing the efficiency and manufacturing of solar cells. Depositing tin-based perovskite films from solution is more difficult than manufacturing the more commonly used lead-based films because of the quick crystallization of tin. The established route to high efficiencies is deposition from dimethyl sulfoxide (DMSO), which slows down the rapid assembly of the tin-iodine network responsible for perovskite formation. However, this approach is limited because DMSO oxidizes the tin during processing. This thesis presents a potentially advantageous alternative in which 4-(tert-butyl)pyridine substitutes for DMSO in regulating crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a markedly reduced defect density, resulting in higher charge mobility and better photovoltaic performance, which makes pyridine a desirable alternative for the deposition of tin perovskite films.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published in the current body of research are contradictory, and there is a lack of consensus. Moreover, comprehensive information about the dynamics of charge extraction is lacking. This thesis determines the energy band positions of tin perovskites using Kelvin probe and photoelectron yield spectroscopy methods, and constructs a precise band diagram for the often-utilized device stack. A comprehensive analysis assesses the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was used to investigate the charge extraction kinetics of frequently studied charge transport layers, namely NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, Kelvin probe, and TRPL approaches were used to accurately determine the p-doping concentration in FASnI3; the results consistently yielded a value of 1.5 × 10^17 cm^-3. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently of those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on the use of two liquids. The first, usually called the solvent, dissolves the perovskite powder to form the precursor solution. The second, the antisolvent, precipitates the perovskite precursor, forming a wet film that is a supersaturated solution of the precursor in the remains of the solvent and the antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our research, we proposed new solvents to dissolve FASnI3, but when we tried to form films, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduced a high-throughput antisolvent screening in which around 73 selected antisolvents were screened against 15 solvents that can form a 1 M FASnI3 solution. For the first time for tin perovskites, we used a machine-learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor affecting the solvent-antisolvent interaction. Based on these findings, we prepared several high-quality DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest for a DMSO-free tin perovskite device so far.
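The screening logic can be caricatured in a few lines. The polarity values below are approximate literature-style numbers, and the threshold rule is invented for illustration; the thesis trained a machine-learning model on the actual 73 × 15 screening results rather than using a fixed cutoff.

```python
# Toy caricature of the antisolvent screening (values and rule invented):
# rank solvent-antisolvent pairs by their relative-polarity gap, the
# factor identified above as primary, and flag pairs expected to
# precipitate a dark FASnI3 film.

# Approximate relative polarities on the water = 1.0 scale (illustrative)
SOLVENT_POLARITY = {"DMSO": 0.44, "NMP": 0.36}
ANTISOLVENT_POLARITY = {"toluene": 0.10, "chlorobenzene": 0.19, "ethanol": 0.65}

def polarity_gap(solvent: str, antisolvent: str) -> float:
    return abs(SOLVENT_POLARITY[solvent] - ANTISOLVENT_POLARITY[antisolvent])

def likely_dark_film(solvent: str, antisolvent: str, min_gap: float = 0.2) -> bool:
    """Hypothetical screening rule: a large polarity gap favors precipitation."""
    return polarity_gap(solvent, antisolvent) >= min_gap

pairs = [(s, a) for s in SOLVENT_POLARITY for a in ANTISOLVENT_POLARITY]
hits = [p for p in pairs if likely_dark_film(*p)]
print(len(hits), "of", len(pairs), "pairs pass the screen")  # → 5 of 6 pairs pass the screen
```

In the real workflow, such a screen would only pre-rank candidates; the measured film darkness of each deposited pair remains the ground truth for the model.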
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand, and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100-10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
The global drylands cover nearly half of the terrestrial surface and are home to more than two billion people. In many drylands, ongoing land-use change transforms near-natural savanna vegetation into agricultural land to increase food production. In Southern Africa, these heterogeneous savanna ecosystems are also recognized as habitats of many protected animal species, such as elephants, lions and large herds of diverse herbivores, which are of great value for the tourism industry. Here, subsistence farmers and livestock-herding communities often live in close proximity to nature conservation areas. Although these land-use transformations aspire to different futures, both processes, nature conservation with large herbivores and agricultural intensification, have in common that they change the vegetation structure of savanna ecosystems, usually leading to the destruction of trees, shrubs and the woody biomass they consist of.
Such changes in woody vegetation cover and biomass are often regarded as forms of land degradation and forest loss. Global forest conservation approaches and international programs aim to stop degradation processes, also in order to keep the carbon bound within wood from volatilizing into Earth's atmosphere. In the search for mitigation options against global climate change, savannas are increasingly discussed as potential carbon sinks. Savannas, however, are not forests, in that they are naturally shaped by and adapted to disturbances such as wildfires and herbivory. Unlike in forests, disturbances are necessary for stable, functioning savanna ecosystems and prevent these ecosystems from forming closed forest stands. Their consequently lower levels of carbon storage in woody vegetation have long been the reason why savannas were overlooked as a potential carbon sink, but recently the question was raised whether carbon sequestration programs (such as REDD+) could also be applied to savanna ecosystems. However, heterogeneous vegetation structure and chronic disturbances hamper the quantification of carbon stocks in savannas, and current procedures for estimating carbon storage entail high uncertainties due to methodological obstacles. It is therefore challenging to assess how future land-use changes such as agricultural intensification or increasing wildlife densities will impact the carbon storage balance of African drylands.
In this thesis, I address the research gap of accurately quantifying carbon storage in vegetation and soils of disturbance-prone savanna ecosystems. I further analyse relevant drivers for both ecosystem compartments and their implications for future carbon storage under land-use change. Moreover, I show that different carbon storage pools in savannas vary in their persistence under disturbance, such that carbon bound in shrub vegetation is most likely to experience severe losses under land-use change, while soil organic carbon stored in subsoils is least likely to be impacted by land-use change in the future.
I start by summarizing conventional approaches to carbon storage assessment and where and for which reasons they fail to accurately estimate savanna ecosystem carbon storage. Furthermore, I outline which future-making processes drive land-use change in Southern Africa along two pathways of land-use transformation and how these are likely to influence carbon storage. In the following chapters, I propose a new method of carbon storage estimation which is adapted to the specific conditions of disturbance-prone ecosystems and demonstrate the advantages of this approach relative to existing forestry methods. Specifically, I highlight sources of previous over- and underestimation of savanna carbon stocks which the proposed methodology resolves. I then apply the new method to analyse the impacts of land-use change on carbon storage in woody vegetation in conjunction with the soil compartment. With this interdisciplinary approach, I can demonstrate that both agricultural intensification and nature conservation with large herbivores indeed reduce woody carbon storage above- and belowground, but partly sequester this carbon into the soil organic carbon stock. I then quantify whole-ecosystem carbon storage in different ecosystem compartments (above- and belowground woody carbon in shrubs and trees, respectively, as well as topsoil and subsoil organic carbon) of two savanna vegetation types (scrub savanna and savanna woodland). Moreover, in a space-for-time substitution I analyse how land-use changes impact carbon storage in each compartment and in the whole ecosystem. Carbon storage compartments are found to differ in their persistence under land-use change, with carbon bound in shrub biomass being least persistent under future changes and subsoil organic carbon being most stable under changing land-use.
I then explore which individual land-use change effects act as drivers of carbon storage using Generalized Additive Models (GAMs) and uncover non-linear effects, especially of elephant browsing, with implications for future carbon storage. In the last chapter, I discuss my findings in the larger context of this thesis and outline relevant implications for land-use change and future-making decisions in rural Africa.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and constituted an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, of which many appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not been studied in association with peatland mosses before. Euryarchaeota capable of methane production (methanogens) constituted the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the view that methanotrophic bacteria as part of the moss microbiome may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to archaeal communities, whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type.
The present thesis provides the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, short electromagnetic pulses are emitted into the subsurface by an antenna, which are then reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in three-dimensional (3D) fashion today.
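The depth estimate underlying such travel-time analyses can be sketched in a few lines: the wave velocity follows from the relative permittivity of the medium, and the reflector depth from the two-way travel time. The permittivity and travel-time values below are illustrative assumptions, not measurements from this work.

```python
# Back-of-the-envelope GPR depth estimate (illustrative values only).
C = 0.3  # speed of light in vacuum, m/ns

def wave_velocity(eps_r):
    """Propagation velocity in a medium with relative permittivity eps_r (m/ns)."""
    return C / eps_r ** 0.5

def reflector_depth(twt_ns, eps_r):
    """Reflector depth (m) from two-way travel time (ns)."""
    return wave_velocity(eps_r) * twt_ns / 2.0

# Assuming dry sand with eps_r ~ 4 (v ~ 0.15 m/ns), a reflection arriving
# after a 40 ns two-way travel time corresponds to a depth of about 3 m.
print(reflector_depth(40.0, 4.0))
```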
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: first, legality or legal conformity of use; second, ethical legitimacy; and third, the use of data should add value from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives with a focus on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
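As a minimal illustration of the problem #_{p}Hom[H] (not an efficient algorithm), the number of homomorphisms from a graph G to a graph H modulo a prime p can be computed by brute-force enumeration of all vertex maps:

```python
# Brute-force sketch of #_p Hom[H]: count graph homomorphisms G -> H modulo
# a prime p by enumerating all vertex maps. Exponential in |V(G)|.
from itertools import product

def count_homs_mod_p(G, H, p):
    """G, H: adjacency as dict vertex -> set of neighbours (undirected)."""
    gv, hv = list(G), list(H)
    count = 0
    for image in product(hv, repeat=len(gv)):
        f = dict(zip(gv, image))
        # f is a homomorphism iff every edge of G maps to an edge of H
        if all(f[v] in H[f[u]] for u in G for v in G[u]):
            count += 1
    return count % p

# The homomorphisms from the triangle K3 to itself are exactly its six
# automorphisms, so the count modulo 5 is 6 mod 5 = 1.
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(count_homs_mod_p(K3, K3, 5))  # → 1
```

The example already hints at the role of automorphisms of order p in causing cancellations, which is central to the complexity analysis.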
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that allows this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features the complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This serves as a generalization of the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo prime p. A dichotomy for this problem is shown, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not have an impact on the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that for a connected bipartite graph that is not complete, #_{p}Hom^{bip}[H] is #_{p}P-hard. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
The urban heat island (UHI) effect, describing an elevated temperature of urban areas compared with their natural surroundings, can expose urban dwellers to additional heat stress, especially during hot summer days. A comprehensive understanding of the UHI dynamics along with urbanization is of great importance to efficient heat stress mitigation strategies towards sustainable urban development. This is, however, still challenging due to the difficulties of isolating the influences of various contributing factors that interact with each other. In this work, I present a systematic and quantitative analysis of how urban intrinsic properties (e.g., urban size, density, and morphology) influence UHI intensity.
To this end, we innovatively combine urban growth modelling and urban climate simulation to separate the influence of urban intrinsic factors from that of background climate, so as to focus on the impact of urbanization on the UHI effect. The urban climate model can create a laboratory environment which makes it possible to conduct controlled experiments to separate the influences from different driving factors, while the urban growth model provides detailed 3D structures that can be then parameterized into different urban development scenarios tailored for these experiments. The novelty in the methodology and experiment design leads to the following achievements of our work.
First, we develop a stochastic gravitational urban growth model that can generate 3D structures varying in size, morphology, compactness, and density gradient. We compare various characteristics, like fractal dimensions (box-counting, area-perimeter scaling, area-population scaling, etc.), and radial gradient profiles of land use share and population density, against those of real-world cities from empirical studies. The model shows the capability of creating 3D structures resembling real-world cities. This model can generate 3D structure samples for controlled experiments to assess the influence of some urban intrinsic properties in question. [Chapter 2]
With the generated 3D structures, we run several series of simulations with urban structures varying in properties like size, density and morphology, under the same weather conditions. Analyzing how the 2-m air temperature based canopy-layer urban heat island (CUHI) intensity varies in response to changes in the considered urban factors, we find that the CUHI intensity of a city is directly related to the built-up density and to an amplifying effect that urban sites have on each other. We propose a Gravitational Urban Morphology (GUM) indicator to capture this neighbourhood warming effect. We build a regression model to estimate the CUHI intensity based on urban size, urban gross building volume, and the GUM indicator. Taking the Berlin area as an example, we show that the regression model is capable of predicting the CUHI intensity under various urban development scenarios. [Chapter 3]
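The regression idea can be sketched with synthetic data: fit CUHI intensity against urban size, gross building volume, and a GUM-like indicator by ordinary least squares. All variables, units, and coefficients below are assumptions for illustration, not results from the thesis.

```python
# Hypothetical OLS sketch of a CUHI-intensity regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
size = rng.uniform(1, 100, n)    # urban area, km^2 (assumed)
volume = rng.uniform(1, 50, n)   # gross building volume, 10^6 m^3 (assumed)
gum = rng.uniform(0, 1, n)       # GUM-like neighbourhood-warming indicator (assumed)

# Synthetic "ground truth" CUHI intensity in kelvin, with observation noise
cuhi = 0.5 + 0.01 * size + 0.04 * volume + 1.5 * gum + rng.normal(0, 0.1, n)

# Least-squares fit with an intercept column
X = np.column_stack([np.ones(n), size, volume, gum])
coef, *_ = np.linalg.lstsq(X, cuhi, rcond=None)
print(np.round(coef, 2))  # recovers roughly [0.5, 0.01, 0.04, 1.5]
```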
Based on the multi-annual average summer surface urban heat island (SUHI) intensity derived from land surface temperature, we further study how urban intrinsic factors influence the SUHI effect of the 5,000 largest urban clusters in Europe. We find a similar 3D GUM indicator to be an effective predictor of the SUHI intensity of these European cities. Together with other urban factors (vegetation condition, elevation, water coverage), we build different multivariate linear regression models and a climate-space-based Geographically Weighted Regression (GWR) model that can better predict SUHI intensity. By investigating the roles background climate factors play in modulating the coefficients of the GWR model, we extend the multivariate linear model to a nonlinear one by integrating climate parameters such as the average daily maximum temperature and latitude. This makes it applicable across a range of background climates. The nonlinear model outperforms linear models in SUHI assessment, as it captures the interaction of urban factors and the background climate. [Chapter 4]
Our work reiterates the essential roles of urban density and morphology in shaping the urban thermal environment. In contrast to many previous studies that link bigger cities with higher UHI intensity, we show that cities larger in area do not necessarily experience a stronger UHI effect. In addition, the results extend our knowledge by demonstrating the influence of urban 3D morphology on the UHI effect. This underlines the importance of inspecting cities as a whole from the 3D perspective. While urban 3D morphology is an aggregated feature of small-scale urban elements, the influence it has on the city-scale UHI intensity cannot simply be scaled up from that of its neighbourhood-scale components. The spatial composition and configuration of urban elements both need to be captured when quantifying urban 3D morphology, as nearby neighbourhoods also exert influence on each other. Our model serves as a useful UHI assessment tool for the quantitative comparison of urban intervention/development scenarios. It can support harnessing the capacity of UHI mitigation through optimizing urban morphology, with the potential of integrating climate change into heat mitigation strategies.
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporation of this phenomenon, which causes strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy in which I target the generalizing applicability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability of an earthquake to cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
The application of machine learning methods to datasets that only partially reveal characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the developed approaches to growing and increasingly complex data sets.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
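As background, the classic 2D slice-and-dice layout that these treemaps build upon can be sketched in a few lines; this is a minimal illustration, not the framework developed in the thesis. Each node's rectangle area is proportional to its weight, and children split the parent rectangle with cuts alternating between horizontal and vertical per depth level.

```python
# Minimal slice-and-dice treemap layout (illustrative sketch only).

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """node: (name, weight, children). Returns a list of (name, x, y, w, h)."""
    if out is None:
        out = []
    name, weight, children = node
    out.append((name, x, y, w, h))
    if children:
        total = sum(c[1] for c in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:  # even depth: vertical cuts along the x axis
                slice_and_dice(child, x + offset * w, y, w * frac, h, depth + 1, out)
            else:               # odd depth: horizontal cuts along the y axis
                slice_and_dice(child, x, y + offset * h, w, h * frac, depth + 1, out)
            offset += frac
    return out

# A tiny hierarchy: leaf "a" (weight 1) and subtree "b" (weight 3) in a unit square
tree = ("root", 4, [("a", 1, []), ("b", 3, [("b1", 1, []), ("b2", 2, [])])])
for name, x, y, w, h in slice_and_dice(tree, 0, 0, 1, 1):
    print(f"{name}: x={x:.2f} y={y:.2f} w={w:.2f} h={h:.2f}")
```

In a 3D-embedded treemap, each leaf rectangle would additionally be extruded along a height axis to carry further visual variables.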
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid-scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited-area mode (ICON-LAM) with a horizontal grid-scale of 3 km.
The aim of this thesis was the investigation of the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. With its default settings, ICON-LEM fails to represent the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The sea-ice scheme implemented in ICON does not include a snow layer on sea ice, which causes the sea-ice surface temperature to respond too slowly to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the sea-ice parameterization implemented in ICON was extended with an adapted heat capacity term.
The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed due to biases in the downwelling long-wave radiation and the lack of complex surface structures, like leads. The large eddy resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux for different weather conditions.
The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent interactions on small temporal and spatial scales and help to further develop parameterizations, also for application in regional and global models.
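The qualitative role of the adapted heat-capacity term can be illustrated with a zero-layer surface energy balance. This is a toy sketch, not the ICON implementation; the equation form and all parameter values below are assumptions chosen only to show the mechanism.

```python
def step_surface_temp(T_s, F_atm, h_ice, dt,
                      k_ice=2.2,     # ice heat conductivity, W/(m K)  (assumed)
                      T_bot=271.35,  # ocean freezing temperature, K
                      C_eff=2.0e4):  # effective areal heat capacity, J/(m^2 K)
    """One explicit Euler step of a zero-layer sea-ice energy balance:

        C_eff * dT_s/dt = F_atm + k_ice * (T_bot - T_s) / h_ice

    A smaller effective heat capacity C_eff makes the surface temperature
    T_s react faster to the net atmospheric flux F_atm, which is the
    qualitative effect of the adapted heat-capacity term."""
    F_cond = k_ice * (T_bot - T_s) / h_ice   # conductive flux through the ice
    return T_s + dt * (F_atm + F_cond) / C_eff
```

With no atmospheric forcing, T_s relaxes toward the ocean freezing point from below; under strong radiative cooling the conductive flux cannot keep up and the surface cools further.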
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changing operational paradigms, and requiring an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU’s evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
During the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. Because non-human proteins introduced into the human body provoke a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All such clinically approved protein-polymer conjugates contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method that addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a largely uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, some of which can impair the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner using enzymatic catalysis. Sortase A is used as the enzyme: it is a well-studied transpeptidase that catalyzes the intermolecular ligation of two peptides, a process commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits the product yield. Two previously reported methods to overcome this major limitation were tested with polymers, without using an excessive amount of one reactant.
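Why removing a by-product raises the yield of an equilibrium ligation follows from simple mass-action arithmetic. For an equimolar reaction A + B ⇌ C + D with equilibrium constant K, the conversion x satisfies K = x²/(1 − x)²; if a fraction f of the by-product D is sequestered (e.g. the cleaved peptide being bound and removed from the equilibrium), the condition becomes K = x·(1 − f)x/(1 − x)², so conversion approaches completion as f → 1. The sketch below is a generic textbook illustration, not a kinetic model of sortase A:

```python
import math

def conversion(K, f=0.0):
    """Equilibrium conversion x of A + B <=> C + D (equimolar start)
    when a fraction f of the by-product D is sequestered:

        K = x * (1 - f) * x / (1 - x)**2
    =>  x = s / (1 + s)   with   s = sqrt(K / (1 - f))
    """
    s = math.sqrt(K / (1.0 - f))
    return s / (1.0 + s)
```

For K = 1, conversion(1.0) gives 0.5; sequestering 90 % of D (f = 0.9) lifts the same reaction to roughly 0.76.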
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. A grafting-to approach was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein, as well as polymer-recognition sequence and nucleophile-polymer), all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) atom transfer radical polymerization (ATRP) and RAFT polymerization were attempted, but both failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not yield protein-polymer conjugates, most likely because the protein termini are not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, for the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their delicate mechanical stability. To improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was thus explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the copolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results shown by the amphiphilic terpolymers, and in order to further test the impact of hydrophobicity on both the antifouling properties and the mechanical stability of zwitterionic hydrogels, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yields in a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubilities in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinyl amides, is introduced within the scope of the present work. The monomers are synthesized in good yields via a multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers are obtained by free-radical polymerization and thoroughly characterized. Remarkably, the solubility tests show that the homopolymers are fully soluble in water, evidencing their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced in the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6) is synthesized. However, despite a priori promising copolymerization suitability, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Organizations are investing billions in innovation and agility initiatives to stay competitive in their increasingly uncertain business environments. Design Thinking, an innovation approach based on human-centered exploration, ideation, and experimentation, has gained increasing popularity. The market for Design Thinking, including software products and general services, is projected to reach 2,500 million US dollars by 2028. A dispersed set of positive outcomes has been attributed to Design Thinking. However, there is no clear understanding of what exactly comprises the impact of Design Thinking and how it is created. To support a billion-dollar market, it is essential to understand the value Design Thinking brings to organizations, not only to justify large investments but also to continuously improve the approach and its application.
Following a qualitative research approach combined with results from a systematic literature review, the results presented in this dissertation offer a structured understanding of Design Thinking impact. The results are structured along two main perspectives of impact: the individual and the organizational perspective. First, insights from qualitative data analysis demonstrate that measuring and assessing the impact of Design Thinking is currently one central challenge for Design Thinking practitioners in organizations. Second, the interview data revealed several effects Design Thinking has on individuals, demonstrating how Design Thinking can impact boundary management behaviors and enable employees to craft their jobs more actively.
Contributing to innovation management research, the work presented in this dissertation systematically explains the impact of Design Thinking, allowing other researchers to both locate and integrate their work better. The results of this research advance the theoretical rigor of Design Thinking impact research, offering multiple theoretical underpinnings that explain the variety of Design Thinking impact. Furthermore, this dissertation contains three specific propositions on how Design Thinking creates an impact: through integration, enablement, and engagement. Integration refers to how Design Thinking enables organizations by effectively combining things, such as fostering a balance between exploitation and exploration activities. Through Engagement, Design Thinking impacts organizations by involving users and other relevant stakeholders in their work. Moreover, Design Thinking creates impact through Enablement, making it possible for individuals to enact a specific behavior or experience certain states.
By synthesizing multiple theoretical streams into these three overarching themes, the results of this research can help bridge disciplinary boundaries, for example between business, psychology and design, and enhance future collaborative research. Practitioners benefit from the results as multiple desirable outcomes are detailed in this thesis, such as successful individual job crafting behaviors, which can be expected from practicing Design Thinking. This allows practitioners to enact more evidence-based decision-making concerning Design Thinking implementation. Overall, considering multiple levels of impact as well as a broad range of theoretical underpinnings are paramount to understanding and fostering Design Thinking impact.
Mantodea, commonly known as mantids, have captivated researchers owing to their enigmatic behavior and ecological significance. This order comprises a diverse array of predatory insects, boasting over 2,400 species globally and inhabiting a wide spectrum of ecosystems. In Iran, the mantid fauna displays remarkable diversity, yet numerous facets of this fauna remain poorly understood, with a significant dearth of systematic and ecological research. This substantial knowledge gap underscores the pressing need for a comprehensive study to advance our understanding of Mantodea in Iran and its neighboring regions.
The principal objective of this investigation was to delve into the ecology and phylogeny of Mantodea within these areas. To accomplish this, our research efforts concentrated on three distinct genera within Iranian Mantodea. These genera were selected due to their limited existing knowledge base and feasibility for in-depth study. Our comprehensive methodology encompassed a multifaceted approach, integrating morphological analysis, molecular techniques, and ecological observations.
Our research encompassed a comprehensive revision of the genus Holaptilon, resulting in the description of four previously unknown species. This extensive effort substantially advanced our understanding of the ecological roles played by Holaptilon and refined its systematic classification. Furthermore, our investigation into Nilomantis floweri expanded its known distribution range to include Iran. By conducting thorough biological assessments, genetic analyses, and ecological niche modeling, we obtained invaluable insights into distribution patterns and genetic diversity within this species. Additionally, our research provided a thorough comprehension of the life cycle, behaviors, and ecological niche modeling of Blepharopsis mendica, shedding new light on the distinctive characteristics of this mantid species. Moreover, we contributed essential knowledge about parasitoids that infect mantid ootheca, laying the foundation for future studies aimed at uncovering the intricate mechanisms governing ecological and evolutionary interactions between parasitoids and Mantodea.
This thesis presents a comprehensive exploration of the application of DNA origami nanofork antennas (DONAs) in the field of spectroscopy, with a particular focus on the structural analysis of Cytochrome C (CytC) at the single-molecule level. The research encapsulates the design, optimization, and application of DONAs in enhancing the sensitivity and specificity of Raman spectroscopy, thereby offering new insights into protein structures and interactions.
The initial phase of the study involved the meticulous optimization of DNA origami structures. This process was pivotal in developing nanoscale tools that could significantly enhance the capabilities of Raman spectroscopy. The optimized DNA origami nanoforks, in both dimer and aggregate forms, demonstrated an enhanced ability to detect and analyze molecular vibrations, contributing to a more nuanced understanding of protein dynamics.
A key aspect of this research was the comparative analysis between the dimer and aggregate forms of DONAs. This comparison revealed that while both configurations effectively identified oxidation and spin states of CytC, the aggregate form offered a broader range of detectable molecular states due to its prolonged signal emission and increased number of molecules. This extended duration of signal emission in the aggregates was attributed to the collective hotspot area, enhancing overall signal stability and sensitivity.
Furthermore, the study delved into the analysis of the Amide III band using the DONA system. Observations included a transient shift in the Amide III band's frequency, suggesting dynamic alterations in the secondary structure of CytC. These shifts, indicative of transitions between different protein structures, were crucial in understanding the protein’s functional mechanisms and interactions.
The research presented in this thesis not only contributes significantly to the field of spectroscopy but also illustrates the potential of interdisciplinary approaches in biosensing. The use of DNA origami-based systems in spectroscopy has opened new avenues for research, offering a detailed and comprehensive understanding of protein structures and interactions. The insights gained from this research are expected to have lasting implications in scientific fields ranging from drug development to the study of complex biochemical pathways. This thesis thus stands as a testament to the power of integrating nanotechnology, biochemistry, and spectroscopic techniques in addressing complex scientific questions.
Volcanic hydrothermal systems are an integral part of most volcanoes and typically involve a heat source, adequate fluid supply, and fracture or pore systems through which the fluids can circulate within the volcanic edifice. Associated with this are subtle but powerful processes that can significantly influence the evolution of volcanic activity or the stability of the near-surface volcanic system through mechanical weakening, permeability reduction, and sealing of the affected volcanic rock. These processes are well constrained for rock samples by laboratory analyses but are still difficult to extrapolate and evaluate at the scale of an entire volcano. Advances in unmanned aircraft systems (UAS), sensor technology, and photogrammetric processing routines now allow us to image volcanic surfaces at the centimeter scale and thus study volcanic hydrothermal systems in great detail. This thesis aims to explore the potential of UAS approaches for studying the structures, processes, and dynamics of volcanic hydrothermal systems, but also to develop methodological approaches to uncover secondary information hidden in the data, capable of indicating spatiotemporal dynamics or potentially critical developments associated with hydrothermal alteration. To accomplish this, the thesis describes the investigation of two near-surface volcanic hydrothermal systems, the El Tatio geyser field in Chile and the fumarole field of La Fossa di Vulcano (Italy), both of which are among the best-studied sites of their kind. Through image analysis as well as statistical and spatial analyses, we have been able to provide the most detailed structural images of both study sites to date, with new insights into the driving forces of such systems, but also revealing new potential controls, which are summarized in conceptual site-specific models.
Furthermore, the thesis explores methodological remote sensing approaches to detect, classify, and constrain hydrothermal alteration and surface degassing from UAS-derived data, evaluates them by mineralogical and chemical ground-truthing, and compares the alteration pattern with the present-day degassing activity. A significant contribution of the often neglected diffuse degassing to the total degassing activity is revealed, and secondary processes and dynamics associated with hydrothermal alteration that lead to potentially critical developments, such as surface sealing, are constrained. The results and methods provide new approaches for alteration research, for the monitoring of degassing and alteration effects, and for thermal monitoring of fumarole fields, with the potential to be incorporated into volcano monitoring routines.
As followers are becoming more educated and better connected, empowering leadership has gained traction in recent times as an alternative to traditional top-down models of leadership. Several scholars have investigated the relationship between empowering leadership and other variables in different contexts. As most previous studies have focused on the positive aspects of empowering leadership, research on its potential dark side is scarce. Furthermore, no previous study has examined whether and how the transfer of workload from followers to leaders can occur over time, which I proposed can lead to emotional exhaustion and work-family conflict among leaders. Therefore, I proposed that despite the positive outcomes of empowering leadership for both followers and leaders, it may also trigger negative outcomes capable of affecting the well-being of leaders. Drawing on Conservation of Resources (COR) theory, the Job Demands-Resources (JD-R) model, and the Too-Much-of-a-Good-Thing (TMGT) effect, I investigated this idea. Using follower workload as a moderator, I proposed that the relationship between empowering leadership and leader workload is positive when follower workload is high and negative when follower workload is low. In addition, I examined how empowering leadership interacts with follower workload to affect leader emotional exhaustion and work-family conflict, mediated by leader workload. I proposed that this interaction results in a negative relationship between empowering leadership and both outcomes when follower workload is low, and a positive relationship when it is high.
I tested these hypotheses using data from a three-wave time-lagged design field study with 65 leader-follower dyads consisting of civil servants from different administrative entities of India and Pakistan. The time lag between each study variable was four weeks. At Time 1 (T1), followers answered questions about demographic characteristics, virtual interaction with their leaders, their workload, and the extent to which their leaders practice empowering leadership. At the same time, leaders answered questions about demographic characteristics and their job satisfaction. At Time 2 (T2), leaders provided data on their own workload. Finally, at Time 3 (T3), leaders rated their emotional exhaustion and work-family conflict. A moderated mediation model was tested using PROCESS Model 7 in R. The findings of the study reveal that a significant increase in follower workload through empowering leadership will also increase the leader's workload. Consequently, this increased leader workload leads to a crossover of this interactive effect onto the level of emotional exhaustion and work-family conflict experienced by leaders.
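The logic of this moderated mediation (PROCESS Model 7: the X → M path is moderated by W, followed by an unmoderated M → Y path) can be sketched with two OLS regressions; the conditional indirect effect at moderator value w is (a1 + a3·w)·b1. The snippet below uses synthetic data with assumed coefficients, purely to illustrate the computation; it is not the study's data, nor the PROCESS macro itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)   # empowering leadership (follower-rated, T1)
W = rng.normal(size=n)   # follower workload, the moderator (T1)
# Assumed population model for the illustration:
M = 0.2 * X + 0.3 * X * W + 0.1 * W + rng.normal(size=n)  # leader workload (T2)
Y = 0.5 * M + rng.normal(size=n)                          # emotional exhaustion (T3)

def ols(y, *cols):
    """Least-squares fit with intercept; returns the coefficient vector."""
    A = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Stage 1 (moderated path): M ~ X + W + X*W
_, a1, a2, a3 = ols(M, X, W, X * W)
# Stage 2: Y ~ M + X  (M -> Y, controlling for the direct effect of X)
_, b1, _ = ols(Y, M, X)

def indirect(w):
    """Conditional indirect effect of X on Y through M at moderator value w."""
    return (a1 + a3 * w) * b1
```

Evaluating `indirect(-1.0)` and `indirect(1.0)` probes ±1 SD of follower workload; the index of moderated mediation is a3·b1.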
This research offers various contributions to the leadership literature. While empowering leadership has been commonly associated with positive outcomes, my study reveals that it can also lead to negative outcomes. In addition, it shifts the focus of existing research from the effect of empowering leadership on followers to the consequences that it might have for leaders themselves. Overall, my research underscores the need for leaders to consider the potential counterproductive effects of empowering leadership and tailor their approach accordingly.
Laser-induced switching offers an attractive possibility to manipulate small magnetic domains for prospective memory and logic devices on ultrashort time scales. Moreover, optical control of magnetization without high applied magnetic fields allows magnetic domains to be manipulated individually and locally, without expensive heat dissipation. One of the major challenges for developing novel optically controlled magnetic memory and logic devices is the reliable formation and annihilation of non-volatile magnetic domains that can serve as memory bits under ambient conditions. Magnetic skyrmions, topologically nontrivial spin textures, have been studied intensively since their discovery due to their stability and scalability in potential spintronic devices. However, skyrmion formation and, especially, annihilation processes are still not completely understood, and further investigation of these mechanisms is needed. The aim of this thesis is to contribute to a better understanding of the physical processes behind the optical control of magnetism in thin films, with the goal of optimizing material parameters and methods for their potential use in next-generation memory and logic devices.
The first part of the thesis is dedicated to the investigation of all-optical helicity-dependent switching (AO-HDS) as a method for magnetization manipulation. AO-HDS in Co/Pt multilayers and CoFeB alloys, with and without the presence of the Dzyaloshinskii-Moriya interaction (DMI), a type of exchange interaction, has been investigated by magnetic imaging using photo-emission electron microscopy (PEEM) in combination with X-ray magnetic circular dichroism (XMCD). The results show that in a narrow range of the laser fluence, circularly polarized laser light induces a drag on domain walls. This enables a local deterministic transformation of the magnetic domain pattern from stripes to bubbles in out-of-plane magnetized Co/Pt multilayers, controlled only by the helicity of ultrashort laser pulses. The temperature and characteristic fields at which the stripe-bubble transformation occurs have been calculated using the theory for isolated magnetic bubbles, with the experimentally determined average size of the stripe domains and the magnetic layer thickness as parameters.
The second part of the work aims at the purely optical formation and annihilation of magnetic skyrmions by a single laser pulse. The presence of a skyrmion phase in the investigated CoFeB alloys was first confirmed using a Kerr microscope. Then the helicity-dependent skyrmion manipulation was studied using AO-HDS at different laser fluences. It was found that the formation or annihilation of individual skyrmions using AO-HDS is possible, but not always reliable, as fluctuations in the laser fluence or position can easily overwrite the helicity-dependent effect of AO-HDS. However, the experimental results and magnetic simulations showed that the threshold values of the laser fluence for the formation and annihilation of skyrmions differ: a higher fluence is required for skyrmion formation, while existing skyrmions can be annihilated by pulses with a slightly lower fluence. This provides a further option for controlling the formation and annihilation of skyrmions via the laser fluence. Micromagnetic simulations provide additional insights into the formation and annihilation mechanisms.
The ability to manipulate the magnetic state of individual skyrmions is of fundamental importance for magnetic data storage technologies. Our results show for the first time that the optical formation and annihilation of skyrmions is possible without changing the external field. These results enable further investigations to optimise the magnetic layer to maximise the energy gap between the formation and annihilation barrier. As a result, unwanted switching due to small laser fluctuations can be avoided and fully deterministic optical switching can be achieved.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. These companies must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep reinforcement learning based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. Furthermore, the analysis highlights the rare implementation of multi-agent systems and semi-heterarchical systems in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep reinforcement learning based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimization complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research efforts and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
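The hyper-heuristic idea, a learning agent that selects among dispatching rules rather than scheduling jobs directly, can be illustrated with a tabular Q-learning toy. This is a stand-in for the deep network described in the thesis; the single-machine environment, the coarse state encoding, and the tardiness reward are all simplifying assumptions.

```python
import random

# Low-level dispatching rules the agent chooses between; each job is a
# (processing_time, due_date) tuple waiting in the queue.
RULES = [
    ("SPT",  lambda q: min(q, key=lambda j: j[0])),  # shortest processing time
    ("EDD",  lambda q: min(q, key=lambda j: j[1])),  # earliest due date
    ("FIFO", lambda q: q[0]),                        # first in, first out
]

def run_episode(jobs, choose, learn=None):
    """Process all jobs on one machine; at each decision point the policy
    picks a RULE, not a job. Returns total tardiness (lower is better)."""
    queue, t, tardiness = list(jobs), 0, 0.0
    while queue:
        s = min(len(queue), 5)            # coarse state: capped queue length
        a = choose(s)
        job = RULES[a][1](queue)
        queue.remove(job)
        t += job[0]
        r = -max(0.0, t - job[1])         # reward: negative incremental tardiness
        tardiness += -r
        if learn is not None:
            learn(s, a, r, min(len(queue), 5))
    return tardiness

def train(n_episodes=3000, alpha=0.1, gamma=0.95, eps=0.2, seed=1):
    """Epsilon-greedy tabular Q-learning over the rule set."""
    rng = random.Random(seed)
    Q = [[0.0] * len(RULES) for _ in range(6)]
    def choose(s):
        if rng.random() < eps:
            return rng.randrange(len(RULES))
        return max(range(len(RULES)), key=lambda a: Q[s][a])
    def learn(s, a, r, s2):
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    for _ in range(n_episodes):
        jobs = [(rng.randint(1, 9), rng.randint(5, 40)) for _ in range(6)]
        run_episode(jobs, choose, learn)
    return Q
```

Evaluated greedily (argmax over the learned Q-values), the rule-selection policy can then be benchmarked on fresh job sets against a fixed rule such as FIFO, mirroring the benchmark comparisons in the thesis on a miniature scale.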
Massive stars (M_ini > 8 M_sol) are key feedback agents within galaxies, as they shape their surroundings via their powerful winds, ionizing radiation, and explosive supernovae. Most massive stars are born in binary systems, where interactions with their companions significantly alter their evolution and the feedback they deposit in their host galaxy. Understanding binary evolution, particularly in low-metallicity environments that serve as proxies for the Early Universe, is crucial for interpreting the rest-frame ultraviolet spectra observed in high-redshift galaxies by telescopes like Hubble and James Webb.
This thesis aims to tackle this challenge by investigating in detail massive binaries within the low-metallicity environment of the Small Magellanic Cloud (SMC) galaxy. From ultraviolet and multi-epoch optical spectroscopic data, we uncovered post-interaction binaries. To comprehensively characterize these binary systems, their stellar winds, and orbital parameters, we use a multifaceted approach. The Potsdam Wolf-Rayet stellar atmosphere code is employed to obtain the stellar and wind parameters of the stars. Additionally, we perform consistent light and radial velocity fitting with the Physics of Eclipsing Binaries software, allowing for the independent determination of orbital parameters and component masses. Finally, we utilize these results to challenge the standard picture of stellar evolution and improve our understanding of low-metallicity stellar populations by calculating binary evolution models with the Modules for Experiments in Stellar Astrophysics code.
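The independent mass determination for a double-lined spectroscopic binary ultimately rests on Kepler's laws: the orbital period and the two radial-velocity semi-amplitudes fix the component masses up to the inclination. The sketch below assumes a circular orbit and is only a back-of-the-envelope illustration, not the actual PHOEBE modelling:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
DAY = 86400.0        # seconds per day

def component_masses(period_days, k1_kms, k2_kms, incl_deg):
    """Component masses (in solar masses) of a double-lined spectroscopic
    binary from period, RV semi-amplitudes K1/K2, and inclination.
    Circular orbit assumed: M1 sin^3 i = P (K1+K2)^2 K2 / (2 pi G)."""
    p = period_days * DAY
    k1, k2 = k1_kms * 1e3, k2_kms * 1e3      # km/s -> m/s
    sin3i = math.sin(math.radians(incl_deg)) ** 3
    m1 = p * (k1 + k2) ** 2 * k2 / (2 * math.pi * G) / sin3i
    m2 = p * (k1 + k2) ** 2 * k1 / (2 * math.pi * G) / sin3i
    return m1 / M_SUN, m2 / M_SUN

# Hypothetical eclipsing system: P = 3 d, K1 = 100 km/s, K2 = 200 km/s, i = 90 deg.
m1, m2 = component_masses(3.0, 100.0, 200.0, 90.0)
```

The mass ratio follows directly from the semi-amplitudes (q = M2/M1 = K1/K2), which is why combining light-curve inclinations with RV orbits yields model-independent masses.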
We discovered the first four O-type post-interaction binaries in the SMC (Chapters 2, 5, and 6). Their primary stars have temperatures similar to other OB stars and reside far from the helium zero-age main sequence, challenging the traditional view of binary evolution. Our stellar evolution models suggest this may be due to enhanced mixing after core-hydrogen burning. Furthermore, we discovered the most massive binary system known so far to be undergoing mass transfer (Chapter 3), offering a unique opportunity to test mass-transfer efficiency in extreme conditions. Our binary evolution calculations revealed unexpected evolutionary pathways for accreting stars in binaries, potentially providing the missing link to understanding the observed Wolf-Rayet population within the SMC (Chapter 4). The results presented in this thesis unveiled the properties of massive binaries at low metallicity, which challenge both the way the spectra of high-redshift galaxies are currently analyzed and our understanding of massive-star feedback within galaxies.
Mindful Eating
(2024)
Maladaptive eating behaviors such as emotional eating, external eating, and loss-of-control eating are widespread in the general population. Moreover, they are associated with adverse health outcomes and well known for their role in the development and maintenance of eating disorders and obesity (i.e., eating and weight disorders). Eating and weight disorders place a considerable burden on individuals and entail high costs for society in general. At the same time, corresponding treatments yield poor outcomes. Thus, innovative concepts are needed to improve prevention and treatment of these conditions.
The Buddhist concept of mindfulness (i.e., paying attention to the present moment without judgement) and its delivery via mindfulness-based intervention programs (MBPs) have gained wide popularity in the area of maladaptive eating behaviors and associated eating and weight disorders over the last two decades. Though previous findings on their effects seem promising, the current assessment of mindfulness and its mere application via multi-component MBPs make it difficult to draw conclusions on the extent to which mindfulness-immanent qualities actually account for the effects (e.g., the modification of maladaptive eating behaviors). However, this knowledge is pivotal for interpreting previous effects correctly and for avoiding harm in particularly vulnerable groups such as those with eating and weight disorders.
To address these shortcomings, recent research has focused on the context-specific approach of mindful eating (ME) to investigate underlying mechanisms of action. ME can be considered a subdomain of generic mindfulness, describing it specifically in relation to the process of eating and associated feelings, thoughts, and motives, thus including a variety of different attitudes and behaviors. However, there is no universal operationalization, and the current assessment of ME suffers from several limitations. Specifically, current measurement instruments are not suited for a comprehensive assessment of the multiple facets of the construct that are currently discussed as important in the literature. This in turn hampers comparisons of different ME facets that would make it possible to evaluate their particular effect on maladaptive eating behaviors. This knowledge is needed to tailor prevention and treatment of associated eating and weight disorders properly and to explore potential underlying mechanisms of action, which have so far been proposed mainly on theoretical grounds.
The dissertation at hand aims to provide evidence-based fundamental research that contributes to our understanding of how mindfulness, more specifically its context-specific form of ME, impacts maladaptive eating behaviors and, consequently, how it could be used appropriately to enrich the current prevention and treatment approaches for eating and weight disorders in the future.
Specifically, in this thesis, three scientific manuscripts applying several qualitative and quantitative techniques in four sequential studies are presented. These manuscripts were published in or submitted to three scientific peer-reviewed journals to shed light on the following questions:
I. How can ME be measured comprehensively and in a reliable and valid way to advance the understanding of how mindfulness works in the context of eating?
II. Does the context-specific construct of ME have an advantage over the generic concept in advancing the understanding of how mindfulness is related to maladaptive eating behaviors?
III. Which ME facets are particularly useful in explaining maladaptive eating behaviors?
IV. Does training a particular ME facet result in changes in maladaptive eating behaviors?
To answer the first research question (Paper 1), a multi-method approach using three subsequent studies was applied to develop and validate a comprehensive self-report instrument to assess the multidimensional construct of ME - the Mindful Eating Inventory (MEI). Study 1 aimed to create an initial version of the MEI by following a three-step approach: First, a comprehensive item pool was compiled by including selected and adapted items of the existing ME questionnaires and supplementing them with items derived from an extensive literature review. Second, the preliminary item pool was complemented and checked for content validity by experts in the field of eating behavior and/or mindfulness (N = 15). Third, the item pool was further refined through qualitative methods: Three focus groups comprising laypersons (N = 16) were used as a check for applicability. Subsequently, think-aloud protocols (N = 10) served as a last check of comprehensibility and elimination of ambiguities.
The resulting initial MEI version was tested in Study 2 in an online convenience sample (N = 828) to explore its factor structure using exploratory factor analysis (EFA). Results were used to shorten the questionnaire in accordance with qualitative and quantitative criteria, yielding the final MEI version, which encompasses 30 items. These items were assigned to seven ME facets: (1) ‘Accepting and Non-attached Attitude towards one’s own eating experience’ (ANA), (2) ‘Awareness of Senses while Eating’ (ASE), (3) ‘Eating in Response to awareness of Fullness’ (ERF), (4) ‘Awareness of eating Triggers and Motives’ (ATM), (5) ‘Interconnectedness’ (CON), (6) ‘Non-Reactive Stance’ (NRS), and (7) ‘Focused Attention on Eating’ (FAE).
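The factor-retention step in such an EFA can be illustrated with the Kaiser criterion: retain as many factors as there are eigenvalues of the item correlation matrix greater than 1. The data below are simulated with two latent factors and six items purely for illustration; the actual MEI studies likely combined several retention criteria:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate Likert-style responses with two latent factors loading on
# items 0-2 and items 3-5, respectively (invented data, not the MEI sample).
n = 500
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack([
    f1 + 0.5 * rng.normal(size=n),
    f1 + 0.5 * rng.normal(size=n),
    f1 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),
])

# Eigenvalues of the item correlation matrix; the Kaiser criterion
# keeps factors whose eigenvalue exceeds 1.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # sorted descending
n_factors = int(np.sum(eigvals > 1.0))
```

Here the two planted factors produce two eigenvalues well above 1 while the remaining four fall well below it, so the criterion recovers the simulated structure.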
Study 3 sought to confirm the identified facets and the corresponding factor structure in an independent online convenience sample (N = 612) using confirmatory factor analysis (CFA). The study provided further evidence for the assumed multidimensionality of ME (the correlated seven-factor model was shown to be superior to a single-factor model). Psychometric properties of the MEI regarding factorial validity, internal consistency, retest reliability, and criterion validity (assessed using a wide range of eating-specific and general health-related outcomes) showed the inventory to be suitable for a comprehensive, reliable, and valid assessment of ME. These findings were complemented by demonstrating measurement invariance of the MEI regarding gender. In accordance with the factor structure of the MEI, Paper 1 offers an empirically derived definition of ME, overcoming ambiguities and problems of previous attempts at defining the construct.
To answer the second and third research questions (Paper 2), a subsample of Study 2 from the MEI validation studies (N = 292) was analyzed. Incremental validity of ME beyond generic mindfulness was shown using hierarchical regression models concerning the outcome variables of maladaptive eating behaviors (emotional eating and uncontrolled eating) and nutrition behaviors (consumption of energy-dense food). Multiple regression analyses were applied to investigate the impact of the seven different ME facets (identified in Paper 1) on the same outcome variables. The following ME facets significantly contributed to explaining variance in maladaptive eating and nutrition behaviors: Accepting and Non-attached Attitude towards one's own eating experience (ANA), Eating in Response to awareness of Fullness (ERF), Awareness of eating Triggers and Motives (ATM), and a Non-Reactive Stance (NRS, i.e., an observing, non-impulsive attitude towards eating triggers). Results suggest that these ME facets are promising variables to consider when a) investigating potential underlying mechanisms of mindfulness and MBPs in the context of eating and b) addressing maladaptive eating behaviors in general as well as in the prevention and treatment of eating and weight disorders.
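Incremental validity via hierarchical regression amounts to comparing explained variance before and after adding the context-specific predictors. The following minimal sketch uses invented synthetic data (not the Study 2 sample) to show the R-squared-change logic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Synthetic illustration: a generic mindfulness score and a context-specific
# ME facet, where the outcome truly depends on the facet.
generic = rng.normal(size=n)
facet = 0.5 * generic + rng.normal(size=n)    # facet correlates with generic
outcome = 0.7 * facet + rng.normal(size=n)    # e.g., emotional eating score

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: generic mindfulness only; step 2: add the ME facet.
r2_step1 = r_squared(generic[:, None], outcome)
r2_step2 = r_squared(np.column_stack([generic, facet]), outcome)
delta_r2 = r2_step2 - r2_step1                # incremental validity
```

A positive, non-trivial delta_r2 is what "incremental validity beyond generic mindfulness" cashes out to statistically (the paper would additionally test this change for significance).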
To answer the fourth research question (Paper 3), a training based on an isolated exercise (‘9 Hunger’) targeting the previously identified ME facet ATM was designed to explore its particular association with changes in maladaptive eating behaviors and thus to preliminarily explore one possible mechanism of action. The online study was realized using a randomized controlled trial (RCT) design. Latent Change Scores (LCS) across three measurement points (before the training, directly after the training, and three months later) were compared between the intervention group (n = 211) and a waitlist control group (n = 188). The training showed short- and longer-term effects on maladaptive eating behaviors (emotional eating, external eating, loss-of-control eating) and associated outcomes (intuitive eating, ME, self-compassion, well-being). Findings serve as preliminary empirical evidence that MBPs might influence maladaptive eating behaviors through an enhanced non-judgmental awareness of, and differentiation between, eating motives and triggers (i.e., ATM). This mechanism of action had previously only been hypothesized from a theoretical perspective. Since maladaptive eating behaviors are associated with eating and weight disorders, the findings can enhance our understanding of the general effects of MBPs on these conditions.
The integration of the different findings leads to several suggestions as to how ME might enrich future interventions on maladaptive eating behaviors, whether to improve health in general or the prevention and treatment of eating and weight disorders in particular. Strengths of the thesis (e.g., a deliberately specific methodology, a variety of designs and methods, large numbers of participants) are emphasized. The main limitations, particularly regarding sample characteristics (e.g., higher levels of formal education, fewer males, self-selection), are discussed to arrive at an outline for future studies (e.g., including multi-modal, multi-method approaches, clinical eating disorder samples, and youth samples) to improve upcoming research on ME and the underlying mechanisms of action of MBPs for maladaptive eating behaviors and associated eating and weight disorders.
This thesis enriches current research on mindfulness in the context of eating by providing fundamental research on the core of the ME construct. Thereby it delivers a reliable and valid instrument to comprehensively assess ME in future studies as well as an operational definition of the construct. Findings on ME facet level might inform upcoming research and practice on how to address maladaptive eating behaviors appropriately in interventions. The ME skill ‘Awareness of eating Triggers and Motives (ATM)’ as one particular mechanism of action should be further investigated in representative community and specific clinical samples to examine the validity of the results in these groups and to justify an application of the concept to the general population as well as to subgroups with eating and weight disorders in particular.
In conclusion, findings of the current thesis can be used to set future research on mindfulness, more specifically ME, and its underlying mechanism in the context of eating on a more evidence-based footing. This knowledge can inform upcoming prevention and treatment to tailor MBPs on maladaptive eating behaviors and associated eating and weight disorders appropriately.
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency. This required consistent age calculations across all lakes and proxies within a single age-depth modeling workflow. Recognizing the broader implications and potential pitfalls of this step, we developed the LANDO approach ("Linked Age and Depth Modelling"). LANDO is an innovative integration of multiple age-depth modeling packages into a singular, cohesive platform (Jupyter Notebook). Beyond its ability to aggregate results from five established age-depth modeling packages, LANDO uniquely empowers users to filter out implausible model outcomes using robust geoscientific data. Our method is not only novel but also significantly enhances the accuracy and reliability of lake analyses.
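The conversion at the heart of this step is simple in principle: an age-depth model maps each sample's depth in the core to an age, so proxies from different lakes share a common timescale. Below is a minimal piecewise-linear sketch with invented tie points; LANDO itself aggregates full Bayesian age-depth models from several packages (such as Bacon or clam) rather than interpolating linearly:

```python
import numpy as np

# Dated horizons of a hypothetical sediment core: depth (cm) -> age (yr BP).
# (Invented values for illustration only.)
dated_depths = np.array([0.0, 50.0, 120.0, 200.0])
dated_ages = np.array([0.0, 2500.0, 8000.0, 15000.0])

def depth_to_age(depths_cm):
    """Piecewise-linear age-depth model: convert proxy sample depths to ages,
    putting proxies from spatially separated lakes on a common time axis."""
    return np.interp(depths_cm, dated_depths, dated_ages)

# Proxy measurements taken at arbitrary depths become age-indexed values.
sample_depths = np.array([25.0, 85.0, 160.0])
sample_ages = depth_to_age(sample_depths)
```

Once every proxy series is age-indexed in this way, records from different cores can be binned onto a shared timeline and compared numerically, which is what the database-driven analyses in this thesis rely on.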
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, although the highest temperatures throughout our observation period do not coincide with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Considering the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding our dataset might offer even greater assurance in our conclusions.
This thesis explores word order variability in verb-final languages. Verb-final languages have a reputation for a high amount of word order variability. However, that reputation amounts to an urban myth due to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth in that many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role, such as topic or contrastive focus. That word order variability goes hand in hand with placing focussed phrases further to the right in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages, Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth: the fourth Uralic language, South Sámi, is comparably rigid and does not feature this particular kind of word order variability. Further such comparably rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity next to other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but those are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the factor governing postverbal word order variability remains to be identified.
The evaluation of process-oriented cognitive theories through time-ordered observations is crucial for the advancement of cognitive science. The findings presented herein integrate insights from research on eye-movement control and sentence comprehension during reading, addressing challenges in modeling time-ordered data, statistical inference, and interindividual variability. Using kernel density estimation and a pseudo-marginal likelihood for fixation durations and locations, a likelihood implementation of the SWIFT model of eye-movement control during reading (Engbert et al., Psychological Review, 112, 2005, pp. 777–813) is proposed. Within the broader framework of data assimilation, Bayesian parameter inference with adaptive Markov Chain Monte Carlo techniques is facilitated for reliable model fitting. Across the different studies, this framework has been shown to enable reliable parameter recovery from simulated data and prediction of experimental summary statistics. Despite its complexity, SWIFT can be fitted within a principled Bayesian workflow, capturing interindividual differences and modeling experimental effects on reading across different geometrical alterations of text. Based on these advancements, the integrated dynamical model SEAM is proposed, which combines eye-movement control, a traditionally psychological research area, and post-lexical language processing in the form of cue-based memory retrieval (Lewis & Vasishth, Cognitive Science, 29, 2005, pp. 375–419), typically the purview of psycholinguistics. This proof-of-concept integration marks a significant step forward in modeling natural language comprehension during reading and suggests that the presented methodology can be useful to develop complex cognitive dynamical models that integrate processes at levels of perception, higher cognition, and (oculo-)motor control.
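The likelihood machinery can be caricatured in a few lines: simulate durations from the model, turn the simulations into a density via kernel density estimation, evaluate that density at the observed data, and drive a random-walk Metropolis sampler with the resulting simulation-based likelihood. The exponential "model" below merely stands in for SWIFT, and this sketch drastically simplifies the actual pseudo-marginal scheme:

```python
import numpy as np

rng = np.random.default_rng(7)

# "Observed" fixation durations (ms), generated here with true mean 200 ms.
observed = rng.exponential(scale=200.0, size=200)

def kde_loglik(samples, data, bw=20.0):
    """Gaussian-kernel density estimate built from model simulations,
    evaluated at the observed data: a simulation-based log-likelihood."""
    diff = (data[:, None] - samples[None, :]) / bw
    dens = np.exp(-0.5 * diff**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))
    return np.log(np.maximum(dens, 1e-300)).sum()

def metropolis(n_iter=400, n_sim=400, step=15.0):
    """Random-walk Metropolis over the model's mean-duration parameter.
    The likelihood estimate for the current state is reused, as in
    pseudo-marginal MCMC (flat prior on theta > 0)."""
    theta, chain = 150.0, []
    ll = kde_loglik(rng.exponential(theta, n_sim), observed)
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        if prop > 0:
            ll_prop = kde_loglik(rng.exponential(prop, n_sim), observed)
            if np.log(rng.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis()
posterior_mean = chain[len(chain) // 2:].mean()   # discard burn-in half
```

Even in this toy setting the chain concentrates near the data-generating mean, which is the essence of the parameter-recovery checks reported for the SWIFT likelihood.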
These findings collectively advance process-oriented cognitive modeling and highlight the importance of Bayesian inference, individual differences, and interdisciplinary integration for a holistic understanding of reading processes. Implications for theory and methodology, including proposals for model comparison and hierarchical parameter inference, are briefly discussed.
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle, influencing numerous hydrological processes including hydrometeorological extremes. Soil moisture influences flood generation processes, and during droughts, when precipitation is absent, it provides plants with transpirable water, thereby sustaining plant growth and survival in agriculture and natural ecosystems.
Soil moisture stored in deeper soil layers (e.g., below 100 cm) is of particular importance for providing plant-transpirable water during dry periods. Because these layers are not directly connected to the atmosphere and lie below the soil layers with the highest root densities, their water is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national adaptation strategies to a changing climate. Yet, soil moisture is highly variable in time and space, which challenges its monitoring at spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows for the estimation of soil moisture at relevant spatio-temporal scales of several hectares at a high, subdaily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because a neutron has nearly the same mass as a hydrogen nucleus, it loses kinetic energy upon collision and is subsequently absorbed once it reaches low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Assuming that, in most terrestrial ecosystems, the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with observed neutron intensities.
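This inverse relationship is commonly written with the shape parameters of the standard transfer function of Desilets et al. (2010). The parameter values below are the widely used literature ones, while the reference count rate N0 must be calibrated for each site; treat this as a textbook sketch rather than the corrections and transfer functions developed in the thesis:

```python
# Standard neutron-to-soil-moisture transfer function (Desilets et al., 2010).
# Shape parameters as commonly used in the CRNS literature:
A0, A1, A2 = 0.0808, 0.372, 0.115

def soil_moisture(n_counts, n0):
    """Gravimetric soil moisture (g water / g dry soil) from the epithermal
    neutron count rate; n0 is the site-specific count rate over dry soil."""
    return A0 / (n_counts / n0 - A1) - A2

def neutron_counts(theta_g, n0):
    """Inverse form: expected count rate for a given soil moisture."""
    return n0 * (A0 / (theta_g + A2) + A1)

# Fewer detected neutrons imply more hydrogen, i.e. wetter soil:
theta_wet = soil_moisture(800.0, 1000.0)   # N/N0 = 0.8
theta_dry = soil_moisture(900.0, 1000.0)   # N/N0 = 0.9
```

The hyperbolic shape is what makes CRNS most sensitive under dry conditions: a given change in count rate corresponds to a much larger moisture change when the soil is wet.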
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity, on observed epithermal and thermal neutron signals and their correction. Recently introduced advanced neutron-to-soil-moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but the potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess the impact of heterogeneous site conditions on derived soil moisture estimates and to correct for it. Despite its capability of measuring representative averages of soil moisture at the field scale, CRNS lacks integration depth below the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers as well, increasing the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
By addressing these challenges, this thesis helps close knowledge gaps and answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil-moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and through in situ estimates using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet unexplored applications of CRNS but also opens a new field of CRNS research. Consequently, this thesis advances the methodological framework of CRNS for above-ground and downhole applications. Although further research is needed to fully exploit the potential of CRNS, this thesis contributes to current hydrological research and, not least, to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes the (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose different models which address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budgets, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we used the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase the performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
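The joint selection in contribution (1) can be pictured with a toy exhaustive search: each table chunk picks one tuning option, the workload cost is minimized, and total memory must stay within budget. The thesis formulates this as a linear program with far richer, dependency-aware cost models; all options and numbers below are invented for illustration:

```python
from itertools import product

# Per-chunk tuning options: (name, memory_cost, per-scan cost factor).
# Invented values; indexing is fast but memory-hungry, run-length encoding
# is compact but slower to scan.
OPTIONS = [
    ("uncompressed", 8, 1.0),
    ("dictionary", 4, 1.4),
    ("run-length", 2, 2.0),
    ("indexed", 10, 0.3),
]

# How often the workload scans each of three chunks, and the memory budget.
ACCESS_FREQ = [100, 10, 1]
MEMORY_BUDGET = 18

def best_configuration():
    """Exhaustive version of the joint selection an LP solver performs:
    minimize total workload cost subject to the memory budget."""
    best, best_cost = None, float("inf")
    for combo in product(range(len(OPTIONS)), repeat=len(ACCESS_FREQ)):
        mem = sum(OPTIONS[i][1] for i in combo)
        if mem > MEMORY_BUDGET:
            continue  # infeasible under the memory budget
        cost = sum(f * OPTIONS[i][2] for f, i in zip(ACCESS_FREQ, combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

config, cost = best_configuration()
```

As expected, the optimum spends the memory budget where it pays off: the hot chunk gets the index while the cold chunks are compressed. An LP formulation scales this reasoning to thousands of chunks and interacting decisions, where exhaustive search is infeasible.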
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and adults. Coarticulation refers to the mismatch between the abstract phonological units and their seemingly commingled realization in continuous speech. Because coarticulation is a process at the intersection of phonology and phonetics, addressing its changes across childhood allows for insights into both speech motor and phonological development. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children’s coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One of the reasons for this lack is the difficulty in articulatory data acquisition in a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the hitherto largest corpus of articulatory data from children using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range and a thoroughly controlled set of pseudowords allowed for statistically powerful investigations of a process known as variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured in the horizontal position of the highest point of the tongue dorsum. Based on three studies on a) anticipatory coarticulation towards the left side of the utterance, b) carryover coarticulation towards the right side of the utterance, and c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I deduce the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.
This study focuses on William Faulkner, whose works explore the demise of the slavery-based Old South during the Civil War in a highly experimental narrative style. Central to this investigation is the analysis of the temporal dimensions of both individual and collective guilt, thus offering a new approach to the often-discussed problem of Faulkner’s portrayal of social decay. The thesis examines how Faulkner re-narrates the legacy of the Old South as a guilt narrative and argues that Faulkner uses guilt in order to corroborate his concept of time and the idea of the continuity of the past. The focus of the analysis is on three of Faulkner’s arguably most important novels: The Sound and the Fury, Absalom, Absalom!, and Go Down, Moses. Each of these novels features a main character deeply overwhelmed by the crimes of the past, whether private, familial, or societal. As a result, guilt is explored both from a domestic as well as a social perspective. In order to show how Faulkner blends past and present by means of guilt, this work examines several methods and motifs borrowed from different fields and genres with which Faulkner narratively negotiates guilt. These include religious notions of original sin, the motif of the ancestral curse prevalent in the Southern Gothic genre, and the psychological concept of trauma. Each of these motifs emphasizes the temporal dimensions of guilt, which are the core of this study, and makes clear that guilt in Faulkner’s work is primarily to be understood as a temporal rather than a moral problem.
Large parts of the Earth’s interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth’s physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focusses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), forming a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding, the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe-Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase significantly widens up to ~4.7 eV in the mixed spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed spin region. From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend to a more ionic nature of the Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into the future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. This is partly due to the scarcity of measurement time series long enough to, e.g., infer trends, and partly because it is difficult – if not impossible – to develop process-based models, due to the complexity and multitude of processes involved in high alpine sediment dynamics. Therefore, knowledge has so far been confined to conceptual models (which do not facilitate deriving concrete timings or magnitudes for individual catchments) or qualitative estimates (‘higher export in warmer years’) that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained in popularity for modeling sediment dynamics, since their black-box nature suits the problem at hand, i.e. relatively well-understood input and output data, linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.) were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third studies focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is the inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
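The quantile-forest idea underlying this work can be sketched compactly. The snippet below is a simplification, not the thesis's actual model: it approximates quantile regression forests (Meinshausen, 2006) by taking quantiles over per-tree predictions rather than using the original leaf-weighting scheme, and the predictors and target are synthetic stand-ins for discharge, precipitation, temperature and sediment concentration.

```python
# Minimal sketch of quantile prediction with a random forest, in the
# spirit of quantile regression forests. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# synthetic predictors: e.g. discharge, precipitation, air temperature
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def predict_quantiles(forest, X_new, q=(0.05, 0.5, 0.95)):
    """Quantiles of the per-tree predictions for each new sample
    (a crude approximation of the full QRF leaf-distribution)."""
    per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])
    return np.quantile(per_tree, q, axis=0)  # shape: (len(q), n_samples)

lo, med, hi = predict_quantiles(forest, X[:5])
```

Predicting quantile intervals rather than point values is what makes the approach attractive for skewed, threshold-dominated quantities such as suspended sediment concentration.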
Additive manufacturing (AM) processes enable the production of metal structures with exceptional design freedom, of which laser powder bed fusion (PBF-LB) is one of the most common. In this process, a laser melts a bed of loose feedstock powder particles layer-by-layer to build a structure with the desired geometry. During fabrication, the repeated melting and rapid, directional solidification create large temperature gradients that generate large thermal stress. This thermal stress can itself lead to cracking or delamination during fabrication. More often, large residual stresses remain in the final part as a footprint of the thermal stress. This residual stress can cause premature distortion or even failure of the part in service. Hence, knowledge of the residual stress field is critical for both process optimization and structural integrity.
Diffraction-based techniques allow the non-destructive characterization of the residual stress fields. However, such methods require a good knowledge of the material of interest, as certain assumptions must be made to accurately determine residual stress. First, the measured lattice plane spacings must be converted to lattice strains with the knowledge of a strain-free material state. Second, the measured lattice strains must be related to the macroscopic stress using Hooke's law, which requires knowledge of the stiffness of the material. Since most crystal structures exhibit anisotropic material behavior, the elastic behavior is specific to each lattice plane of the single crystal. Thus, the use of individual lattice planes in monochromatic diffraction residual stress analysis requires knowledge of the lattice plane-specific elastic properties. In addition, knowledge of the microstructure of the material is required for a reliable assessment of residual stress.
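The two conversion steps named above can be written down in a few lines. This is an illustrative sketch only, with made-up numbers and a deliberately simplified equi-biaxial stress state; the thesis's actual analysis uses measured hkl-specific diffraction elastic constants (e.g. for the 311 reflection) and the full stress tensor.

```python
# Step 1: measured lattice plane spacing d and stress-free reference d0
# give the elastic lattice strain.
def lattice_strain(d, d0):
    """Elastic lattice strain from measured and stress-free spacings."""
    return (d - d0) / d0

# Step 2: Hooke's law with hkl-specific elastic constants relates strain
# to stress. Simplified here to an equi-biaxial surface stress state
# (sigma_11 = sigma_22 = sigma, sigma_33 = 0), where the out-of-plane
# strain gives sigma = -E_hkl * eps_perp / (2 * nu_hkl).
def biaxial_stress(eps_perp, E_hkl, nu_hkl):
    """Equi-biaxial stress (same units as E_hkl) from out-of-plane strain."""
    return -E_hkl / (2.0 * nu_hkl) * eps_perp

# Illustrative numbers only (MPa for E_hkl -> stress in MPa):
eps = lattice_strain(d=2.6925, d0=2.6930)
sigma = biaxial_stress(eps, E_hkl=184e3, nu_hkl=0.3)
```

The sketch makes the text's point concrete: an error in d0 propagates directly into the strain, and using the wrong E_hkl or nu_hkl for an elastically anisotropic crystal rescales the stress.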
This work presents a toolbox for reliable diffraction-based residual stress analysis. This is presented for a nickel-based superalloy produced by PBF-LB. First, this work reviews the existing literature in the field of residual stress analysis of laser-based AM using diffraction-based techniques. Second, the elastic and plastic anisotropy of the nickel-based superalloy Inconel 718 produced by PBF-LB is studied using in situ energy dispersive synchrotron X-ray and neutron diffraction techniques. These experiments are complemented by ex situ material characterization techniques. These methods establish the relationship between the microstructure and texture of the material and its elastic and plastic anisotropy. Finally, surface, sub-surface, and bulk residual stress are determined using a texture-based approach. Uncertainties of different methods for obtaining stress-free reference values are discussed.
The tensile behavior in the as-built condition is shown to be controlled by texture and cellular sub-grain structure, while in the heat-treated condition the precipitation of strengthening phases and grain morphology dictate the behavior. In fact, the results of this thesis show that the diffraction elastic constants depend on the underlying microstructure, including texture and grain morphology. For columnar microstructures in both as-built and heat-treated conditions, the diffraction elastic constants are best described by the Reuss iso-stress model. Furthermore, the low accumulation of intergranular strains during deformation demonstrates the robustness of using the 311 reflection for the diffraction-based residual stress analysis with columnar textured microstructures. The differences between texture-based and quasi-isotropic approaches for the residual stress analysis are shown to be insignificant in the observed case. However, the analysis of the sub-surface residual stress distributions shows that different scanning strategies result in a change in the orientation of the residual stress tensor. Furthermore, the location of the critical sub-surface tensile residual stress is related to the surface roughness and the microstructure. Finally, recommendations are given for the diffraction-based determination and evaluation of residual stress in textured additively manufactured alloys.
A comprehensive study of seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrence. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distributions of aftershocks following large earthquakes. However, uncertainties associated with CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly due to the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep neural network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. Results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability and for larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
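The two ingredients of this analysis are standard and can be sketched briefly. The sketch below is illustrative only: the Coulomb failure stress change is the textbook ΔCFS = Δτ + μ′Δσn (with Δσn tension-positive and μ′ the effective friction coefficient), and the stress and aftershock arrays are synthetic placeholders, not the SRCMOD-derived fields used in the thesis.

```python
# Hedged sketch: Coulomb failure stress change on a receiver fault, and a
# ROC score of how well it discriminates grid cells with aftershocks.
import numpy as np
from sklearn.metrics import roc_auc_score

def coulomb_stress_change(delta_tau, delta_sigma_n, mu_eff=0.4):
    """delta_CFS = delta_tau + mu_eff * delta_sigma_n (tension positive)."""
    return delta_tau + mu_eff * delta_sigma_n

rng = np.random.default_rng(1)
# synthetic shear and normal stress changes on a grid of receiver cells
dcfs = coulomb_stress_change(rng.normal(size=1000), rng.normal(size=1000))
# synthetic "truth": aftershocks preferentially occur in positive-stress cells
has_aftershock = (dcfs + rng.normal(scale=1.0, size=1000)) > 0

auc = roc_auc_score(has_aftershock, dcfs)  # 0.5 = no skill, 1.0 = perfect
```

Ranking competing stress metrics by their ROC area, as in the study, then amounts to computing such an AUC per metric and per aftershock selection (magnitude cutoff, duration, grid size).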
To corroborate Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data, followed by a comprehensive analysis of Coulomb stress changes on the aftershock planes. By computing the Coulomb failure stress changes on the aftershock faults, I found that all large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations in seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
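The two seismicity laws analyzed here have compact standard forms. As a hedged illustration (the magnitude sample below is synthetic, and the thesis's estimation details may differ), the b-value can be obtained with the classical Aki/Utsu maximum-likelihood estimator, and the Omori-Utsu law gives the aftershock rate n(t) = K / (c + t)^p:

```python
# Sketch of the Gutenberg-Richter b-value (Aki/Utsu maximum likelihood)
# and the Omori-Utsu aftershock decay rate. Synthetic data for illustration.
import numpy as np

def b_value(magnitudes, m_c, dm=0.1):
    """ML b-value above completeness magnitude m_c, with the standard
    half-bin correction dm/2 for binned magnitudes."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def omori_utsu_rate(t, K, c, p):
    """Aftershock rate n(t) = K / (c + t)**p at time t after the mainshock."""
    return K / (c + t) ** p

# synthetic magnitudes: for b = 1, M - m_c is exponential with mean log10(e)
rng = np.random.default_rng(2)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
b = b_value(mags, m_c=2.0, dm=0.0)  # should recover a value near 1
```

The correlations reported in the study are then between mainshock-induced stress values and these fitted parameters (b, K, c, p) across the 127 sequences.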
The findings in this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid in the development of more accurate probabilistic seismic hazard forecasts and risk reduction strategies.
The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor.
In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission in the Galactic halo. In gamma rays these bubbles have been termed Fermi bubbles, with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS telescopes and the Planck satellite, and showed a clear spatial overlap. The X-ray counterparts of the Fermi bubbles were named eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large extended Galactic Halo Bubbles (GHB) and have stimulated interest in the hitherto little-explored Galactic halo.
In this thesis, a new toy model (GHB model) for the magnetic field and non-thermal electron distribution in the Galactic halo has been proposed. The new toy model has been used to produce polarised synchrotron emission sky maps. Chi-square analysis was used to compare the synthetic skymaps with the Planck 30 GHz polarised skymaps. The obtained constraints on the strength and azimuthal height were found to be in agreement with the S-PASS radio observations.
The upper, lower and best-fit values obtained from the above chi-squared analysis were used to generate three separate toy models. These three models were used to propagate ultra-high energy cosmic rays. This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival direction skymaps. The simulated arrival direction skymaps were found to be consistent with the hotspots of Centaurus A and NGC 253 as seen in the observed arrival direction skymaps provided by the Pierre Auger Observatory (PAO).
The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The suppression in the dipole due to the varying diffusion coefficient from the simulations was noted. The results could also be compared with an analytical analogy of electrostatics. The simulations of the extragalactic dipole suppression were in agreement with similar studies carried out for galactic cosmic rays.
Virtual Reality (VR) leads to the highest level of immersion if presented using a 1:1 mapping of virtual space to physical space—also known as real walking. The advent of inexpensive consumer virtual reality (VR) headsets, all capable of running inside-out position tracking, has brought VR to the home. However, many VR applications do not feature full real walking, but instead feature a less immersive space-saving technique known as instant teleportation. Given that only 0.3% of home users run their VR experiences in spaces larger than 4 m², the most likely explanation is the lack of the physical space required for meaningful use of real walking. In this thesis, we investigate how to overcome this hurdle. We demonstrate how to run 1:1-mapped VR experiences in small physical spaces and we explore the trade-off between space and immersion. (1) We start with a space limit of 15 cm. We present DualPanto, a device that allows (blind) VR users to experience the virtual world from a 1:1 mapped bird’s eye perspective—by leveraging haptics. (2) We then relax our space constraints to 50 cm, which is what seated users (e.g., on an airplane or train ride) have at their disposal. We leverage the space to represent a standing user in 1:1 mapping, while only compressing the user’s arm movement. We demonstrate our prototype VirtualArms using the example of VR experiences limited to arm movement, such as boxing. (3) Finally, we relax our space constraints further to 3 m² of walkable space, which is what 75% of home users have access to. As well-established in the literature, we implement real walking with the help of portals, also known as “impossible spaces”. While impossible spaces under such dramatic space constraints tend to degenerate into incomprehensible mazes (as demonstrated, for example, by “TraVRsal”), we propose plausibleSpaces: presenting meaningful virtual worlds by adapting various visual elements to impossible spaces.
Our techniques push the boundary of spatially meaningful VR interaction in various small spaces. We see further future challenges for new design approaches to immersive VR experiences for the smallest physical spaces in our daily life.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts and German es-clefts, as well as with the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties that are cross-linguistically typical for clefts: an existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, the exhaustivity effects are not as strong as those in structures with the exclusive only and require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of both biclausality and monoclausality. Besides, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant: it is a topic expression; it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; finally, it plays an important role in the spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and shows evidence that the cleft pivot undergoes movement to the left peripheral position. Èto is proposed to be TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while another group considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to the experimental results for clefts in German, French and Akan. It is concluded that speakers use different tools available in their languages to produce structures with similar interpretive properties.
Heat stress (HS) is a major abiotic stress that negatively affects plant growth and productivity. However, plants have developed various adaptive mechanisms to cope with HS, including the acquisition and maintenance of thermotolerance, which allows them to respond more effectively to subsequent stress episodes. HS memory includes type II transcriptional memory, which is characterized by enhanced re-induction of a subset of HS memory genes upon recurrent HS. In this study, new regulators of HS memory in A. thaliana were identified through the characterization of rein mutants.
The rein1 mutant carries a premature stop in CYCLIN-DEPENDENT-KINASE 8 (CDK8) which is part of the cyclin kinase module of the Mediator complex. Rein1 seedlings show impaired type II transcriptional memory in multiple heat-responsive genes upon re-exposure to HS. Additionally, the mutants exhibit a significant deficiency in HS memory at the physiological level. Interaction studies conducted in this work indicate that CDK8 associates with the memory HEAT SHOCK FACTORs HSFA2 and HSFA3. The results suggest that CDK8 plays a crucial role in HS memory in plants together with other memory HSFs, which may be potential targets of the CDK8 kinase function. Understanding the role and interaction network of the Mediator complex during HS-induced transcriptional memory will be an exciting aspect of future HS memory research.
The second characterized mutant, rein2, was selected based on its strongly impaired pAPX2::LUC re-induction phenotype. In gene expression analysis, the mutant revealed additional defects in the initial induction of HS memory genes. Along with this observation, basal thermotolerance was impaired similarly as HS memory at the physiological level in rein2. Sequencing of backcrossed bulk segregants with subsequent fine mapping narrowed the location of REIN2 to a 1 Mb region on chromosome 1. This interval contains the At1g65440 gene, which encodes the histone chaperone SPT6L. SPT6L interacts with chromatin remodelers and bridges them to the transcription machinery to regulate nucleosome and Pol II occupancy around the transcriptional start site. The EMS-induced missense mutation in SPT6L may cause altered HS-induced gene expression in rein2, possibly triggered by changes in the chromatin environment resulting from altered histone chaperone function.
Expanding research on screen-derived factors that modify type II transcriptional memory has the potential to enhance our understanding of HS memory in plants. Discovering connections between previously identified memory factors will help to elucidate the underlying network of HS memory. This knowledge can initiate new approaches to improve heat resilience in crops.
The European Alps are among the regions with the highest glacier mass loss rates over the last decades. Under the threat of ongoing climate change, the ability to predict glacier mass balance changes for water and risk management purposes has become imperative. This raises an urgent need for reliable glacier models. The European Alps do not only host glaciers, but also numerous caves containing carbonate formations, called speleothems. Previous studies have shown that those speleothems also grew during times when the cave was covered by a warm-based glacier. In this thesis, I utilise speleothems from the European Alps as archives of local, environmental conditions related to mountain glacier evolution.
Previous studies have shown that speleothem isotope data from the Alps can be strongly affected by in-cave processes. Therefore, part of this thesis focusses on developing an isotope evolution model, which successfully reproduces differences between contemporaneously growing speleothems. The model is used to propose correction approaches for prior calcite precipitation effects on speleothem oxygen isotopes (δ18O). Applications to speleothem records from caves outside the Alps demonstrate that the corrected δ18O agrees better with other records and climate model simulations.
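The kind of correction involved can be illustrated with the generic Rayleigh fractionation relation, in which prior precipitation of a fraction (1 − f) of the dissolved carbonate progressively enriches the remaining solution. This is a textbook sketch under assumed parameter values, not the thesis' actual isotope-evolution model; the fractionation factor and remaining fraction below are hypothetical:

```python
def rayleigh_correct(delta_measured, f_remaining, alpha):
    """Invert one Rayleigh fractionation step: recover the initial delta
    value (per mil) of a solution from which a fraction (1 - f_remaining)
    of the dissolved carbonate has already precipitated.
    Generic textbook relation, NOT the thesis' isotope-evolution model."""
    return (delta_measured + 1000.0) / f_remaining ** (alpha - 1.0) - 1000.0

# Illustrative numbers (assumed, not from the thesis): a measured d18O of
# -6.0 permil, 20% prior calcite precipitation, fractionation factor 1.0005.
print(round(rayleigh_correct(-6.0, 0.8, 1.0005), 2))  # -> -5.89
```

With f = 1 (no prior precipitation) the correction is the identity, as expected.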
Existing speleothem growth histories and carbon isotope (δ13C) records from Alpine caves located at different elevations are used to infer soil vs. glacier cover and the thermal regime of the glacier over the last glacial cycle. The compatibility with glacier evolution models is statistically assessed. A general agreement between speleothem δ13C-derived information on soil vs. glacier presence and modelled glacier coverage is found. However, glacier retreat during Marine Isotope Stage (MIS) 3 seems to be underestimated by the model. Furthermore, speleothem data provide evidence of surface temperatures above the freezing point, which, however, are not fully reproduced by the simulations.
The history of glacier cover and its thermal regime is explored for the high-elevation cave system Melchsee-Frutt in the Swiss Alps. Based on new (MIS 9b – MIS 7b, MIS 2) and available speleothem δ13C (MIS 7a – 5d) data, warm-based glacier cover is inferred for MIS 8, 7d, 6, and 2. A short period of cold-based ice coverage is also found for early MIS 6. In a detailed multi-proxy analysis (δ18O, δ13C, Mg/Ca and Sr/Ca), millennial-scale changes in the glacier-related source of the water infiltrating into the karst during MIS 8 and 7d are found and linked to Northern Hemisphere climate variability.
While speleothem records from high-elevation cave sites in the Alps exhibit huge potential for glacier reconstruction, several limitations remain, which are discussed throughout this thesis. Ultimately, recommendations are given to further leverage subglacial speleothems as an archive of glacier dynamics.
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities around the prospective RT—precisely, its target population and outcome, possibly applied covariates, the concrete design as well as the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
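To illustrate how such variance design parameters enter a power analysis, the sketch below computes the minimum detectable effect size (MDES) for a standard two-level cluster-randomized design using the usual normal-approximation formula (it ignores the degrees-of-freedom correction). The intraclass correlation and covariate R² values are illustrative placeholders, not estimates from the thesis:

```python
from math import sqrt
from statistics import NormalDist

def mdes_crt(J, n, rho, r2_l2=0.0, r2_l1=0.0, p=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in SD units) for a two-level
    cluster-randomized trial: J clusters of size n, treatment share p,
    intraclass correlation rho, covariate R^2 at cluster (l2) and
    student (l1) level. Normal-approximation multiplier."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    var = (rho * (1 - r2_l2)) / (p * (1 - p) * J) \
        + ((1 - rho) * (1 - r2_l1)) / (p * (1 - p) * J * n)
    return multiplier * sqrt(var)

# Illustrative values: a strong pretest covariate sharply lowers the MDES.
print(round(mdes_crt(J=40, n=25, rho=0.20), 3))                          # -> 0.427
print(round(mdes_crt(J=40, n=25, rho=0.20, r2_l2=0.70, r2_l1=0.50), 3))  # -> 0.244
```

The comparison makes the thesis' point concrete: with the same number of clusters, well-chosen covariate design parameters nearly halve the smallest effect the trial can detect.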
The overarching endeavor of the present doctoral thesis was to substantively expand the available resources devoted to optimizing the planning of RTs evaluating educational interventions. At the core of this thesis is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis to design strong RTs. To this end, the thesis at hand bundles two complementary studies which capitalize on rich data of several national probability samples from major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires in six major dimensions: Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, (6) which were cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a—so far unique—nuanced and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire to spawn robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that allow the cell to adapt to its environment and to move and distribute nutrients and cellular components within the cell. However, the extent to which functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined in two genetic knockouts, one RNAi knockdown, and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods for the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in A. thaliana and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for Aip1 function, as steric hindrance can be a problem. DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms only being very distantly related in evolutionary terms.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes, including low cost, light weight, an aesthetically pleasing appearance, and flexibility. Different from traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels together.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of these low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulations to quantify the energetic offsets between CT states and excitons, several important insights were gained: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. Interestingly, there is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT states.
The second focus of the thesis is technical. The influence of optical artifacts in differential absorption spectroscopy upon changes of sample configuration and active-layer thickness was studied. It is exemplified and discussed thoroughly and systematically, in terms of optical simulations and experiments, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. At the end of this study, a generalized methodology based on an inverse optical transfer-matrix formalism is provided to correct spectra and decay dynamics affected by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
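For orientation, the bimolecular recombination coefficient discussed above is commonly benchmarked against the Langevin estimate, k_L = q(μn + μp)/(ε0·εr). The sketch below uses this textbook relation with assumed, illustrative mobilities, permittivity, and a hypothetical measured coefficient, none of which are values from the thesis:

```python
Q = 1.602e-19      # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def langevin_k(mu_n, mu_p, eps_r):
    """Langevin bimolecular recombination coefficient in m^3/s:
    k_L = q * (mu_n + mu_p) / (eps0 * eps_r), mobilities in m^2/(V s)."""
    return Q * (mu_n + mu_p) / (EPS0 * eps_r)

# Illustrative organic-semiconductor values (assumed, not from the thesis):
k_langevin = langevin_k(mu_n=1e-8, mu_p=1e-8, eps_r=3.5)
k_measured = 1e-17  # hypothetical measured coefficient, m^3/s
print(f"{k_langevin:.2e}", round(k_measured / k_langevin, 3))
```

The ratio of measured to Langevin coefficient (the "reduction factor") is a standard figure of merit when discussing suppressed or enhanced bimolecular recombination in such blends.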
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as in medical imaging. Annotation costs form a barrier in extending deep learning to clinically-relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g. in dental X-Rays, than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
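As a generic illustration of the contrastive objectives common in self-supervised representation learning (not a method proposed in the thesis), an InfoNCE-style loss rewards matching two "views" of the same sample while treating the other samples in the batch as negatives:

```python
from math import exp, log, sqrt
import random

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss for n positive pairs (z1[i], z2[i]); the other rows of
    z2 serve as in-batch negatives. Generic contrastive objective sketch,
    not code from the thesis."""
    def normalize(v):
        n = sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    z1 = [normalize(v) for v in z1]
    z2 = [normalize(v) for v in z2]
    loss = 0.0
    for i, anchor in enumerate(z1):
        # Cosine similarities of the anchor to every candidate, scaled.
        sims = [sum(x * y for x, y in zip(anchor, b)) / temperature for b in z2]
        m = max(sims)  # shift for numerical stability
        loss += -(sims[i] - m - log(sum(exp(s - m) for s in sims)))
    return loss / len(z1)

random.seed(0)
views = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]
noise = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]
# Matched views (identical positives) score far lower than random pairings.
print(info_nce(views, views) < info_nce(views, noise))
```

Minimizing such a loss over unlabeled scans is what yields the generic features that downstream tasks can then exploit with few annotations.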
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, often exploratory or ad-hoc to specific datasets; the large repertory of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning, has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the latter stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., to correctly parse tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3000 real-world csv files and identify multiple structural issues that let files deviate from the csv standard, e.g., by having inconsistent delimiters or containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse csv files that have a non-standard structure, without any previous preparation. We report on our experiments on using Pollock to evaluate the performance of 16 real-world data management systems.
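As a minimal illustration of the structural deviations such a benchmark targets (this is stdlib tooling, not Pollock itself), a csv dialect can be sniffed and ragged rows, i.e., rows whose field count deviates from the header's, can be flagged as follows:

```python
import csv
import io

def detect_dialect(text, candidates=",;|\t"):
    """Guess the field delimiter of a csv-like file (stdlib Sniffer)."""
    return csv.Sniffer().sniff(text, delimiters=candidates).delimiter

def ragged_rows(text, delimiter):
    """Return indices of rows whose field count deviates from the header,
    one structural deviation a standard-conforming parser would reject."""
    rows = list(csv.reader(io.StringIO(text), delimiter=delimiter))
    width = len(rows[0])
    return [i for i, r in enumerate(rows) if len(r) != width]

clean = "id;name;score\n1;alice;3.5\n2;bob;2.1\n"
ragged = "id;name;score\n1;alice;3.5\n2;bob\n"

print(detect_dialect(clean))     # -> ;
print(ragged_rows(ragged, ";"))  # -> [2]
```

Real-world files combine many such deviations (inconsistent delimiters, multiple tables per file), which is what makes a dedicated benchmark like Pollock necessary.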
Following, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, which is based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system to assist users in identifying layout templates in a dataset, classes of files that have the same structure, and therefore can be prepared by applying the same preparation pipeline.
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data-preparation effort estimation.
Our experimental results show that structural metadata, whether identified explicitly via parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, is fundamental to automate several tasks, to scale up preparation to large quantities of files, and to provide repeatable preparation pipelines.
With Arctic ground as a huge and temperature-sensitive carbon reservoir, maintaining low ground temperatures and frozen conditions to prevent further carbon emissions that contribute to global climate warming is a key element in humankind’s fight to maintain habitable conditions on Earth. Former studies showed that during the late Pleistocene, Arctic ground conditions were generally colder and more stable as the result of an ecosystem dominated by large herbivorous mammals and vast extents of graminoid vegetation – the mammoth steppe. Characterised by high plant productivity (grassland) and low ground insulation due to animal-caused compression and removal of snow, this ecosystem enabled deep permafrost aggradation. Now, with tundra and shrub vegetation common in the terrestrial Arctic, these effects are not in place anymore. However, it appears to be possible to recreate this ecosystem locally by artificially increasing animal numbers, and hence keep Arctic ground cold to reduce organic matter decomposition and carbon release into the atmosphere.
By measuring thaw depth, total organic carbon (TOC) and total nitrogen content, stable carbon isotope ratio, radiocarbon age, n-alkane and alcohol characteristics and assessing dominant vegetation types along grazing intensity transects in two contrasting Arctic areas, it was found that recreating conditions locally, similar to the mammoth steppe, seems to be possible. For permafrost-affected soil, it was shown that intensive grazing in direct comparison to non-grazed areas reduces active layer depth and leads to higher TOC contents in the active layer soil. For soil only frozen on top in winter, an increase of TOC with grazing intensity could not be found, most likely because of confounding factors such as vertical water and carbon movement, which is not possible with an impermeable layer in permafrost. In both areas, high animal activity led to a vegetation transformation towards species-poor graminoid-dominated landscapes with fewer shrubs. Lipid biomarker analysis revealed that, even though the available organic material is different between the study areas, in both permafrost-affected and seasonally frozen soils the organic material in sites affected by high animal activity was less decomposed than under less intensive grazing pressure. In conclusion, high animal activity affects decomposition processes in Arctic soils and the ground thermal regime, visible from reduced active layer depth in permafrost areas. Therefore, grazing management might be utilised to locally stabilise permafrost and reduce Arctic carbon emissions in the future, but is likely not scalable to the entire permafrost region.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I observed that functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets served as a raw material for different sources of adaptation and how these sources affected multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled the interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K able to quantify the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems and further perspectives towards elaborating an operational biodiversity-ecosystem functioning framework for research and conservation.
The European Water Framework Directive (WFD) has identified river morphological alteration and diffuse pollution as the two main pressures affecting water bodies in Europe at the catchment scale. Consequently, river restoration has become a priority to achieve the WFD's objective of good ecological status. However, little is known about the effects of stream morphological changes, such as re-meandering, on in-stream nitrate retention at the river network scale. Therefore, catchment nitrate modeling is necessary to guide the implementation of spatially targeted and cost-effective mitigation measures. Meanwhile, Germany, like many other regions in central Europe, experienced consecutive summer droughts from 2015 to 2018, resulting in significant changes in river nitrate concentrations in various catchments. However, a mechanistic exploration of catchment nitrate responses to changing weather conditions is still lacking.
Firstly, a fully distributed, process-based catchment nitrate model (mHM-Nitrate) was used, which was properly calibrated and comprehensively evaluated at numerous spatially distributed nitrate sampling locations. Three calibration schemes were designed, taking into account land use, stream order, and mean nitrate concentrations; they varied in spatial coverage but used data from the same period (2011–2019). The model performance for discharge was similar among the three schemes, with Nash-Sutcliffe Efficiency (NSE) scores ranging from 0.88 to 0.92. However, for nitrate concentrations, scheme 2 outperformed schemes 1 and 3 when compared to observed data from eight gauging stations. This was likely because scheme 2 incorporated a diverse range of data, including low discharge values and nitrate concentrations, and thus provided a better representation of within-catchment heterogeneity. Therefore, the study suggests that strategically selecting gauging stations that reflect the full range of within-catchment heterogeneity is more important for calibration than simply increasing the number of stations.
Secondly, the mHM-Nitrate model was used to reveal the causal relations between sequential droughts and nitrate concentration in the Bode catchment (3200 km2) in central Germany, where stream nitrate concentrations exhibited contrasting trends from upstream to downstream reaches. The model was evaluated using data from six gauging stations, reflecting different levels of runoff components and their associated nitrate-mixing from upstream to downstream. Results indicated that the mHM-Nitrate model reproduced dynamics of daily discharge and nitrate concentration well, with Nash-Sutcliffe Efficiency ≥ 0.73 for discharge and Kling-Gupta Efficiency ≥ 0.50 for nitrate concentration at most stations. Particularly, the spatially contrasting trends of nitrate concentration were successfully captured by the model. The decrease of nitrate concentration in the lowland area in drought years (2015-2018) was presumably due to (1) limited terrestrial export loading (ca. 40% lower than that of normal years 2004-2014), and (2) increased in-stream retention efficiency (20% higher in summer within the whole river network). From a mechanistic modelling perspective, this study provided insights into spatially heterogeneous flow and nitrate dynamics and effects of sequential droughts, which shed light on water-quality responses to future climate change, as droughts are projected to be more frequent.
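The two goodness-of-fit measures reported above can be computed as follows. This is a minimal sketch of the standard definitions (NSE and the 2009 KGE formulation) on illustrative data, not code or data from the thesis:

```python
from statistics import mean, pstdev

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    o_bar = mean(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - o_bar) ** 2 for o in obs)
    return 1 - num / den

def kge(obs, sim):
    """Kling-Gupta Efficiency (2009 formulation): combines correlation r,
    variability ratio alpha, and bias ratio beta; 1 is a perfect fit."""
    mo, ms = mean(obs), mean(sim)
    so, ss = pstdev(obs), pstdev(sim)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

# Illustrative discharge series (made up, not from the Bode catchment):
obs = [2.0, 3.5, 5.0, 4.0, 3.0]
sim = [2.2, 3.3, 4.6, 4.2, 3.1]
print(round(nse(obs, sim), 2))  # -> 0.94
print(round(kge(obs, sim), 2))  # -> 0.85
```

KGE decomposes model error into correlation, variability, and bias components, which is why it is often preferred over NSE for water-quality variables such as nitrate.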
Thirdly, this study investigated the effects of stream restoration via re-meandering on in-stream nitrate retention at network-scale in the well-monitored Bode catchment. The mHM-Nitrate model showed good performance in reproducing daily discharge and nitrate concentrations, with median Kling-Gupta values of 0.78 and 0.74, respectively. The mean and standard deviation of gross nitrate retention efficiency, which accounted for both denitrification and assimilatory uptake, were 5.1 ± 0.61% and 74.7 ± 23.2% in winter and summer, respectively, within the stream network. The study found that in the summer, denitrification rates were about two times higher in lowland sub-catchments dominated by agricultural lands than in mountainous sub-catchments dominated by forested areas, with median ± SD of 204 ± 22.6 and 102 ± 22.1 mg N m-2 d-1, respectively. Similarly, assimilatory uptake rates were approximately five times higher in streams surrounded by lowland agricultural areas than in those in higher-elevation, forested areas, with median ± SD of 200 ± 27.1 and 39.1 ± 8.7 mg N m-2 d-1, respectively. Therefore, restoration strategies targeting lowland agricultural areas may have greater potential for increasing nitrate retention. The study also found that restoring stream sinuosity could increase net nitrate retention efficiency by up to 25.4 ± 5.3%, with greater effects seen in small streams. These results suggest that restoration efforts should consider augmenting stream sinuosity to increase nitrate retention and decrease nitrate concentrations at the catchment scale.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent from ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.