Doctoral Thesis
Ada (Fishman) Maimon
(2023)
That diverse content and opinions are disseminated across a multitude of media is more important than ever for our democratic society. Precisely for this reason, it is indispensable to prevent individual media companies from acquiring dominant influence over opinion formation and thereby to contribute to diversity of opinion. This important task falls to the media concentration control regime of the Medienstaatsvertrag (Interstate Media Treaty). However, the changes in the media landscape brought about by digitalisation have led to an inconsistent review regime for media concentration control, since media law currently does not adequately cover all media actors relevant to opinion formation. The thesis examines this issue in the context of the national and international media-law and competition-law framework. Based on the insights gained, a normative proposal is put forward that meets current requirements.
Over the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. Because non-human proteins introduced into the human body provoke a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All clinically approved protein-polymer conjugates of this kind contain polyethylene glycol (PEG), and the conjugation is termed PEGylation. However, many patients develop anti-PEG antibodies, which cause rapid clearance of PEGylated molecules upon repeated administration. The search for alternative polymers that can replace PEG in therapeutic applications has therefore become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method, which addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a largely uncontrolled attachment of polymer chains, producing a mixture of regioisomers, some of which impair the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules site-specifically using enzymatic catalysis. Sortase A, a well-studied transpeptidase that catalyzes the intermolecular ligation of two peptides, serves as the enzyme; the process is commonly referred to as sortase-mediated ligation (SML). SML is an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers, without resorting to an excess of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition-sequence peptide. The acrylamide- and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not yield protein-polymer conjugates, most likely because the protein termini were not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful, although a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, in the synthesis of protein-polymer conjugates it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provide a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In the future, the method explored in this dissertation could prove to be a versatile pathway to therapeutic protein-polymer conjugates that exhibit high activity and long blood circulation times.
Digitalisation is an essential component of current administrative reforms. Despite its high importance and years of effort, the record of administrative digitalisation in Germany remains ambivalent. This study focuses on three successful digitalisation projects under the Onlinezugangsgesetz (OZG, Online Access Act) and, using problem-centred expert interviews, analyses the factors influencing the implementation of OZG projects and the influence of management in this process. The analysis is theory-guided, based on the concept of bounded rationality and the economic theory of bureaucracy. The results suggest that the identified factors affect the reusability and maturity level of administrative services in different ways and can be interpreted as consequences of bounded rationality in the human problem-solving process. Managers support operational actors in the implementation by addressing their bounded rationality with suitable strategies: they can provide resources, contribute their expertise, make information accessible, change decision-making paths, and help resolve conflicts. The study offers valuable insights into actual management practice and derives from them recommendations for the implementation of public digitalisation projects and for the governance of public administrations. It makes an important contribution to understanding the influence of management in administrative digitalisation, and it underscores the need for further research in this area in order to better understand and effectively address the practices and challenges of administrative digitalisation.
The dissertation examines the development of steward-ownership (Verantwortungseigentum), in particular on the basis of the Carl-Zeiss-Stiftung under Ernst Abbe.
The concept of steward-ownership has been discussed for several years in the legal-policy debate on alternative corporate and ownership forms, with calls for the introduction of a dedicated legal form of company.
The dissertation addresses these demands and the development of steward-ownership using the example of the Carl-Zeiss-Stiftung and its foundation-owned enterprises Zeiss and Schott.
There, as early as the end of the 19th century, a form of what lawyers today understand as steward-ownership was introduced and shaped through careful contractual drafting.
The aim and purpose of the thesis was to examine the overlaps, parallels, and differences between the legal entities and to investigate whether steward-ownership follows a longer legal tradition or is a purely contemporary idea.
This thesis explores word order variability in verb-final languages, which have a reputation for a high degree of word order variability. That reputation, however, amounts to an urban myth, owing to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages, with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth: many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role such as topic or contrastive focus. That word order variability goes hand in hand with placing focussed phrases further to the right, in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages: Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth; the fourth Uralic language, South Sámi, is comparably rigid and does not feature this particular kind of word order variability. Further comparably rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity alongside other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but those are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the determinant of postverbal word order variability has to be determined in the future.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends give rise to new players that threaten existing industrial-age companies. To respond, incumbents need to bridge the gap between contrasting product-architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-age automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product-architecture challenges over organizational ones and highlight the managerial implications for success.
Article 15 of the German Basic Law (Grundgesetz) as a socialist utopia? By no means. The socialisation provision gives the legislature an instrument for discharging the state's responsibility to guarantee public services through public-interest forms of economic organisation. Socialisation measures interfere with the fundamental right to property. They also encounter fundamental-rights guarantees of the functioning of a market economy and the EU-law guarantee of a system of undistorted competition. The thesis therefore examines the constitutional requirements for socialisation legislation at the federal and state level, including judicial review. It further shows how socialisation statutes can be framed in conformity with EU law.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I find that functional diversity generally enhances ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increase, even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreases with increasing functional diversity. Second, I explore how several diversity facets serve as raw material for different sources of adaptation and how these sources affect multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enables the interplay between ecological and evolutionary processes, which affects trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K that quantifies the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally, I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling, and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning, and the environment, and discuss the generalisation of my findings across ecosystems as well as further perspectives towards an operational biodiversity-ecosystem functioning framework for research and conservation.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
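For concreteness, the counting problem can be made explicit with a naive brute-force counter. This is a sketch, exponential in the size of the input graph and purely illustrative; the graph encoding below (vertex lists, edge pairs, edges of H as frozensets) is an assumption of this example, not taken from the thesis, whose results concern far more efficient algorithms and hardness proofs.

```python
from itertools import product

def count_hom_mod_p(G_vertices, G_edges, H_vertices, H_edges, p):
    """Brute-force count of graph homomorphisms from G to H, modulo a prime p.

    G_edges is a list of vertex pairs; H_edges is a set of frozensets,
    so undirected edges (and loops) of H can be tested directly.
    """
    count = 0
    for image in product(H_vertices, repeat=len(G_vertices)):
        phi = dict(zip(G_vertices, image))
        # phi is a homomorphism iff every edge of G lands on an edge of H.
        if all(frozenset({phi[u], phi[v]}) in H_edges for u, v in G_edges):
            count += 1
    return count % p

# Example target: the triangle K3.
K3_edges = {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})}
```

There are 6 homomorphisms from a single edge K2 into K3 (any ordered pair of distinct vertices), so the count modulo 5 is 1. The dichotomy question is for which fixed H this quantity is computable in polynomial time in the size of G.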
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects, i.e. objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that justifies this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features a complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This generalizes the conjecture of Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. Using the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases of the dichotomy are solved by trivial computations, the study of the intractable cases remains. As the initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. We show a dichotomy for this problem, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case via a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting, modulo p, the homomorphisms between bipartite graphs that maintain a given order of bipartition. Thanks to the generality of the findings of part one, this reduction does not impair the applicability of the technical results. In order to prove the conjecture, it suffices to show that #_{p}Hom^{bip}[H] is #_{p}P-hard for every connected bipartite graph H that is not complete. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
In particular, this overcomes the substantial hurdle imposed by squares and leads us to explore the global structure of H and to prove the existence of explicit structures that imply hardness.
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing taking place in the brain (the cognitive processor). By contrast, embodied cognition approaches define cognition as functionally linked to perception and action, arising from the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt, depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis in three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain's Asymmetric Frequency Tuning hypothesis, which relies on hemispheric asymmetries in processing spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals support the brain's asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never accomplish independent motoric exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs reported by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orientation during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting based on the type of assessment support different mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Secondly, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, the development, and the flexibility of our attitude to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, I propose here a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. This new perspective advances the current approaches to SNAs at both the conceptual and the methodological level.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle, influencing numerous hydrological processes including hydrometeorological extremes. Soil moisture influences flood generation processes and, during droughts when precipitation is absent, provides plants with transpirable water, thereby sustaining plant growth and survival in agriculture and natural ecosystems.
Soil moisture stored in deeper soil layers, e.g. below 100 cm, is of particular importance for providing plant-transpirable water during dry periods. Because these layers are not directly connected to the atmosphere and lie outside the soil layers with the highest root densities, water in them is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national adaptation strategies to a changing climate. Yet soil moisture is highly variable in time and space, which challenges its monitoring at the spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows for the estimation of soil moisture at relevant spatio-temporal scales of several hectares at a high, subdaily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because neutrons have almost the same mass as the hydrogen nucleus, they lose kinetic energy upon collision and are eventually absorbed once they reach low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Since in most terrestrial ecosystems the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with the observed neutron intensities.
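As an illustration of this inverse relationship, the shape-defining neutron-to-soil-moisture transfer function widely used in the CRNS literature can be sketched as below. The coefficient values and the calibration constant N0 in this sketch are standard literature defaults and placeholder assumptions for the example, not values from this thesis:

```python
def soil_moisture_from_neutrons(N, N0, bulk_density=1.4,
                                a0=0.0808, a1=0.372, a2=0.115):
    """Volumetric soil moisture (m^3 m^-3) from a corrected epithermal
    neutron count rate N, using the common shape-defining function
        theta = (a0 / (N / N0 - a1) - a2) * rho_bulk.

    N0 is the site-specific calibration count rate over dry soil;
    bulk_density is the dry soil bulk density in g cm^-3 (assumed here).
    """
    return (a0 / (N / N0 - a1) - a2) * bulk_density
```

The inverse relationship is visible directly: fewer detected neutrons mean more hydrogen and hence wetter soil, e.g. soil_moisture_from_neutrons(600, 1000) ≈ 0.34 versus soil_moisture_from_neutrons(800, 1000) ≈ 0.10.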
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity, on the observed epithermal and thermal neutron signals, and the correction of these effects. Recently introduced advanced neutron-to-soil-moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but the potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess the impact of heterogeneous site conditions on derived soil moisture estimates and their correction. Despite its capability of measuring representative averages of soil moisture at the field scale, CRNS lacks integration depth below the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers, extending the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
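The atmospheric corrections mentioned above are conventionally applied as multiplicative factors to the raw count rate before converting neutrons to soil moisture. The sketch below follows that common multiplicative formulation; the reference values and coefficients are placeholder assumptions for illustration, not calibration results from this thesis:

```python
import math

def correct_neutron_count(N_raw, pressure, humidity, incoming,
                          p_ref=1013.25,  # reference air pressure (hPa), assumed
                          h_ref=0.0,      # reference absolute humidity (g m^-3)
                          inc_ref=150.0,  # reference incoming neutron intensity
                          beta=0.0076,    # barometric coefficient (hPa^-1), assumed
                          alpha=0.0054):  # humidity coefficient (m^3 g^-1), assumed
    """Apply the three standard multiplicative CRNS corrections:
    air pressure, absolute air humidity, and incoming cosmic-ray intensity."""
    f_p = math.exp(beta * (pressure - p_ref))  # more air mass -> fewer neutrons
    f_h = 1.0 + alpha * (humidity - h_ref)     # more water vapour -> fewer neutrons
    f_i = inc_ref / incoming                   # normalise incoming variability
    return N_raw * f_p * f_h * f_i
```

At the chosen reference conditions all three factors equal one and the raw count is returned unchanged; deviations in pressure, humidity, or incoming intensity scale the count up or down accordingly.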
By addressing these challenges, this thesis helps to close knowledge gaps and to answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil-moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and from in situ measurements using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet unexplored applications of CRNS but also opens a new field of CRNS research. Consequently, it advances the methodological framework of CRNS for above-ground and downhole applications. Although further research is needed to fully exploit the potential of CRNS, this thesis contributes to current hydrological research and, not least, to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
Overcoming natural biomass limitations in gram-negative bacteria through synthetic carbon fixation
(2024)
The carbon demands of an ever-increasing human population and the concomitant rise in net carbon emissions require CO2-sequestering approaches for the production of carbon-containing molecules. Microbial production of carbon-containing products from plant-based sugars could replace current fossil-based production. However, this form of sugar-based microbial production directly competes with human food supply and natural ecosystems. Instead, one-carbon feedstocks derived from CO2 and renewable energy have been proposed as an alternative. The one-carbon molecule formate is a stable, readily soluble and safe-to-store energetic mediator that can be electrochemically generated from CO2 and (excess off-peak) renewable electricity. Formate-based microbial production could therefore represent a promising approach for a circular carbon economy. However, easy-to-engineer and efficient formate-utilizing microbes are lacking. Multiple synthetic metabolic pathways have been designed for better-than-nature carbon fixation; among them, the reductive glycine pathway was proposed as the most efficient pathway for aerobic formate assimilation. While some of these pathways have been successfully engineered in microbial hosts, the resulting synthetic strains have so far not exceeded the performance of natural strains. In this work, I engineered and optimized two different synthetic formate assimilation pathways in gram-negative bacteria to exceed the limits of a natural carbon fixation pathway, the Calvin cycle.
The first chapter solidified Cupriavidus necator as a promising formatotrophic host for producing value-added chemicals. The formate tolerance of C. necator was assessed, and a production pathway for crotonate was established in a modularized fashion. Lastly, bioprocess optimization was leveraged to produce crotonate from formate at a titer of 148 mg/L.
In the second chapter, I chromosomally integrated and optimized the synthetic reductive glycine pathway in C. necator using a transposon-mediated selection approach. The insertion methodology allowed selection for condition-specific, tailored pathway expression, as improved pathway performance led to better growth. I then showed that my engineered strains exceed the biomass yields of the Calvin-cycle-utilizing wildtype C. necator on formate. This demonstrated for the first time the superiority of a synthetic formate assimilation pathway and, by extension, of synthetic carbon fixation efforts as a whole.
In chapter 3, I engineered a segment of a synthetic carbon fixation cycle in Escherichia coli. The GED cycle was proposed as a Calvin cycle alternative that does not perform a wasteful oxygenation reaction and is more energy efficient. The pathway's simple architecture and reasonable driving force made it a promising candidate for enhanced carbon fixation. I created a deletion strain that coupled growth to carboxylation via the GED pathway segment. The CO2 dependence of the engineered strain and 13C-tracer analysis confirmed operation of the pathway in vivo.
In the final chapter, I present my efforts to implement the GED cycle in C. necator as well, which might be a better-suited host, as it is accustomed to formatotrophic and hydrogenotrophic growth. To provide the carboxylation substrate in vivo, I engineered C. necator to utilize xylose as a carbon source and created a selection strain for carboxylase activity. I verified the activity of the key enzyme, the carboxylase, in the decarboxylative direction. Although CO2-dependent growth of the strain was not obtained, I showed that all enzymes required for operation of the GED cycle are active in vivo in C. necator.
I then evaluate my success in engineering a linear and a cyclical one-carbon fixation pathway in two different microbial hosts. The linear reductive glycine pathway presents itself as a much simpler metabolic solution for formate-dependent growth than the sophisticated establishment of hard-to-balance carbon fixation cycles. Lastly, I highlight advantages and disadvantages of C. necator as an upcoming microbial benchmark organism for synthetic metabolism efforts and give an outlook on its potential for the future of C1-based manufacturing.
Skepticism
(2022)
This dissertation offers new and original readings of three major texts in the history of Western philosophy: Descartes’s “First Meditation,” Kant’s “Transcendental Deduction,” and his “Refutation of Idealism.” The book argues that each text addresses the problem of skepticism and posits that they have a hitherto underappreciated, organic relationship to one another. The dissertation begins with an analysis of Descartes’s “First Meditation,” which I argue offers two distinct and independent skeptical arguments that differ in both aim and scope. I call these arguments the “veil of ideas” argument and the “author of my origin” argument. My reading counters the standard interpretation of the text, which sees it as offering three stages of doubt, namely the occasional fallibility of the senses, the dream hypothesis, and the evil demon hypothesis. Building on this, the central argument of the dissertation is that Kant’s “Transcendental Deduction” actually transforms and radicalizes Descartes’s Author of My Origin argument, reconceiving its meaning within the framework of Kant’s own transcendental idealist philosophy. Finally, I argue that the Refutation of Idealism offers a similarly radicalized version of Descartes’s Veil of Ideas argument, albeit translated into the framework of transcendental idealism.
How were the soldiers of the Wehrmacht treated by their NCOs and officers, both in the barracks and at the front? How was their leadership shaped by National Socialism, and what significance did it have for the cohesion of the German army in the Second World War? Drawing on a broad source base, Konstantin Franz Eckert closes an important research gap. His study shows how young men were prepared for their military service and what they expected of their superiors. It demonstrates that leading by example and personal commitment, constructiveness, and absolute subordination to the principle of obedience in the service of the Nazi regime were central leadership elements of the Wehrmacht. It also examines military training and offers a sober reassessment of the old narratives of the »Kasernenhofschleifer«, the brutal parade-ground drill sergeant.
The experience of premenstrual syndrome (PMS) affects up to 90% of individuals with an active menstrual cycle and involves a spectrum of aversive physiological and psychological symptoms in the days leading up to menstruation (Tschudin et al., 2010). Despite its high prevalence, the precise origins of PMS remain elusive, with influences ranging from hormonal fluctuations to cognitive, social, and cultural factors (Hunter, 2007; Matsumoto et al., 2013).
Biologically, hormonal fluctuations, particularly in gonadal steroids, are commonly believed to be implicated in PMS, with the central factor being varying susceptibilities to the fluctuations between individuals and cycles (Rapkin & Akopians, 2012). Allopregnanolone (ALLO), a neuroactive steroid and progesterone metabolite, has emerged as a potential link to PMS symptoms (Hantsoo & Epperson, 2020). ALLO is a positive allosteric modulator of the GABAA receptor, influencing inhibitory communication (Rupprecht, 2003; Andréen et al., 2006). Different susceptibility to ALLO fluctuations throughout the cycle may lead to reduced GABAergic signal transmission during the luteal phase of the menstrual cycle.
The GABAergic system's broad influence leads to a number of affected physiological systems, including a consistent reduction in vagally mediated heart rate variability (vmHRV) during the luteal phase (Schmalenberger et al., 2019). This reduction in vmHRV is more pronounced in individuals with high PMS symptoms (Baker et al., 2008; Matsumoto et al., 2007). Fear conditioning studies have shown inconsistent associations with cycle phases, suggesting a complex interplay between physiological parameters and PMS-related symptoms (Carpenter et al., 2022; Epperson et al., 2007; Milad et al., 2006).
The neurovisceral integration model posits that vmHRV reflects the capacity of the central autonomic network (CAN), which is responsible for regulatory processes on behavioral, cognitive, and autonomic levels (Thayer & Lane, 2000, 2009). Fear learning, mediated within the CAN, is suggested to be indicative of vmHRV's capacity for successful regulation (Battaglia & Thayer, 2022). Given the GABAergic mediation of central inhibitory functional connectivity in the CAN, which may be affected by ALLO fluctuations, this thesis proposes that fluctuating CAN activity in the luteal phase contributes to diverse aversive symptoms in PMS.
A research program was designed to empirically test these propositions. Study 1 investigated fear discrimination during different menstrual cycle phases and its interaction with vmHRV, revealing nuanced effects on acoustic startle response and skin conductance response. While there was heightened fear discrimination in acoustic startle responses in participants in the luteal phase, there was an interaction between menstrual cycle phase and vmHRV in skin conductance responses. In this measure, heightened fear discrimination during the luteal phase was only visible in individuals with high resting vmHRV; those with low vmHRV showed reduced fear discrimination and higher overall responses.
Despite PMS affecting the vast majority of menstruating people, very few tools are available to reliably assess its symptoms in the German-speaking area. Study 2 aimed to close this gap by translating and validating a German version of the short form of the Premenstrual Assessment Form (Allen et al., 1991), providing a reliable instrument for future investigations in German-speaking research.
Study 3 employed a diary study paradigm to explore daily associations between vmHRV and PMS symptoms. The results showed clear simultaneous fluctuations between the two constructs with a peak in PMS and a low point in vmHRV a few days before menstruation onset. The association between vmHRV and PMS was driven by psychological PMS symptoms.
Based on the theoretical considerations regarding the neurovisceral perspective on PMS, another interesting construct to consider is attentional control, as it is closely related to functions of the CAN. Study 4 delved into differences in attentional control and vmHRV between menstrual cycle phases, demonstrating an interaction between cycle phase and PMS symptoms. In a pilot study, we found reduced vmHRV and attentional control during the luteal phase only in participants who reported strong PMS.
While Studies 1-4 provided evidence for the mechanisms underlying PMS, Studies 5 and 6 investigated short- and long-term intervention protocols to ameliorate PMS symptomatology. Study 5 explored the potential of heart rate variability biofeedback (HRVB) in alleviating PMS symptoms and a number of other outcome measures. In a waitlist-control design, participants underwent a 4-week smartphone-based HRVB intervention. The results revealed positive effects on PMS, with larger effect sizes on psychological symptoms, as well as on depressive symptoms, anxiety/stress and attentional control.
Finally, Study 6 examined the acute effects of HRVB on attentional control. The study found a positive impact, but only in highly stressed individuals.
The thesis, based on this comprehensive research program, expands our understanding of PMS as an outcome of CAN fluctuations mediated by GABAA receptor reactivity. The results largely support the model. These findings not only deepen our understanding of PMS but also offer potential avenues for therapeutic interventions. The promising results of smartphone-based HRVB training suggest a non-pharmacological approach to managing PMS symptoms, although further research is needed to confirm its efficacy.
In conclusion, this thesis illuminates the complex web of factors contributing to PMS, providing valuable insights into its etiological underpinnings and potential interventions. By elucidating the relationships between hormonal fluctuations, CAN activity, and psychological responses, this research contributes to more effective treatments for individuals grappling with the challenges of PMS. The findings hold promise for improving the quality of life for those affected by this prevalent and often debilitating condition.
Semi-parliamentarism describes the system of government in which the government is elected by, and can be dismissed by, one part of the parliament but is independent of another part, while both chambers must approve legislation. This system, classified by Steffen Ganghof, complements established typologies of government systems such as those used by David Samuels and Matthew Shugart. Semi-parliamentarism is the logical counterpart to semi-presidentialism: under semi-presidentialism only part of the executive depends on the legislature, whereas under semi-parliamentarism the executive depends on only part of the legislature. Semi-parliamentarism thus embodies a system of separation of powers without the executive personalism produced by the direct election and independence of the head of government under presidentialism. It is therefore well suited to tracing differences between parliamentarism and presidentialism back to the separate influences of the separation of powers and of executive personalism, which makes its study significant for the literature on government systems as a whole. Semi-parliamentarism is not a purely theoretical construct: it exists at the Australian federal level, in the Australian states, and in Japan.
This dissertation is the first comprehensive study of legislation in semi-parliamentary states as such. The focus lies on the second chambers, since their independence from the government makes them the actual site of legislation. Legislation under parliamentarism and presidentialism differs in particular in party unity, coalition formation, and the legislative success of governments; these aspects are therefore of special interest in the analysis of semi-parliamentarism. The semi-parliamentary states also differ considerably among themselves in institutional design, such as their electoral systems or the instruments available for resolving legislative deadlock. Describing and analyzing the effects of these differences on legislation is, alongside the comparison of semi-parliamentarism with other systems, the second main goal of this work.
As the foundation of the analysis, I compiled an extensive dataset covering all legislative periods of the Australian states between 1997 and 2019. Its main components are all recorded divisions of both chambers, all government bills introduced and passed, and party positions in the relevant policy fields at the substate level, collected by means of an expert survey.
Using mainly mixed-effects and fractional-response analyses, I show that semi-parliamentarism resembles parliamentary rather than presidential systems in many respects. Only coalition formation is markedly more flexible and thus differs from typical parliamentary coalition formation. The analyses suggest that key differences between parliamentarism and presidentialism are attributable to executive personalism rather than to the separation of powers.
Among the semi-parliamentary states, the decisive differences in legislation appear to stem above all from whether the government controls the median of both parliamentary chambers and whether it can dissolve the second chamber along with the first. Control of the median enables flexible coalition formation and leads to higher legislative success rates, as does an easier means of dissolving the second chamber. Independently of these aspects, party unity is very high in both chambers of semi-parliamentary parliaments.
Digital media are becoming increasingly important for the design of teaching and learning processes (KMK, 2021; Scheiter, 2021). The successful integration of digital media and the high-quality design of digitally supported instruction depend on the digital competencies of the teachers involved (KMK, 2021; Lachner et al., 2020). In this context, teacher professional development on digital media is highly relevant: participation in such training can foster (self-assessed) digital competencies as well as digitally supported instruction (KMK, 2021; SWK, 2022). The relationships between teacher professional development, teacher competencies, and teaching practice are described theoretically in the model of determinants and consequences of teachers' professional competence by Kunter et al. (2011). However, it remains unclear to what extent the relationships formulated for general teacher professional development also transfer to the digital context. So far, only a few empirical findings indicate that digital-related teacher training is associated with self-assessed digital competencies (e.g., Mayer et al., 2021; Ning et al., 2022; Reisoğlu, 2022) and with digitally supported teaching practice (e.g., Alt, 2018; Gisbert Cervera & Lázaro Cantabrana, 2015). Teachers' competencies play a central role in high-quality teaching practice (Kunter et al., 2011), and in the digital context, too, teachers' (self-assessed) competencies are relevant for teaching with digital media (e.g., Hatlevik, 2017; Spiteri & Rundgren, 2020). The European Framework for the Digital Competence of Educators (DigCompEdu; Redecker & Punie, 2017) provides a systematic account of the competencies teachers need for the classroom use of digital media.
However, only a few empirical studies have validated this framework so far (e.g., Antonietti et al., 2022), even though, compared with other competence models such as the model of Technological Pedagogical Content Knowledge (TPACK; Mishra & Koehler, 2006), the DigCompEdu model offers a differentiated view of various competence dimensions.
Taking up these research gaps, this dissertation addresses the factors that contribute to high instructional quality in digitally supported teaching. Its three empirical studies examine, from different perspectives, the relationships between participation in digital-related teacher training, teachers' self-assessed digital competencies, and self-reported quality of digitally supported instruction. The studies are theoretically guided by the assumptions of the model of determinants and consequences of teachers' professional competence by Kunter et al. (2011).
Study 1 addresses the question of how participation in digital-related training and teacher cooperation in the digital context relate to self-assessed digital competencies, interest in digitally supported teaching, and high-quality teaching with digital media. The results of manifest path models show that participation in digital-related training and cooperation was associated with high digital competencies, strong interest in digitally supported teaching, and self-reported frequent use of digital media to implement high-quality instruction (cognitive activation and individualization). Previous empirical studies mostly measured digitally supported teaching via the frequency of digital media use in the classroom, which allows no conclusions about the quality of that use (Lachner et al., 2020; Scheiter, 2021). The high-quality use of digital media along the three generic basic dimensions of instructional quality (Klieme et al., 2009) is therefore considered in all three studies of this dissertation. Study 1 also showed that self-assessed digital competencies in the TPACK domain mediate the cross-sectional relationships between teachers' frequency of participation in digital-related training and the frequency of digital media use for cognitive activation and individualization.
In Study 2, scales for measuring self-assessed digital competencies based on the DigCompEdu model were developed and tested, focusing on the competence dimension of learner orientation with the subdimensions differentiation and active engagement of students. The results of structural equation modeling suggest a bifactor structure that represents the two theoretically assumed subdimensions as well as a general factor interpretable as overarching learner orientation. Self-assessed digital competencies in the area of learner orientation were significantly positively related to the self-reported use of digital media for implementing high-quality instruction (classroom management, cognitive activation, and constructive support).
Study 3 brings together the topics of professional development and competencies in the digital context and examines the relationship between training topics and digital competencies. Results of path models show that participation in digital-related training on the technological-pedagogical-content topics "computer-supported student support" and "subject-specific instructional development with digital media" is associated with the self-reported high-quality use of digital media for cognitive activation and constructive support. These findings strengthen the assumption that teachers need both technological and pedagogical-didactic competencies for the high-quality use of digital media (Lipowsky & Rzejak, 2021; Mishra & Koehler, 2006; Scheiter & Lachner, 2019) and that training should consequently combine technological with practice-oriented instructional content (Bonnes et al., 2022). Moreover, based on the theoretical assumptions of Kunter et al. (2011), the study shows that teachers' self-assessed digital competencies mediated the relationships between the frequency of participation in digital-related training and the self-reported quality of digitally supported instruction.
The concluding general discussion of the dissertation considers the findings against the background of the current state of research and the identified research gaps, and derives implications for research and practice from the three studies.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This is a requirement that is difficult to fulfil in several domains, such as in medical imaging. Annotation costs form a barrier in extending deep learning to clinically-relevant use cases. The labels associated with medical images are scarce, since the generation of expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
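As a rough illustration of such pretext objectives, the following minimal numpy sketch computes a generic InfoNCE/NT-Xent-style contrastive loss between embeddings of two augmented views of the same unlabeled samples. It is a textbook-style example of self-supervised contrastive learning in general, not one of the methods proposed in this thesis:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE-style contrastive loss between two embedding
    batches z1, z2 of shape (n, d). Row i of z1 and row i of z2 are
    assumed to come from two augmented views of the same unlabeled
    sample (positive pair); all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalise
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                      # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # positives on diagonal

# Matched positive pairs should yield a lower loss than mismatched ones:
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
loss_pos = info_nce_loss(z, z)        # each row paired with itself
loss_neg = info_nce_loss(z, z[::-1])  # rows paired with wrong partners
```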
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous, and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions than natural images, e.g. in dental X-rays. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amounts of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice which can intricately influence mid-latitude atmospheric circulation through tropo- and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging.
Model deficiencies in replicating observed Arctic climate processes often arise due to inadequacies in representing turbulent boundary layer interactions that determine the interactions between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes.
This thesis focuses on a modified representation of Arctic atmospheric processes and on understanding the resulting impact on large-scale mid-latitude atmospheric circulation within climate models. Improved turbulence parameterizations, recently developed on the basis of Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, the overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes.
Through a step-wise implementation of these parameterizations into ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, it is evident that both modifications consistently lead to a general decrease in the heat transfer coefficient, in close agreement with the observations.
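Why both changes push the heat transfer coefficient in the same direction can be seen from a generic Monin-Obukhov bulk formulation. The sketch below uses the simple log-linear stability function for the stable regime; the measurement-based stability functions and roughness lengths actually implemented in ECHAM6 for this thesis differ in detail:

```python
import math

KAPPA = 0.4  # von Karman constant

def heat_transfer_coefficient(z, z0, zeta, b=5.0):
    """Bulk transfer coefficient for sensible heat from Monin-Obukhov
    similarity, using the simple log-linear stability function
    psi = -b * zeta for stable stratification (zeta = z/L > 0).
    Illustrative only; assumes a single roughness length for
    momentum and heat."""
    psi = -b * zeta if zeta > 0 else 0.0
    profile = math.log(z / z0) - psi      # stability-corrected log profile
    return KAPPA**2 / profile**2

# Smoother ice (smaller z0) and stronger stability both reduce C_H:
assert heat_transfer_coefficient(10, 1e-4, 0.0) < heat_transfer_coefficient(10, 1e-3, 0.0)
assert heat_transfer_coefficient(10, 1e-3, 0.5) < heat_transfer_coefficient(10, 1e-3, 0.0)
```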
Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions.
The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on the specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, which is in line with the observations. Models employing commonly used surface turbulence parameterizations were shown to have difficulties replicating the near-zero sensible heat flux in strongly stable stratification.
I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state.
Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss.
The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss is evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions.
The simulation with adjusted surface turbulence parameterizations better reproduced the vertical extent of the observed Arctic tropospheric warming, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages: an increased occurrence of the Scandinavian-Ural blocking regime in early winter and of the negative phase of the North Atlantic Oscillation in late winter. Overall, it can be inferred that improving surface-layer turbulence parameterizations can improve ECHAM6's response to sea ice loss.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning options to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options at a fine-grained level. However, selecting cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose models that address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budget, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
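As a toy illustration of the joint selection problem such an LP solves, the following sketch exhaustively searches per-chunk tuning options under a memory budget. The option names and cost numbers are invented, and a real formulation would replace the brute force with an (integer) linear program:

```python
from itertools import product

# Hypothetical per-chunk tuning options: (name, memory_cost, scan_cost).
OPTIONS = [
    ("uncompressed",     10.0, 1.0),
    ("dictionary",        4.0, 1.5),
    ("dictionary+index",  6.0, 0.4),
]

def best_configuration(chunk_access_freq, memory_budget):
    """Pick one option per chunk, minimizing total frequency-weighted
    scan cost subject to the memory budget (exhaustive toy search)."""
    best, best_cost = None, float("inf")
    for combo in product(range(len(OPTIONS)), repeat=len(chunk_access_freq)):
        memory = sum(OPTIONS[i][1] for i in combo)
        if memory > memory_budget:
            continue
        cost = sum(f * OPTIONS[i][2]
                   for f, i in zip(chunk_access_freq, combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    return [OPTIONS[i][0] for i in best], best_cost

# One hot chunk (freq 100) and two cold ones, under a tight budget:
# the hot chunks earn an index, the coldest chunk only compression.
config, cost = best_configuration([100, 5, 1], memory_budget=16.0)
```

The exponential growth of this search space with the number of chunks is exactly why the mutually dependent decisions above call for an LP-based formulation rather than enumeration.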
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
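The decomposition idea can be sketched as follows; the split into a dictionary-encoded day component and a second-of-day component is an illustrative assumption, not the thesis's exact layout:

```python
# Attribute decomposition for timestamps: split each epoch-second value
# into a day component (few distinct values, so it dictionary-encodes
# well) and a second-of-day component.
SECONDS_PER_DAY = 86_400

def decompose(timestamps):
    days = [t // SECONDS_PER_DAY for t in timestamps]
    seconds = [t % SECONDS_PER_DAY for t in timestamps]
    # Dictionary encoding of the day column: distinct values + small codes.
    dictionary = sorted(set(days))
    codes = [dictionary.index(d) for d in days]
    return dictionary, codes, seconds

def recompose(dictionary, codes, seconds):
    return [dictionary[c] * SECONDS_PER_DAY + s
            for c, s in zip(codes, seconds)]

ts = [1_700_000_000, 1_700_000_060, 1_700_086_400]  # three observations
dictionary, codes, seconds = decompose(ts)
assert recompose(dictionary, codes, seconds) == ts  # lossless round trip
```

Because trajectory data clusters in time, the day column typically has orders of magnitude fewer distinct values than the raw timestamps, which is what makes the combined layout-and-compression choice workload dependent.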
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
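A minimal sketch of such a risk-averse rule, with hypothetical delay distributions and a made-up risk weight (the thesis's strategies derived from location probabilities are more elaborate):

```python
def dispatch_cost(delay_dist, critical=10.0, risk_weight=30.0):
    """Expected delay plus a penalty on the probability of a critical delay.

    delay_dist: list of (delay_minutes, probability) pairs.
    """
    expected = sum(d * p for d, p in delay_dist)
    tail = sum(p for d, p in delay_dist if d > critical)
    return expected + risk_weight * tail

def choose_driver(candidates, **kw):
    """Pick the candidate with the lowest risk-adjusted cost."""
    return min(candidates,
               key=lambda name: dispatch_cost(candidates[name], **kw))

candidates = {
    # Driver A: lower expected delay, but a fat tail of critical delays.
    "A": [(2.0, 0.8), (20.0, 0.2)],
    # Driver B: higher expected delay, but never critical.
    "B": [(7.0, 1.0)],
}
# A risk-neutral rule (risk_weight=0) prefers A; the risk-averse one B.
risk_neutral = choose_driver(candidates, risk_weight=0.0)
risk_averse = choose_driver(candidates)
```

Penalizing the tail rather than only the mean is what lets a dispatch strategy trade a slightly higher average delay for far fewer critical ones.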
(4) Finally, we use the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely extraction to functional languages using the Coq extraction plugin and extraction to Clight code using the CertiCoq plugin. The implementation of CertiCoq is verified, whereas that of the Coq extraction plugin is not; consequently, there is a correctness guarantee for the generated Clight code which does not hold for the code generated by the Coq extraction plugin. Furthermore, the differences between user space and kernel space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user space drivers both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type-safety of user space drivers. Furthermore, it is shown that the code being synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, resulting in the runtimes becoming compatible components of a Linux kernel module. Furthermore, justifications for the transformations are provided and possible further extensions to both plugins and solutions to failing garbage collection calls in kernel space are discussed.
The third part presents a proof of concept device driver for the Linux Kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq. Furthermore, some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the guarantee of correctness to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC speaker functionality with the original PC Speaker driver pointing out the weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications and open questions being raised.
The last part lists all sources used, separated into scientific literature, documentation and reference manuals, and artifacts, i.e., source code.
The plant cell wall plays several crucial roles during plant development, with its integrity acting as a key signalling component for growth regulation during biotic and abiotic stresses. Cellulose microfibrils, the principal load-bearing components of the primary cell wall, are synthesized by microtubule-associated CELLULOSE SYNTHASE (CESA) COMPLEXES (CSCs). Previous studies have shown that the CSC-interacting COMPANION OF CELLULOSE SYNTHASE (CC) proteins facilitate sustained cellulose synthesis during salt stress by promoting repolymerization of cortical microtubules. However, our understanding of cellulose synthesis during salt stress remains incomplete.
In this study, a pull-down of the CC1 protein led to the identification of a novel interactor, termed LEA-like. Phylogenetic analysis revealed that LEA-like belongs to the LATE EMBRYOGENESIS ABUNDANT (LEA) protein family, specifically to the LEA_2 subgroup, and is closely related to the CC proteins. Roots of double mutants of lea-like and its closest homolog emb3135 were hypersensitive when grown on cellulose synthesis inhibitors. Further analysis of higher-order mutants of lea-like, emb3135, and cesa6 demonstrated a genetic interaction between them, indicating a significant role in cellulose synthesis.
Live-cell imaging revealed that both LEA-like and EMB3135 migrated with the CSC at the plasma membrane along microtubule tracks, both under control conditions and after treatment with the microtubule-destabilizing drug oryzalin, suggesting a tight interaction. Investigation of fluorescently labeled lines expressing different domains of the LEA-like protein revealed that its N-terminal cytosolic domain colocalizes with microtubules, suggesting a physical association between the two.
Considering the established role of LEA proteins in abiotic stress tolerance, we performed phenotypic analyses of the mutants under various stresses. Growth of double mutants of lea-like and emb3135 on NaCl-containing media resulted in swelling of root cells, indicating a putative role in salt stress tolerance. Supporting this, the quadruple mutant lacking the LEA-like, EMB3135, CC1, and CC2 proteins exhibited a severe root growth defect on NaCl media compared to control conditions. Live-cell imaging further revealed that under salt stress the LEA-like protein forms aggregates in the plasma membrane.
In conclusion, this study has unveiled two novel interactors of the CSC that act together with the CC proteins to regulate plant growth in response to salt stress, providing new insights into the intricate regulation of cellulose synthesis, particularly under such conditions.
Leaves exhibit cells with varying degrees of shape complexity along the proximodistal axis. Heterogeneities in growth directions within individual cells bring about such complexity in cell shape. Highly complex and interconnected gene regulatory networks and signaling pathways have been identified that govern these processes. In addition, the organization of cytoskeletal networks and the mechanical properties of the cell wall greatly influence the regulation of cell shape. Research has shown that microtubules are involved in regulating cellulose deposition and the direction of cell growth. However, the contribution of the actin cytoskeleton to cell shape regulation has not been comprehensively studied.
This thesis provides evidence that actin regulates aspects of cell growth, division, and directional expansion that impact the morphogenesis of developing leaves. The jigsaw-puzzle-piece morphology of epidermal pavement cells serves as an ideal system to investigate the complex morphogenetic processes occurring at the cellular level. Here we have employed live-cell imaging to track the development of pavement cells under actin-compromised conditions. Genetic perturbation of the two predominantly expressed vegetative actin genes, ACTIN2 and ACTIN7, results in delayed emergence of cellular protrusions in pavement cells. Perturbation of actin also impacted the organization of microtubules in these cells, which is known to promote the emergence of cellular protrusions. Further, live-cell imaging of actin organization revealed a correlation with cell shape, suggesting that actin plays a role in influencing pavement cell morphogenesis.
In addition, disruption of actin leads to an increase in cell size along the leaf midrib, with cells becoming highly anisotropic due to reduced cell division. The reduction in cell division further impacted the morphology of the entire leaf, with mutant leaves being more curved. These results suggest that actin plays a pivotal role in regulating morphogenesis at the cellular and tissue scales, providing valuable insights into the role of the actin cytoskeleton in plant morphogenesis.
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to the Afrotheria, one of the four major clades of placental mammals. Sengis currently comprise twenty extant species, all of which are endemic to the African continent. They can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are found exclusively in forest habitats, the different soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering the answering of basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh-tissue samples for DNA extraction: the broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library protocols nowadays offer the opportunity to access DNA from museum specimens, collected over the past centuries and stored in natural history museums throughout the world. Thus, the difficulties in fresh-sample acquisition for molecular biological studies can be overcome by the application of museomics, the research field which emerged from those laboratory developments.
This thesis uses fresh-tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of macroscelidean evolutionary history. Chapter 4 focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh-tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore focuses on the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis; soft-furred sengis in particular are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, for example, speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus, some of which may correspond to previously described subspecies, while at least one was formerly unknown. It underscores the necessity of revising the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions driven by climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh-tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, two of which were assembled de novo. An extensive dataset of more than 8000 protein-coding one-to-one orthologs allows the time frame of sengi evolution found in Chapter 4 to be further refined and confirmed. The study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of potentially outstanding evolutionary importance and links them to the phenotypic traits they may affect. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis with that of other small African mammals does not reveal a sengi-specific pattern.
Collaboration between teachers and other professionals is an important element in models of inclusive school and classroom development as well as of school effectiveness. Although cooperation is postulated to be important, studies show that it has so far been practiced predominantly in autonomy-preserving forms, whereas more complex forms of collaboration in particular are considered conducive to development. Against the background of inclusive education and the goal of the best possible individual development of each student, the collaboration of teachers and other professionals is therefore a highly significant topic. It must be asked how collaboration between teachers and other professionals at inclusive primary and secondary schools is organized, which factors influence it, and what relevance the different forms of collaboration have in the process of inclusive school development. Taking up existing research desiderata, this dissertation focuses on the collaboration of teachers and other professionals as actually realized at inclusive primary and secondary schools, using the German state of Brandenburg as an example. Besides the realized forms of collaboration, the research interest centers on the identification of cooperation patterns of teachers and other professionals as well as of schools, and on their associations with students' achievement development.
The dissertation addresses a total of six research questions across three sub-studies. First, descriptive analyses and multilevel models are used to capture the initial state of multiprofessional cooperation (first research question) and its framework conditions (second research question) at the primary and secondary levels (sub-study 1). Teachers and other professionals cooperated predominantly in autonomy-preserving, exchange-based forms. Furthermore, individual openness to collaboration and the subjectively perceived support from the school administration proved to be important factors for the realization of multiprofessional cooperation. Research questions three and four concern the identification of patterns in cooperation behavior (sub-study 2): person-related profiles of teachers and other professionals (third research question) on the one hand, and school-related profiles (fourth research question) on the other, identified by means of the person-centered approach of latent profile analysis while accounting for the multilevel structure. Four profiles of individual cooperation behavior were identified, and three profiles of school-specific cooperation behavior. The majority of teachers and other professionals fell into the "regularly" profile, i.e., by their own assessment they cooperated more often than average through exchange and division of labor, but also regularly co-constructively. At the school level, about every second inclusive school in Brandenburg exhibited a highly developed cooperation culture. Sub-study 3 examines how school-specific cooperation cultures relate to students' achievement development at the primary and secondary levels.
Using autoregressive multilevel analyses, the association with the achievement development of all students (fifth research question) is examined, with a specific focus on the development of students with and without special educational needs (sixth research question). A central finding was that students with special educational needs at both the primary and secondary levels benefited most in their achievement development when they attended schools at which teachers and other professionals very regularly exchanged information about students' learning progress (exchange), developed and distributed work packages for differentiated learning opportunities (division of labor), and, in addition, occasionally developed solutions to problems together (co-construction).
The results are contextualized and discussed against the background of the postulated relevance of multiprofessional cooperation for inclusive school and classroom development processes. Furthermore, various practical implications for supporting multiprofessional collaboration at the primary and secondary levels are derived.
This thesis addresses the synthesis and polymerization of monomers based on renewable raw materials, for example the commercially available phenylpropanoids contained in spices and essential oils (eugenol, isoeugenol, cinnamyl alcohol, anethole, and estragole) and the terpenoid myrtenol, as well as starting materials obtained from the bark of birch (Betula pendula) and cork oak (Quercus suber). Selected phenylpropanoids (eugenol, isoeugenol, and cinnamyl alcohol) and the terpenoid myrtenol were first converted into the respective lauryl esters, and the olefinic structural element was subsequently epoxidized, yielding four new monofunctional epoxides (2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate, 2-methoxy-4-(3-methyloxiran-2-yl)phenyl dodecanoate, (3-phenyloxiran-2-yl)methyl dodecanoate, and (7,7-dimethyl-3-oxatricyclo[4.1.1.02,4]octan-2-yl)methyl dodecanoate) and two already known ones (2-(4-methoxybenzyl)oxirane and 2-(4-methoxyphenyl)-3-methyloxirane), which were characterized by 1H NMR, 13C NMR, and FT-IR spectroscopy as well as DSC. Photo-DSC investigation of the epoxide monomers in a cationic photopolymerization at 40 °C yielded the maximum polymerization rate (Rpmax: 0.005 s-1 to 0.038 s-1) and the time to reach it (tmax: 13 s to 26 s) and gave liquid oligomers whose number-average degree of polymerization was determined by GPC to be 3 to 6. The reaction of 2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate with methacrylic acid gave an isomer mixture (2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate and 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate), which was investigated by photo-DSC in a free-radical photopolymerization (Rpmax: 0.105 s-1, tmax: 5 s) that led to solid polymers insoluble in chloroform.
From cork powder and ground birch bark, two crystalline ω-hydroxy fatty acids (9,10-epoxy-18-hydroxyoctadecanoic acid and 22-hydroxydocosanoic acid) were selectively isolated. Cationic photopolymerization of 9,10-epoxy-18-hydroxyoctadecanoic acid gave an almost colorless, transparent film that is elastic at room temperature and has application potential for surface coatings. The reaction of 9,10-epoxy-18-hydroxyoctadecanoic acid with methacrylic acid gave a mixture of two constitutional isomers that is liquid at room temperature (9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid and 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid; Tg: -60 °C). The free-radical photopolymerization of these constitutional isomers was likewise investigated by photo-DSC (Rpmax: 0.098 s-1, tmax: 3.8 s). The reaction of 22-hydroxydocosanoic acid with methacryloyl chloride gave crystalline 22-(methacryloyloxy)docosanoic acid, which was also investigated in a free-radical photopolymerization by photo-DSC (Rpmax: 0.023 s-1, tmax: 9.6 s).
AIBN-initiated homopolymerization in dimethyl sulfoxide of 22-(methacryloyloxy)docosanoic acid and of the isomer mixtures consisting of 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate and 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate as well as of 9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid and 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid gave solid, soluble polymers, which were characterized by 1H NMR and FT-IR spectroscopy, GPC (poly(2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate): Pn = 94), and DSC (poly(2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate): Tg: 52 °C; poly(9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid / 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid): Tg: 10 °C; poly(22-(methacryloyloxy)docosanoic acid): Tm: 74.1 °C, a melting point comparable to that of the photopolymer (Tm = 76.8 °C)).
The known monomer 4-(4-methacryloyloxyphenyl)butan-2-one was prepared from 4-(4-hydroxyphenyl)butan-2-one, which can be obtained from birch bark, and was polymerized under identical conditions for comparison with the new monomers. Free-radical polymerization gave poly(4-(4-methacryloyloxyphenyl)butan-2-one) (Pn: 214, Tg: 83 °C). In addition to homopolymerization, a statistical copolymerization of the isomer mixture 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate with 4-(4-methacryloyloxyphenyl)butan-2-one was investigated; an equimolar feed of the starting monomers led to an increase in the yield, the molar mass distribution, and the dispersity of the copolymer (Tg: 44 °C). The free-radical homopolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one and of lauryl methacrylate, initiated by AIBN using diethyl carbonate as a "green" solvent, gave comparable degrees of polymerization (Pn: 150); owing to their structural differences, however, the homopolymers had markedly different glass transition temperatures (poly(4-(4-methacryloyloxyphenyl)butan-2-one): Tg: 70 °C, poly(lauryl methacrylate): Tg: -49 °C). Statistical copolymerization of equimolar amounts of the two monomers in diethyl carbonate led, at a polymerization time of 60 minutes, to a slightly preferential incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the copolymer (Tg: 17 °C).
Copolymerization diagrams for the free-radical copolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one with n-butyl methacrylate and with 2-(dimethylamino)ethyl methacrylate (t: 20 min to 60 min; mole fractions (X) of 4-(4-methacryloyloxyphenyl)butan-2-one: 0.2, 0.4, 0.6, and 0.8) showed nearly ideal azeotropic copolymerization behavior, although a slightly preferential incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the respective copolymer was observed. An increase in the yield and the glass transition temperature of the resulting copolymers correlated with an increasing content of 4-(4-methacryloyloxyphenyl)butan-2-one in the reaction mixture. The glass transition temperatures of the copolymers calculated with the modified Gibbs-DiMarzio equation agreed well with the measured values, providing a good basis for predicting the glass transition temperature of a copolymer of arbitrary composition.
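The composition dependence can be illustrated with the mole-fraction-linear estimate Tg = x1·Tg1 + x2·Tg2, a simplification of the Gibbs-DiMarzio treatment; the modified equation used in the thesis may contain additional fitted parameters, so this is illustrative only, using the homopolymer values reported above:

```python
def copolymer_tg(x1, tg1, tg2):
    """Mole-fraction-linear estimate of a copolymer's glass transition
    temperature (simplified Gibbs-DiMarzio form; temperatures in degC)."""
    return x1 * tg1 + (1.0 - x1) * tg2

# Homopolymer values from the abstract:
#   poly(4-(4-methacryloyloxyphenyl)butan-2-one): Tg = 70 degC
#   poly(lauryl methacrylate):                    Tg = -49 degC
tg_equimolar = copolymer_tg(0.5, 70.0, -49.0)  # linear estimate in degC
```

The linear estimate for the equimolar copolymer (10.5 °C) lies somewhat below the measured 17 °C, consistent with the slightly preferential incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one reported above.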
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
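The standardization idea can be sketched with a minimal relational schema. Table and column names here are hypothetical (the thesis's PostgreSQL model is considerably richer), and SQLite is used only to keep the example self-contained:

```python
import sqlite3

# Minimal relational sketch of a multi-proxy lake database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lake  (lake_id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
CREATE TABLE proxy (proxy_id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
CREATE TABLE measurement (
    lake_id  INTEGER REFERENCES lake(lake_id),
    proxy_id INTEGER REFERENCES proxy(proxy_id),
    depth_cm REAL,
    value    REAL
);
""")
conn.execute("INSERT INTO lake VALUES (1, 'Lake Elgygytgyn', 67.5, 172.1)")
conn.execute("INSERT INTO proxy VALUES (1, 'TOC', 'percent')")
conn.executemany("INSERT INTO measurement VALUES (1, 1, ?, ?)",
                 [(10.0, 1.2), (20.0, 0.9)])

# Once heterogeneous records share one schema, a cross-lake, per-proxy
# comparison becomes a plain join:
rows = conn.execute("""
    SELECT l.name, m.depth_cm, m.value
    FROM measurement m
    JOIN lake l USING (lake_id)
    JOIN proxy p USING (proxy_id)
    WHERE p.name = 'TOC'
    ORDER BY m.depth_cm
""").fetchall()
```

Keeping lakes, proxies, and measurements in separate, linked tables is what makes comparative numerical analyses across spatially separated lakes and across proxies possible.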
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency, which required consistent age calculations across all lakes and proxies using a single age-depth modeling tool. Recognizing the broader implications and potential pitfalls of this step, we developed the LANDO approach ("Linked Age and Depth Modelling"). LANDO is an innovative integration of multiple age-depth modeling tools into a single, cohesive platform (Jupyter Notebook). Beyond aggregating output from five established age-depth modeling tools, LANDO uniquely enables users to filter out implausible model outcomes using robust geoscientific data. The method thereby significantly enhances the accuracy and reliability of cross-lake analyses.
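At its core, the depth-to-age conversion interpolates sample depths between dated horizons. A minimal linear sketch with invented control points follows; LANDO itself aggregates far more sophisticated, largely Bayesian, age-depth models:

```python
def depth_to_age(depth, control_points):
    """Linearly interpolate a calibrated age for a sample depth.

    control_points: (depth_cm, age_cal_BP) pairs from dated horizons
    of a sediment core, assumed monotonic in both depth and age.
    """
    pts = sorted(control_points)
    for (d0, a0), (d1, a1) in zip(pts, pts[1:]):
        if d0 <= depth <= d1:
            return a0 + (a1 - a0) * (depth - d0) / (d1 - d0)
    raise ValueError("depth outside the dated interval")

# Hypothetical core with dated horizons at 0, 50, and 120 cm:
core = [(0.0, -70.0), (50.0, 4200.0), (120.0, 11600.0)]
age = depth_to_age(85.0, core)  # proxy sample between the two lower dates
```

Applying one such mapping per core converts every depth-indexed proxy series onto a shared age axis, which is the common frame of reference described above.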
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, even if the highest temperatures throughout our observation period do not correlate with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Considering the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding our dataset might offer even greater assurance in our conclusions.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question reviews fundamental model prerequisites for representing permafrost soils in land surface modeling. It includes a first-of-its-kind comparison of the LSMs in CMIP6, revealing their differences and shortcomings in key permafrost physics parameters. Each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system; choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and the available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
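Per depth, season, and region, the comparison against borehole temperatures reduces to simple bias and error statistics. A sketch with made-up numbers (not values from the evaluation itself):

```python
import math

def bias_and_rmse(model, obs):
    """Mean bias (model - obs) and RMSE over paired samples."""
    residuals = [m - o for m, o in zip(model, obs)]
    bias = sum(residuals) / len(residuals)
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return bias, rmse

# Made-up winter soil temperatures (degC) at one depth for four stations:
model_t = [-16.0, -12.5, -20.0, -9.0]
borehole_t = [-13.0, -11.0, -17.5, -8.5]
bias, rmse = bias_and_rmse(model_t, borehole_t)
# A negative bias means the model is colder than the observations --
# the systematic 'cold bias' pattern described above.
```

Stratifying such statistics by depth, month, and region rather than pooling them is what exposes where in the profile and in the seasonal cycle the cold bias originates.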
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted in Trail Valley Creek by \citet{dutch_impact_2022}, the fourth research question extends the application of the snow scheme proposed by \citet{sturm_thermal_1997} to the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis assesses its influence on simulated soil temperatures. Compared to observational datasets, the Sturm run exhibits a substantial reduction of the control run's cold bias in most regions, although there is a distinctive overshoot towards a warm bias in mountainous areas. The Sturm experiment also effectively addresses the overestimation of permafrost extent in the control run, albeit at the cost of a substantial reduction in permafrost extent over mountainous areas. ALT results remain relatively consistent compared to the control run. These outcomes align with our initial hypothesis that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
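The scheme of Sturm et al. (1997) parameterizes the effective thermal conductivity of snow from its density via a regression fitted to field measurements. A minimal sketch with the published coefficients; the function name and example densities are illustrative, and this is not the thesis's CLM5 implementation:

```python
def sturm_conductivity(rho):
    """Effective snow thermal conductivity (W m^-1 K^-1) from the Sturm et
    al. (1997) density regression; rho is snow density in g cm^-3.
    The quadratic branch was fitted for 0.156 <= rho <= 0.6."""
    if rho <= 0.156:
        return 0.023 + 0.234 * rho
    return 0.138 - 1.01 * rho + 3.233 * rho ** 2

# Denser snow conducts heat better, i.e. it insulates the soil less,
# which is why reduced insulation raises simulated winter soil temperatures.
for rho in (0.1, 0.2, 0.3, 0.4):
    print(f"rho={rho:.1f} g/cm^3 -> k={sturm_conductivity(rho):.3f} W/m/K")
```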
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
How do the rights of same-sex couples have to be ensured by states, and what kind of environmental obligations follow from the right to life and to personal integrity? Questions as diverse and far-reaching as these are regularly dealt with by the Inter-American Court of Human Rights in its advisory function. This book is the first comprehensive treatise on the advisory function of this Court not written in Spanish. It analyzes the scope of the Court's advisory jurisdiction and its procedural practice in comparison with that of other international courts. Moreover, the legal effects of the Court's advisory opinions are examined, as well as the question of when the Court should decline a request for an advisory opinion.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, short electromagnetic pulses are emitted into the subsurface by an antenna, which are then reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in three-dimensional (3D) fashion today.
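The inference from recorded travel times to, for example, the depth of the groundwater table rests on a simple relation: the electromagnetic wave velocity follows from the relative permittivity of the overburden, and the reflector depth from half the two-way travel time. A minimal sketch with illustrative names and values:

```python
# Basic GPR depth conversion: v = c / sqrt(eps_r), depth = v * t / 2.
C = 0.299792458  # speed of light in m/ns

def reflector_depth(twt_ns, eps_r):
    """Depth (m) of a reflector from its two-way travel time (ns) and the
    relative permittivity eps_r of the material above it."""
    v = C / eps_r ** 0.5  # wave velocity in m/ns
    return v * twt_ns / 2.0

# E.g. a water-table reflection at ~50 ns in unsaturated sand (eps_r ~ 4):
print(round(reflector_depth(50.0, 4.0), 2))  # -> 3.75
```

In practice eps_r is itself uncertain and depth-dependent, which is one reason attribute-based interpretation of the full 3D data volume is attractive.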
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
Legitimiertes Unrecht
(2024)
The Supreme Court of the GDR was an integral part of the socialist state leadership and was subject to rigid structures of thought and organization. It was closely bound into the political agenda of the SED and enjoyed no independence whatsoever. The court's interpretation of GDR law was oriented exclusively toward the domestic and foreign policy interests of the SED. This also applied to its jurisprudence in cases of Republikflucht (flight from the republic) and its statutory precursors. The highest judicial instance in the state was actively involved in shaping and implementing the criminal justice directed against those who fled the republic, which contributed substantially to consolidating the SED's hold on power. The present study analyzes judgments of the Supreme Court in their historical-political context and shows that its sentencing practice served party-political goals exclusively and was committed neither to the people nor to genuine adjudication. It further examines the Supreme Court's decisive contribution to the step-by-step criminalization of GDR citizens. This casts a critical light on the role of a legal system in safeguarding the rule of law and human rights under authoritarian regimes.
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changing operational paradigms, and requiring an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU's evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
A comprehensive study on seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of the earthquake occurrences. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties associated with CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly due to the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep neural network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. Results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability and when considering larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
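The two ingredients named above can be sketched briefly: the Coulomb failure stress change, and a ROC-based score of how well a stress metric separates cells with and without aftershocks. The sign convention, the effective friction value, and the toy data are illustrative, not taken from the thesis:

```python
import numpy as np

def delta_cfs(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change, dCFS = d_tau + mu' * d_sigma_n,
    with the normal stress change taken positive for unclamping
    (a common, but here purely illustrative, convention)."""
    return d_tau + mu_eff * d_sigma_n

def roc_auc(stress, has_aftershock):
    """Area under the ROC curve of a stress metric as a binary predictor of
    aftershock occurrence, via the rank-sum (Mann-Whitney) identity."""
    stress = np.asarray(stress, float)
    y = np.asarray(has_aftershock, bool)
    ranks = stress.argsort().argsort() + 1      # ranks 1..n (ties ignored)
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy grid: cells with positive stress change host the aftershocks -> AUC 1.
stress = [0.5, 0.2, -0.1, -0.3, 0.4, -0.2]
occurred = [True, True, False, False, True, False]
print(roc_auc(stress, occurred))  # -> 1.0
```

An AUC of 0.5 means no predictive skill; ranking competing stress metrics by AUC is what the ROC analysis above does at scale.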
To corroborate Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data, followed by a comprehensive analysis of Coulomb stress changes on the aftershock planes. By computing the Coulomb failure stress changes on the aftershock faults, I found that all large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and the associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and the induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
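The two seismicity laws analyzed here can be sketched with their standard textbook estimators, the Aki/Utsu maximum-likelihood b-value and the Omori-Utsu rate; the example magnitudes below are invented, and this is not the thesis's estimation code:

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood Gutenberg-Richter b-value for magnitudes
    binned with width dm, using only events with M >= completeness m_c:
    b = log10(e) / (mean(M) - (m_c - dm/2))."""
    m = [x for x in mags if x >= m_c]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2))

def omori_rate(t, k, c, p):
    """Omori-Utsu aftershock rate n(t) = K / (c + t)^p."""
    return k / (c + t) ** p

# A small (made-up) aftershock catalog with mean magnitude 2.2 above m_c=2.0:
print(round(b_value_mle([2.0, 2.2, 2.4, 2.1, 2.3], 2.0), 2))  # -> 1.74
print(omori_rate(1.0, 100.0, 0.1, 1.1))
```

A higher b-value means relatively fewer large aftershocks, so a positive b-versus-stress correlation, as found here, is the opposite of what laboratory stress-b systematics would suggest.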
The findings in this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid the development of more accurate probabilistic seismic hazard forecasts and risk reduction strategies.
This thesis presents a comprehensive exploration of the application of DNA origami nanofork antennas (DONAs) in the field of spectroscopy, with a particular focus on the structural analysis of Cytochrome C (CytC) at the single-molecule level. The research encapsulates the design, optimization, and application of DONAs in enhancing the sensitivity and specificity of Raman spectroscopy, thereby offering new insights into protein structures and interactions.
The initial phase of the study involved the meticulous optimization of DNA origami structures. This process was pivotal in developing nanoscale tools that could significantly enhance the capabilities of Raman spectroscopy. The optimized DNA origami nanoforks, in both dimer and aggregate forms, demonstrated an enhanced ability to detect and analyze molecular vibrations, contributing to a more nuanced understanding of protein dynamics.
A key aspect of this research was the comparative analysis between the dimer and aggregate forms of DONAs. This comparison revealed that while both configurations effectively identified oxidation and spin states of CytC, the aggregate form offered a broader range of detectable molecular states due to its prolonged signal emission and increased number of molecules. This extended duration of signal emission in the aggregates was attributed to the collective hotspot area, enhancing overall signal stability and sensitivity.
Furthermore, the study delved into the analysis of the Amide III band using the DONA system. Observations included a transient shift in the Amide III band's frequency, suggesting dynamic alterations in the secondary structure of CytC. These shifts, indicative of transitions between different protein structures, were crucial in understanding the protein’s functional mechanisms and interactions.
The research presented in this thesis not only contributes significantly to the field of spectroscopy but also illustrates the potential of interdisciplinary approaches in biosensing. The use of DNA origami-based systems in spectroscopy has opened new avenues for research, offering a detailed and comprehensive understanding of protein structures and interactions. The insights gained from this research are expected to have lasting implications in scientific fields ranging from drug development to the study of complex biochemical pathways. This thesis thus stands as a testament to the power of integrating nanotechnology, biochemistry, and spectroscopic techniques in addressing complex scientific questions.
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest neighbors methods and prove its statistical validity and power in mixed discrete-continuous data, as well as the asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation of synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
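To illustrate where a CI test enters constraint-based discovery, here is a toy sketch of the PC algorithm's skeleton phase with a pluggable CI oracle; this stands in for, and is far simpler than, the kNN-based test contributed in the thesis:

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    """Skeleton phase of the PC algorithm with a pluggable CI test.
    ci_test(x, y, s) must return True iff x is independent of y given set s;
    every edge x-y for which such an s is found is removed."""
    adj = {v: set(nodes) - {v} for v in nodes}
    l = 0
    while any(len(adj[x] - {y}) >= l for x in nodes for y in adj[x]):
        for x in nodes:
            for y in list(adj[x]):
                # Condition on subsets of x's other neighbors, size l.
                for s in combinations(sorted(adj[x] - {y}), l):
                    if ci_test(x, y, set(s)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
        l += 1
    return {frozenset((x, y)) for x in adj for y in adj[x]}

# Toy oracle for the chain A -> B -> C: A and C are independent given B.
def oracle(x, y, s):
    return {x, y} == {"A", "C"} and "B" in s

print(sorted(tuple(sorted(e)) for e in pc_skeleton(["A", "B", "C"], oracle)))
# -> [('A', 'B'), ('B', 'C')]
```

Every edge decision is delegated to `ci_test`, which is why an inaccurate CI test on mixed discrete-continuous data directly yields an incorrect causal structure.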
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater, where tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience the focus lies on the breakthrough curve (BTC), i.e., the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we have analysed the motion of tau proteins, which diffuse freely inside the axons of neurons. This free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Immobilisation durations that are long compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD for initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, where advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting with long immobilisation durations persists in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD.
This regime emerges for initially mobile and for initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we physically explain with the variance of mobile durations. Finally, we generalize the MIM to incorporate arbitrary immobilisation time distributions and focus on a Mittag-Leffler immobilisation time distribution with power-law tail ~ t^(-1-mu) with 0<mu<1 and diverging mean immobilisation durations. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters for plotting the displacement distributions and the MSD. We find Gaussian normal diffusion at short times and long-time power-law decay of mobile mass accompanied by anomalous diffusion at long times. The long-time diffusion is subdiffusive in the advection-free setting, while it is either subdiffusive for 0<mu<1/2 or superdiffusive for 1/2<mu<1 when advection is present. In the long-time limit we show equivalence of our model to a bi-fractional diffusion equation.
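A minimal Monte Carlo sketch of the two-state MIM described above: a tracer diffuses freely while mobile and is frozen while immobile, with exponentially distributed residence times in both states. All parameters are illustrative; with immobilisation durations long compared to mobile durations, the ensemble MSD of initially mobile tracers stalls at intermediate times:

```python
import math
import random

def mim_trajectory(t_max, dt, d_coeff, tau_m, tau_im, start_mobile=True):
    """One tracer of the mobile-immobile model: free 1D diffusion with
    coefficient d_coeff while mobile, frozen while immobile; residence times
    are exponential with means tau_m and tau_im (dt assumed << tau_m)."""
    x, t, mobile = 0.0, 0.0, start_mobile
    t_switch = random.expovariate(1 / (tau_m if mobile else tau_im))
    traj = []
    while t < t_max:
        traj.append(x)
        if mobile:
            x += random.gauss(0.0, math.sqrt(2 * d_coeff * dt))
        t += dt
        if t >= t_switch:
            mobile = not mobile
            t_switch = t + random.expovariate(1 / (tau_m if mobile else tau_im))
    return traj

def ensemble_msd(n, lag_steps, **kw):
    """Ensemble MSD at the given lag steps over n independent tracers."""
    trajs = [mim_trajectory(**kw) for _ in range(n)]
    return [sum(tr[k] ** 2 for tr in trajs) / n for k in lag_steps]

random.seed(1)
# tau_im >> tau_m: most initially mobile tracers bind quickly and then sit
# still, so the MSD grows at short times and stalls at intermediate times.
msd = ensemble_msd(500, [1, 10, 100], t_max=10.0, dt=0.01,
                   d_coeff=1.0, tau_m=0.1, tau_im=5.0)
print(msd)
```

Between lag steps 10 and 100 the elapsed time grows tenfold, but the MSD grows far less, which is the intermediate-time plateau discussed above.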
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming 2-4 times faster than the global average, resulting in a strong feedback on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline migrates northwards into tundra areas. The consequences of these shifting ecosystems are complex: on the one hand, boreal forests store large amounts of the global terrestrial carbon and act as a carbon sink, removing carbon dioxide from the global carbon cycle, suggesting an enhanced carbon uptake with increased tree cover. On the other hand, with the establishment of trees, the albedo of the tundra decreases, leading to enhanced soil warming. Meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Most land plants live in close symbiosis with bacterial and fungal communities, which sustain their growth in nutrient-poor habitats. However, the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore indispensable when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the effect of warming on boreal ecosystems; it is also the basis for agricultural strategies to sustain society with sufficient food in a future warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). The research findings across these projects enable comprehensive new insights into the relationships of soil microorganisms to the surrounding vegetation. First, this was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S) amplifying the ITS1 region was applied to samples from five boreal and arctic lakes. The obtained data showed that the establishment of fungal communities is impacted by warming, as the functional ecological groups shift: yeast and saprotroph dominance during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. The overall species richness also fluctuated. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), yielding results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed out a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
The second objective was to trace changes in fungus-plant interactions over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II). Further, shotgun sequencing data was compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, though a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation. Yeasts and lichens were dominant mainly during the Late Glacial with tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhizae and parasite abundance. In addition, we highlighted that Pinaceae establishment depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species also on long-term scales.
The third objective of the thesis was to assess soil community development on a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and the soil microbial community dynamics were compared to ecosystem turnover. In addition, podzolization processes on basaltic bedrock were reconstructed (Manuscript III). The recovered soil microbiome was further compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon, and is as such comparable between multiple geographic locations, or whether community establishment is driven by abiotic soil properties and thus the bedrock area. We showed that the development of soil communities is to a great extent driven by vegetation changes and temperature variation, while time only plays a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the microbiome at species level was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences between herbaceous taxa of the Late Glacial and taxa of the Holocene were revealed.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial time scales, as it furthers the understanding of long-term ecosystem and soil development processes and thus of plant establishment. Furthermore, I trace the long-term processes leading to podzolization, which supports the development of applied carbon capture strategies under future global warming.
Diglossic translanguaging
(2024)
This book examines how German-speaking Jews living in Berlin make sense and make use of their multilingual repertoire. With a focus on lexical variation, the book demonstrates how speakers integrate Yiddish and Hebrew elements into German for indexing belonging and for positioning themselves within the Jewish community. Linguistic choices are shaped by language ideologies (e.g., authenticity, prescriptivism, nostalgia). Speakers translanguage when using their multilingual repertoire, but do so in a diglossic way, using elements from different languages for specific domains.
The Arctic is the hot spot of the ongoing, global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than on global average. This amplified warming of the Arctic and the associated rapid changes of its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions within the Arctic climate. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget in association with air mass advection, which impact the seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Although the mechanisms described above, by which storms/cyclones impact the Arctic sea ice, are known in principle, a statistical quantification of these effects is lacking. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. In order to further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled utilizing various data sets, including atmospheric and oceanic reanalysis data as well as a coupled model simulation and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones are significantly impacting SIC in the Atlantic Arctic Ocean from autumn to spring, while there are mostly no significant impacts in summer. The strength and the sign (SIC decreasing or SIC increasing) of the cyclone impacts strongly depends on the considered daily time scale and the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter. A typical pattern consisting of an initial dominance of dynamic sea-ice changes followed by enhanced thermodynamic ice growth after the cyclone passage was found. This enhanced ice growth after the cyclone passage most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes are strongly varying from region to region and month to month. The strongest trends in cyclone impacts on SIC are found in autumn in the Barents and Kara Seas. Here, the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following the cyclone passage have particularly weakened in the Barents Sea in autumn. As a consequence, previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decrease in mean sea-ice concentration and thickness) and near-surface air temperature are most important for changed cyclone impacts on SIC, while changes in cyclone properties (i.e. intensity) do not play a significant role.
This work analyzed functional and regulatory aspects of the hitherto little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and for clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related, intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP localized specifically to non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full-deletion mutant, clearly showed that loss of EPSINOID2 leads to an increase in root hair density. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p. Interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming EPSINOID2 as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation. Treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif associated with degradation through a ubiquitin/proteasome-dependent pathway was identified in the EPSINOID2 sequence. In line with tight dosage regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
Mindful Eating
(2024)
Maladaptive eating behaviors such as emotional eating, external eating, and loss-of-control eating are widespread in the general population. Moreover, they are associated with adverse health outcomes and well known for their role in the development and maintenance of eating disorders and obesity (i.e., eating and weight disorders). Eating and weight disorders place a crucial burden on individuals as well as high costs on society in general. At the same time, corresponding treatments yield poor outcomes. Thus, innovative concepts are needed to improve the prevention and treatment of these conditions.
The Buddhist concept of mindfulness (i.e., paying attention to the present moment without judgement) and its delivery via mindfulness-based intervention programs (MBPs) have gained wide popularity in the area of maladaptive eating behaviors and associated eating and weight disorders over the last two decades. Though previous findings on their effects seem promising, the current assessment of mindfulness and its mere application via multi-component MBPs make it difficult to draw conclusions on the extent to which mindfulness-immanent qualities actually account for the effects (e.g., the modification of maladaptive eating behaviors). However, this knowledge is pivotal for interpreting previous effects correctly and for avoiding harm in particularly vulnerable groups such as those with eating and weight disorders.
To address these shortcomings, recent research has focused on the context-specific approach of mindful eating (ME) to investigate underlying mechanisms of action. ME can be considered a subdomain of generic mindfulness, describing it specifically in relation to the process of eating and associated feelings, thoughts, and motives, and thus including a variety of different attitudes and behaviors. However, there is no universal operationalization, and the current assessment of ME suffers from several limitations. Specifically, current measurement instruments are not suited for a comprehensive assessment of the multiple facets of the construct that are currently discussed as important in the literature. This in turn hampers comparisons of different ME facets that would allow their particular effects on maladaptive eating behaviors to be evaluated. This knowledge is needed to tailor the prevention and treatment of associated eating and weight disorders properly and to explore potential underlying mechanisms of action, which have so far been proposed mainly on theoretical grounds.
The dissertation at hand aims to provide evidence-based fundamental research that contributes to our understanding of how mindfulness, more specifically its context-specific form of ME, impacts maladaptive eating behaviors and, consequently, how it could be used appropriately to enrich the current prevention and treatment approaches for eating and weight disorders in the future.
Specifically, in this thesis, three scientific manuscripts applying several qualitative and quantitative techniques in four sequential studies are presented. These manuscripts were published in or submitted to three scientific peer-reviewed journals to shed light on the following questions:
I. How can ME be measured comprehensively and in a reliable and valid way to advance the understanding of how mindfulness works in the context of eating?
II. Does the context-specific construct of ME have an advantage over the generic concept in advancing the understanding of how mindfulness is related to maladaptive eating behaviors?
III. Which ME facets are particularly useful in explaining maladaptive eating behaviors?
IV. Does training a particular ME facet result in changes in maladaptive eating behaviors?
To answer the first research question (Paper 1), a multi-method approach spanning three sequential studies was applied to develop and validate a comprehensive self-report instrument to assess the multidimensional construct of ME: the Mindful Eating Inventory (MEI). Study 1 aimed to create an initial version of the MEI following a three-step approach: First, a comprehensive item pool was compiled by including selected and adapted items from the existing ME questionnaires and supplementing them with items derived from an extensive literature review. Second, the preliminary item pool was complemented and checked for content validity by experts in the fields of eating behavior and/or mindfulness (N = 15). Third, the item pool was further refined through qualitative methods: three focus groups comprising laypersons (N = 16) served as a check of applicability, and think-aloud protocols (N = 10) served as a final check of comprehensibility and to eliminate ambiguities.
The resulting initial MEI version was tested in Study 2 in an online convenience sample (N = 828) to explore its factor structure using exploratory factor analysis (EFA). The results were used to shorten the questionnaire in accordance with qualitative and quantitative criteria, yielding the final MEI version, which encompasses 30 items. These items were assigned to seven ME facets: (1) ‘Accepting and Non-attached Attitude towards one’s own eating experience’ (ANA), (2) ‘Awareness of Senses while Eating’ (ASE), (3) ‘Eating in Response to awareness of Fullness’ (ERF), (4) ‘Awareness of eating Triggers and Motives’ (ATM), (5) ‘Interconnectedness’ (CON), (6) ‘Non-Reactive Stance’ (NRS) and (7) ‘Focused Attention on Eating’ (FAE).
Study 3 sought to confirm the identified facets and the corresponding factor structure in an independent online convenience sample (N = 612) using confirmatory factor analysis (CFA). The study provided further evidence for the assumed multidimensionality of ME (the correlated seven-factor model was shown to be superior to a single-factor model). The psychometric properties of the MEI, regarding factorial validity, internal consistency, retest reliability, and observed criterion validity across a wide range of eating-specific and general health-related outcomes, showed the inventory to be suitable for a comprehensive, reliable and valid assessment of ME. These findings were complemented by demonstrating measurement invariance of the MEI regarding gender. In accordance with the factor structure of the MEI, Paper 1 offers an empirically derived definition of ME, overcoming ambiguities and problems of previous attempts at defining the construct.
To answer the second and third research questions (Paper 2), a subsample of Study 2 from the MEI validation studies (N = 292) was analyzed. Incremental validity of ME beyond generic mindfulness was shown using hierarchical regression models for the outcome variables of maladaptive eating behaviors (emotional eating and uncontrolled eating) and nutrition behaviors (consumption of energy-dense food). Multiple regression analyses were applied to investigate the impact of the seven ME facets (identified in Paper 1) on the same outcome variables. The following ME facets significantly contributed to explaining variance in maladaptive eating and nutrition behaviors: an Accepting and Non-attached Attitude towards one's own eating experience (ANA), Eating in Response to awareness of Fullness (ERF), Awareness of eating Triggers and Motives (ATM), and a Non-Reactive Stance (NRS, i.e., an observing, non-impulsive attitude towards eating triggers). The results suggest that these ME facets are promising variables to consider when a) investigating potential underlying mechanisms of mindfulness and MBPs in the context of eating and b) addressing maladaptive eating behaviors in general as well as in the prevention and treatment of eating and weight disorders.
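The incremental-validity logic of such hierarchical regressions, entering the generic predictor first and then checking how much variance the context-specific predictor adds, can be sketched as follows. All scores are synthetic and the generative coefficients are invented; this is not the thesis's data or software:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 292  # matches the subsample size reported above

# Synthetic standardized scores (invented): the outcome depends more
# strongly on the context-specific ME score than on generic mindfulness.
mindfulness = rng.standard_normal(n)
mindful_eating = 0.5 * mindfulness + rng.standard_normal(n)
emotional_eating = (-0.2 * mindfulness - 0.6 * mindful_eating
                    + rng.standard_normal(n))

def r_squared(predictors, y):
    """R^2 of an ordinary-least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: generic mindfulness only; step 2: mindful eating added on top.
r2_step1 = r_squared([mindfulness], emotional_eating)
r2_step2 = r_squared([mindfulness, mindful_eating], emotional_eating)
delta_r2 = r2_step2 - r2_step1  # incremental validity of ME
```

A positive `delta_r2` is the evidence of interest: the context-specific score explains outcome variance that the generic score does not.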
To answer the fourth research question (Paper 3), a training based on an isolated exercise (‘9 Hunger’) targeting the previously identified ME facet ATM was designed to explore its particular association with changes in maladaptive eating behaviors and thus to preliminarily explore one possible mechanism of action. The online study was realized using a randomized controlled trial (RCT) design. Latent Change Scores (LCS) across three measurement points (before the training, directly after the training, and three months later) were compared between the intervention group (n = 211) and a waitlist control group (n = 188). Short- and longer-term effects of the training were shown on maladaptive eating behaviors (emotional eating, external eating, loss-of-control eating) and associated outcomes (intuitive eating, ME, self-compassion, well-being). The findings serve as preliminary empirical evidence that MBPs might influence maladaptive eating behaviors through an enhanced non-judgmental awareness of, and differentiation between, eating motives and triggers (i.e., ATM). This mechanism of action had previously only been hypothesized on theoretical grounds. Since maladaptive eating behaviors are associated with eating and weight disorders, the findings can enhance our understanding of the general effects of MBPs on these conditions.
The integration of these findings leads to several suggestions for how ME might enrich future interventions on maladaptive eating behaviors, improving health in general or the prevention and treatment of eating and weight disorders in particular. Strengths of the thesis (e.g., a deliberately specific methodology, the variety of designs and methods, the high number of participants) are emphasized. The main limitations, particularly regarding sample characteristics (e.g., higher level of formal education, fewer males, self-selection), are discussed to arrive at an outline for future studies (e.g., including multi-modal, multi-method approaches, clinical eating disorder samples, and youth samples) to improve upcoming research on ME and the underlying mechanisms of action of MBPs for maladaptive eating behaviors and associated eating and weight disorders.
This thesis enriches current research on mindfulness in the context of eating by providing fundamental research on the core of the ME construct. Thereby it delivers a reliable and valid instrument to comprehensively assess ME in future studies as well as an operational definition of the construct. Findings on ME facet level might inform upcoming research and practice on how to address maladaptive eating behaviors appropriately in interventions. The ME skill ‘Awareness of eating Triggers and Motives (ATM)’ as one particular mechanism of action should be further investigated in representative community and specific clinical samples to examine the validity of the results in these groups and to justify an application of the concept to the general population as well as to subgroups with eating and weight disorders in particular.
In conclusion, the findings of the current thesis can be used to set future research on mindfulness, more specifically ME, and its underlying mechanisms in the context of eating on a more evidence-based footing. This knowledge can inform upcoming prevention and treatment efforts to tailor MBPs to maladaptive eating behaviors and associated eating and weight disorders appropriately.
Heat stress (HS) is a major abiotic stress that negatively affects plant growth and productivity. However, plants have developed various adaptive mechanisms to cope with HS, including the acquisition and maintenance of thermotolerance, which allows them to respond more effectively to subsequent stress episodes. HS memory includes type II transcriptional memory which is characterized by enhanced re-induction of a subset of HS memory genes upon recurrent HS. In this study, new regulators of HS memory in A. thaliana were identified through the characterization of rein mutants.
The rein1 mutant carries a premature stop codon in CYCLIN-DEPENDENT KINASE 8 (CDK8), which is part of the cyclin kinase module of the Mediator complex. rein1 seedlings show impaired type II transcriptional memory of multiple heat-responsive genes upon re-exposure to HS. Additionally, the mutants exhibit a significant deficiency in HS memory at the physiological level. Interaction studies conducted in this work indicate that CDK8 associates with the memory HEAT SHOCK FACTORs HSFA2 and HSFA3. The results suggest that CDK8 plays a crucial role in HS memory in plants together with other memory HSFs, which may be potential targets of the CDK8 kinase function. Understanding the role and interaction network of the Mediator complex during HS-induced transcriptional memory will be an exciting aspect of future HS memory research.
The second characterized mutant, rein2, was selected based on its strongly impaired pAPX2::LUC re-induction phenotype. Gene expression analysis revealed additional defects in the initial induction of HS memory genes in this mutant. In line with this observation, basal thermotolerance in rein2 was impaired similarly to HS memory at the physiological level. Sequencing of backcrossed bulk segregants with subsequent fine mapping narrowed the location of REIN2 to a 1 Mb region on chromosome 1. This interval contains the At1g65440 gene, which encodes the histone chaperone SPT6L. SPT6L interacts with chromatin remodelers and bridges them to the transcription machinery to regulate nucleosome and Pol II occupancy around the transcriptional start site. The EMS-induced missense mutation in SPT6L may cause the altered HS-induced gene expression in rein2, possibly triggered by changes in the chromatin environment resulting from altered histone chaperone function.
Expanding research on screen-derived factors that modify type II transcriptional memory has the potential to enhance our understanding of HS memory in plants. Discovering connections between previously identified memory factors will help to elucidate the underlying network of HS memory. This knowledge can initiate new approaches to improve heat resilience in crops.
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected 700 million patients by 2045, leading to economic burdens on societies. Type 2 diabetes mellitus (T2DM), representing more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing type 2 diabetes mellitus. Lifestyle modifications, namely a healthy diet and physical activity, are the primary successful treatment and prevention methods for type 2 diabetes mellitus. A problem is that patients often do not achieve recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. This dissertation project aims to investigate whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study investigated the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W; research question 1, sub-question b). The second study compared differences in exercise parameters between superimposed WB-EMS (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS (WB-EMS-NW) and conventional Nordic walking (CON-NW), on a treadmill (research question 2). Both studies were conducted in groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety, avoiding pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the test criteria, and five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rating of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a compilation of absolute and relative measures of reliability. Outcome measures in study 2 were analyzed using multifactorial analyses of variance.
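A common relative reliability measure for such test-retest designs is the intraclass correlation coefficient. The ICC variant and the rel.VO2peak values below are illustrative assumptions, not the thesis's data or its exact compilation of measures:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, consistency, single measurement.
    `ratings` is an (n subjects x k sessions) array."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    ss_rows = k * ((r.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((r.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Invented rel.VO2peak values (ml/kg/min) for five participants across
# three repeated walking-test sessions; consistent across sessions.
vo2peak = [[31.0, 30.5, 31.4],
           [27.2, 27.8, 27.0],
           [35.1, 34.6, 35.4],
           [29.9, 30.2, 29.5],
           [33.0, 33.4, 32.8]]
icc = icc_3_1(vo2peak)
```

For these invented values the between-subject variance dwarfs the within-subject session-to-session scatter, so the ICC comes out close to 1, the pattern behind a "good reliability" verdict.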
Results: Reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD and RPE, with no statistically significant differences for WB-EMS-W during the 10MISWT. However, no differences in the outcome variables were found compared to conventional walking. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW for the outcome variables VO2, rel.VO2 and lactate, with both factors leading to higher results. However, the differences in VO2 and relative VO2 are within the range of biological variability of ± 12%. The factor combination EMS∗W/NW is statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD and RPE during the 10MISWT with WB-EMS-W, confirming prior research on the test. In healthy, moderately active men, the test appears to be limited by technical rather than physiological factors. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W due to different perceptions of current intensity between exercise and rest. A treadmill test at constant walking speed was therefore conducted for the second part of the project, with the maximum tolerable current intensity adjusted individually. The treadmill test showed a significant increase in metabolic demand during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentrations. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies report higher VO2 values, our results are in line with those of other studies using the same stimulation frequency. Due to the minor clinical relevance, the use of WB-EMS as an exercise-intensification tool during walking and Nordic walking is limited. High device costs should also be considered. Habituation to WB-EMS could increase current intensity tolerance and VO2 and make it a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. The supposed benefits should be further investigated scientifically.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP does not scale well, existing approaches (including those for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we can flexibly extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
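As a toy illustration of the underlying assignment problem (not the thesis's actual ILP model), the sketch below brute-forces query-to-replica assignments so that each node stores only the fragments its queries touch, subject to a balanced load budget. Fragment names, sizes, and costs are invented; a real system would hand the same formulation to an ILP solver instead of enumerating:

```python
from itertools import product

# Toy instance: fragment sizes and read-only queries, each accessing a
# subset of fragments with a known cost (all values invented).
fragment_size = {"A": 4, "B": 2, "C": 3, "D": 1}
queries = {                     # query -> (accessed fragments, cost)
    "q1": ({"A", "B"}, 10),
    "q2": ({"B", "C"}, 10),
    "q3": ({"C", "D"}, 10),
    "q4": ({"A", "D"}, 10),
}
NODES = 2
MAX_LOAD = 20                   # per-node cost budget (balanced split)

def allocate():
    """Enumerate query->node assignments; a node stores exactly the
    fragments its queries touch. Return the feasible assignment that
    minimizes the total amount of stored data."""
    best = None
    for combo in product(range(NODES), repeat=len(queries)):
        load = [0] * NODES
        stored = [set() for _ in range(NODES)]
        for (frags, cost), node in zip(queries.values(), combo):
            load[node] += cost
            stored[node] |= frags
        if max(load) > MAX_LOAD:
            continue            # violates the load-balance constraint
        total = sum(fragment_size[f] for s in stored for f in s)
        if best is None or total < best[0]:
            best = (total, dict(zip(queries, combo)))
    return best

total_size, assignment = allocate()
```

Full replication would store all 10 size units on both nodes (20 in total); the optimized partial allocation stores noticeably less while splitting the query load evenly, which is exactly the trade-off the ILP models optimize at scale.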
In the GDR, the administration of justice was meant to serve the aims of politics and the construction and safeguarding of socialism. To realize these aims, the SED regime attempted in particular to influence the training of young lawyers. Based on the original sources held in the German Federal Archives, this study examines the requirements placed on legal studies in the GDR and the circumstances under which legal training took place. With particular attention to the selection, training, and continuing education of public prosecutors, the study sheds light on the so-called »Kaderarbeit« (cadre work) of the GDR judiciary as well as the essential conditions of admission, examination, study, and continuing education. The evaluation of the surviving archival material leads to the conclusion that the education and continuing training of GDR lawyers was shaped by planned and systematic political-ideological indoctrination and control in order to secure the aims of the socialist party.
Aging is associated with bone loss, which can lead to osteoporosis and a high fracture risk. This coincides with the enhanced formation of bone marrow adipose tissue (BMAT), suggesting a negative effect of bone marrow adipocytes on skeletal health. Increased BMAT formation is also observed in pathologies such as obesity, type 2 diabetes and osteoporosis. However, a subset of bone marrow adipocytes forming the constitutive BMAT (cBMAT) arises early in life in the distal skeleton, contains high levels of unsaturated fatty acids and is thought to serve a physiological function. Regulated BMAT (rBMAT) forms during aging and obesity in proximal regions of the bone and contains a large proportion of saturated fatty acids. Paradoxically, BMAT accumulation is also enhanced during caloric restriction (CR), a life-span-extending dietary intervention. This indicates that different types of BMAT can form in response to opposing nutritional stimuli, with potentially different functions.
To investigate this, two types of nutritional interventions, CR and a high-fat diet (HFD), both described to induce BMAT accumulation, were carried out. CR markedly increased BMAT formation in the proximal tibia and led to a higher proportion of unsaturated fatty acids, making this BMAT similar to the physiological cBMAT. Additionally, proximal and diaphyseal tibia regions displayed higher adiponectin expression. In aged mice, CR was associated with an improved trabecular bone structure. Taken together, these findings demonstrate that the type of BMAT that forms during CR might provide beneficial effects for local bone stem/progenitor cells and metabolic health. The HFD intervention performed in this thesis showed no effect on BMAT accumulation and bone microstructure. RNA-Seq analysis revealed alterations in the composition of the collagen-containing extracellular matrix (ECM).
To investigate the effects of glucose homeostasis on osteogenesis, the differentiation capacity of immortalized multipotent mesenchymal stromal cells (MSCs) and osteochondrogenic progenitor cells (OPCs) was analyzed. Insulin improved differentiation in both cell types; however, its combination with a high glucose concentration led to impaired mineralization of the ECM. In the MSCs, this was accompanied by the formation of adipocytes, indicating negative effects of the adipocytes formed under hyperglycemic conditions on mineralization processes. However, the altered mineralization pattern and structure of the ECM were also observed in OPCs, which did not form any adipocytes, suggesting further negative effects of a hyperglycemic environment on osteogenic differentiation.
In summary, the work provided in this thesis demonstrated that differentiation commitment of bone-resident stem cells can be altered through nutrient availability, specifically glucose. Surprisingly, both high nutrient supply, e.g. the hyperglycemic cell culture conditions, and low nutrient supply, e.g. CR, can induce adipogenic differentiation. However, while CR-induced adipocyte formation was associated with improved trabecular bone structure, adipocyte formation in a hyperglycemic cell-culture environment hampered mineralization. This thesis provides further evidence for the existence of different types of BMAT with specific functions.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. These companies must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analyses reveal that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. Furthermore, the analysis highlights that multi-agent systems and semi-heterarchical systems are rarely implemented in practical settings. A notable gap also remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimization complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research efforts and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
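The core hyper-heuristic idea, learning which low-level dispatching rule to apply rather than scheduling jobs directly, can be sketched with a simple epsilon-greedy bandit standing in for the thesis's deep reinforcement learning agent. The rules, job model, and all parameters below are invented for illustration:

```python
import random

random.seed(1)

# Two low-level dispatching rules the hyper-heuristic chooses between.
RULES = {
    "FIFO": lambda jobs: jobs[0],                        # first in, first out
    "SPT":  lambda jobs: min(jobs, key=lambda j: j[1]),  # shortest processing time
}

def run_episode(rule):
    """Dispatch a random batch of jobs with one rule and return the
    total flow time (lower is better)."""
    pending = [(i, random.randint(1, 9)) for i in range(6)]  # (id, proc. time)
    t = flow = 0
    while pending:
        job = RULES[rule](pending)
        pending.remove(job)
        t += job[1]        # job finishes after its processing time
        flow += t          # accumulate completion times
    return flow

# Epsilon-greedy rule selection: the agent learns which dispatching rule
# minimizes flow time instead of learning a full schedule, shrinking the
# action space to the number of rules.
value = {r: 0.0 for r in RULES}   # running mean flow time per rule
count = {r: 0 for r in RULES}
for _ in range(500):
    rule = (random.choice(list(RULES)) if random.random() < 0.2
            else min(value, key=value.get))  # explore or exploit
    reward = run_episode(rule)
    count[rule] += 1
    value[rule] += (reward - value[rule]) / count[rule]
```

Since SPT provably minimizes total flow time on a single machine, the bandit's value estimates converge to prefer it; a deep agent generalizes the same selection idea to state-dependent choices across many rules and resources.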
This book investigates why the »Genossenschaft für Reform im Judenthum«, possibly the most radical expression of Jewish reform to this day, emerged in Berlin in 1845. To this end, the major works of Sigismund Stern (1812–1867), the movement's founder, are systematically presented and placed in their historical context for the first time. The study makes clear that the founding of the Genossenschaft can only be understood against the backdrop of the manifold societal and inner-Jewish, religious and political upheavals of the Vormärz period and their theoretical-discursive underpinnings. The rise of the movement and the abrupt fading of its vitality after 1848 prove to be a mirror of the complex entanglements of German-Jewish philosophical-theological thought in the nineteenth century.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and constituted an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, many of which appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) constituted the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the view that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to archaeal communities, whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type.
The present thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
Starting from the observation that current digitalization research recognizes the ambivalence of digitalization but does not make it the object of its analyses, this cumulative dissertation focuses on the ambivalent dichotomy of potentials and problems that accompanies the digital transformation of organizations. Across six publications, a systems-theoretical view of organizations is used to reveal this tension-laden dichotomy with respect to three ambivalent relationships. First, regarding the relationship between digitalization and post-bureaucracy, it becomes clear that digital transformations have the potential to facilitate post-bureaucratic ways of working. At the same time, the problem arises that consensus-based post-bureaucratic structures impede digitalization initiatives, since these depend on a multitude of decisions. Second, with regard to the ambivalent relationship between digitalization and networking, organization-wide cooperation is enabled on the one hand, while on the other hand the danger of digitally voiced dissent opens up. In the third relationship, between digitalization and gender, new digital technologies suggest a potential for gender inclusion, while at the same time the problem of programmed-in gender biases arises, which often exacerbate discrimination. Juxtaposing potentials and problems not only makes the ambivalence of organizational digitalization analyzable and comprehensible; it also reveals that digital transformations entail a double formalization: organizations are not only confronted with the adjustments to formal structures that are typical of reforms, but must additionally make formal decisions about introducing and retaining technologies, and establish formal solutions in order to respond to unforeseen potentials and problems.
The aim of the dissertation is to provide an analytically generalized heuristic with which the achievements and opportunities of digital transformations can be identified, while at the same time their relationship to the simultaneously emerging challenges and follow-up problems can be explained.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising way to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce emissions of CO2, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) Cost: prices remain relatively high owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode material, which hinders widespread availability, especially in low-income countries. 2) Efficiency: the theoretical limit is around 29%, yet most commercially available silicon-based solar cells reach only 18–22%. 3) Temperature sensitivity: efficiency decreases as temperature rises, affecting power output. 4) Resource constraints: silicon as a raw material is not available in every country, creating supply chain challenges.
Perovskite solar cells emerged in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, the technology has attracted the attention of the solar cell community, representing a hope for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block that prevents perovskite solar cells from reaching the market. Lead is a heavy and bioavailable element, which makes perovskite solar cells an environmentally unfriendly technology. As a result, researchers are trying to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable because its electronic and atomic structure is similar to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, making them candidates for the highest possible single-junction solar cell efficiency, around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is poorly reproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) oxidation of tin(II) to tin(IV), which occurs through exposure to oxygen or water, or even through the effect of the solvent, as was discovered recently; 2) fast crystallization dynamics, which arise from the lateral exposure of the p-orbitals of the tin atom, enhancing its reactivity and accelerating crystallization; 3) energy band misalignment: the energy bands at the interfaces between the perovskite absorber material and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which degrades the photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new, chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskites. Finding a stable solvent might therefore make all the difference for the stability of tin-based perovskites. We started with a database of over 2,000 solvents and narrowed it down to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that it is possible to fabricate solar cells using a novel solvent system that outperforms those produced with DMSO. Our results offer guidance for the search for novel solvents, or solvent mixtures, that can be used to fabricate stable tin-based perovskites.
The precise control of perovskite precursor crystallization inside a thin film is of utmost importance for optimizing the efficiency and manufacturing of solar cells. Depositing tin-based perovskite films from solution is difficult because tin crystallizes much faster than the more commonly employed lead perovskite. The established route to high efficiencies uses dimethyl sulfoxide (DMSO) as the deposition medium, since this solvent slows the rapid assembly of the tin-iodine network that underlies perovskite formation. Nevertheless, this methodology is limited because DMSO oxidizes tin during processing. In this thesis, we present a potentially advantageous alternative in which 4-(tert-butyl)pyridine substitutes for DMSO in regulating crystallization while avoiding the undesired tin oxidation. Perovskite films deposited from pyridine show a notably reduced defect density, resulting in higher charge mobility and improved photovoltaic performance. Consequently, the use of pyridine for the deposition of tin perovskite films is considered advantageous.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published in the current body of research contradict one another, resulting in a lack of consensus. Moreover, comprehensive information about the dynamics of charge extraction is lacking. This thesis aims to ascertain the energy band positions of tin perovskites using the Kelvin probe and photoelectron yield spectroscopy methods, and to construct a precise band diagram for the commonly utilized device stack. A comprehensive analysis is also performed to assess the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was utilized to investigate the charge extraction kinetics of frequently studied charge transport layers, such as NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, Kelvin probe, and TRPL approaches accurately ascertain the p-doping concentration in FASnI3; the results consistently yielded a value of 1.5 × 10¹⁷ cm⁻³. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently of those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on the use of two solvents. The first dissolves the perovskite powder to form the precursor solution and is usually called the solvent. The second precipitates the perovskite precursor, forming the wet film: a supersaturated solution of the perovskite precursor in the remains of the solvent and the antisolvent. This wet film then crystallizes upon annealing into a fully crystallized perovskite film. In our research, we proposed new solvents to dissolve FASnI3, but when we tried to form a film, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which we tested around 73 selected antisolvents against 15 solvents that can form a 1 M FASnI3 solution. For the first time in tin perovskite research, we used a machine learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent. We relied on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair and found that the relative polarity between solvent and antisolvent is the primary factor governing their interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest reported so far for a DMSO-free tin perovskite device.
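The screening logic described above — ranking solvent-antisolvent pairs by a polarity-based descriptor and learning a cutoff that separates dark (crystallized) films from failed ones — can be sketched in a few lines. Everything below is invented for illustration: the polarity values, the solvent/antisolvent names other than DMSO, toluene, and chlorobenzene, the darkness labels, and the single-feature decision stump standing in for the thesis's actual machine learning pipeline.

```python
# Hypothetical relative polarities (water = 1.0); values are illustrative only.
solvents = {"DMSO": 0.44, "solvent_A": 0.39, "solvent_B": 0.31}
antisolvents = {"toluene": 0.10, "chlorobenzene": 0.19, "anisole": 0.20, "ether_X": 0.12}

# Invented screening outcomes: 1 = dark (well-crystallized) film, 0 = no film.
observations = [
    ("DMSO", "toluene", 1), ("DMSO", "chlorobenzene", 1),
    ("solvent_A", "toluene", 1), ("solvent_A", "anisole", 0),
    ("solvent_B", "chlorobenzene", 0), ("solvent_B", "anisole", 0),
    ("DMSO", "ether_X", 1), ("solvent_B", "ether_X", 1),
]

def feature(s, a):
    """Single descriptor: relative-polarity gap between solvent and antisolvent."""
    return abs(solvents[s] - antisolvents[a])

def fit_threshold(obs):
    """Decision stump: find the polarity-gap cutoff that best separates dark films."""
    xs = sorted(feature(s, a) for s, a, _ in obs)
    best_t, best_acc = xs[0], 0.0
    for t in xs:
        acc = sum((feature(s, a) >= t) == bool(y) for s, a, y in obs) / len(obs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = fit_threshold(observations)
print(f"darkness predicted when polarity gap >= {t:.2f} (training accuracy {acc:.0%})")
```

The sketch captures only the central finding reported in the abstract — that relative polarity is the dominant predictor — not the feature set or model family actually used in the study.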
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporation of this phenomenon, which causes strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy in which I target generalizing applicability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability that an earthquake will cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
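The second step — mapping summary statistics of a pseudo-intensity collection to an impact probability — can be sketched with a logistic model. The features (report count, mean intensity, maximum intensity) and all weights below are hypothetical placeholders chosen for illustration; the study's actual model, features, and calibration are not reproduced here.

```python
import math

def impact_probability(intensities, w0=-6.0, w_count=0.8, w_mean=0.6, w_max=0.5):
    """Logistic model mapping summary statistics of citizen pseudo-intensity
    reports to a probability of considerable societal impact.
    All weights are invented placeholders, not calibrated values."""
    n = len(intensities)
    if n == 0:
        return 0.0                      # no reports: no evidence of impact
    features = (math.log(n),            # volume of reports (log-scaled)
                sum(intensities) / n,   # mean reported intensity
                max(intensities))       # strongest reported shaking
    z = w0 + w_count * features[0] + w_mean * features[1] + w_max * features[2]
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid -> probability in (0, 1)

# A felt-but-harmless event vs. one with many high-intensity reports:
print(impact_probability([2, 3, 2, 3]))          # low probability
print(impact_probability([5, 7, 8, 6, 9] * 40))  # probability near 1
```

The appeal of such a formulation, as the abstract notes, is speed: the summary statistics are available within minutes of an event, long before detailed loss estimates can be computed.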
The application of machine learning methods to datasets that only partially reveal the characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the approaches developed here to growing and increasingly complex datasets.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga C. reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, expression remains inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other expression hosts. Other microalgal species might be better suited for high-level protein expression but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are episomally maintained as autonomously replicating plasmids in the nucleus at high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate the parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Having established efficient generation of transgenic lines, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs that direct efficient secretion of the expressed proteins into the culture medium, simplifying purification and harvesting of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum but found only a minor increase in YFP accumulation.
We employed these findings to express complex viral antigens from the hepatitis B and hepatitis C viruses in P. purpureum, demonstrating its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and reached their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully upscaled the biomass production of transgenic lines, thereby providing enough material for immunization trials in mice performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens, and the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a promising new production organism for biopharmaceuticals in the microalgal field.
Illness anxiety in different populations and the effectiveness of outpatient behavioral therapy
(2023)
This work traces how an inequality between Black and white people has grown historically and legally in Germany and examines what requirements constitutional law, legal practice, and politics must meet in order to redress it.
First, the development of the prohibition of racial discrimination in international and national law is set out. The author then traces the history of discrimination against Black people. To overcome the structural discrimination that persists to this day, she proposes a positive right that draws on human rights standards and approaches from comparative law and is intended to bring about the equality of Black people.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Nils-Hendrik Grohmann examines the ongoing process of strengthening the UN human rights treaty bodies. He analyzes which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far harmonized their working procedures. A further focus lies on the cooperation between the various committees and the question of what role the meeting of chairpersons can play in the strengthening process.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
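As background for the techniques discussed in this thesis: a treemap recursively subdivides a rectangle so that each node's area is proportional to a weight (e.g., lines of code), and a 3D embedding can then add further attributes, such as height, on top of this 2D footprint. The sketch below shows the classic slice-and-dice layout on a toy module hierarchy; it is a generic textbook algorithm with invented example data, not code from the framework presented in the thesis.

```python
def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """Assign each tree node a rectangle whose area is proportional to its
    weight, alternating split direction per depth (slice-and-dice layout)."""
    if out is None:
        out = []
    out.append((node["name"], (x, y, w, h)))
    children = node.get("children", [])
    total = sum(c["weight"] for c in children)
    offset = 0.0
    for c in children:
        frac = c["weight"] / total
        if depth % 2 == 0:  # split horizontally at even depths
            slice_and_dice(c, x + offset * w, y, w * frac, h, depth + 1, out)
        else:               # split vertically at odd depths
            slice_and_dice(c, x, y + offset * h, w, h * frac, depth + 1, out)
        offset += frac
    return out

# Toy module hierarchy weighted by (hypothetical) lines of code.
tree = {"name": "src", "weight": 100, "children": [
    {"name": "core", "weight": 60, "children": [
        {"name": "core/render", "weight": 45},
        {"name": "core/util", "weight": 15},
    ]},
    {"name": "ui", "weight": 40},
]}

for name, rect in slice_and_dice(tree, 0, 0, 1.0, 1.0):
    print(f"{name:12s} x={rect[0]:.2f} y={rect[1]:.2f} w={rect[2]:.2f} h={rect[3]:.2f}")
```

Production treemap layouts usually prefer squarified tiling for better aspect ratios, but the area-proportionality invariant illustrated here is the same one the 3D-embedded variants build on.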
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
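The level-of-detail idea — collapsing subtrees whose degree of interest falls below a threshold and rendering them as single aggregates — can be sketched as follows. The degree-of-interest function used here (a node's share of total weight) and the threshold are illustrative stand-ins, not the measures developed in the thesis.

```python
def total_weight(node):
    """Sum of leaf weights in a subtree."""
    children = node.get("children", [])
    return node["weight"] if not children else sum(total_weight(c) for c in children)

def aggregate(node, doi, threshold):
    """Return a copy of the tree in which every subtree whose degree of
    interest falls below `threshold` is collapsed into a single summary leaf."""
    if doi(node) < threshold and node.get("children"):
        return {"name": node["name"] + " (aggregated)",
                "weight": total_weight(node)}
    out = {"name": node["name"], "weight": node["weight"]}
    if node.get("children"):
        out["children"] = [aggregate(c, doi, threshold) for c in node["children"]]
    return out

# Toy module hierarchy with (hypothetical) lines-of-code weights.
tree = {"name": "src", "weight": 100, "children": [
    {"name": "core", "weight": 60, "children": [
        {"name": "core/render", "weight": 45},
        {"name": "core/util", "weight": 15},
    ]},
    {"name": "ui", "weight": 40, "children": [
        {"name": "ui/widgets", "weight": 25},
        {"name": "ui/theme", "weight": 15},
    ]},
]}

# Degree of interest: a node's share of the total weight (illustrative only).
doi = lambda n: n["weight"] / 100
pruned = aggregate(tree, doi, threshold=0.5)
print([c["name"] for c in pruned["children"]])  # ['core', 'ui (aggregated)']
```

The collapsed subtree keeps its summed weight, so a subsequent layout pass still allocates it a proportional footprint — decluttering without distorting the overall area budget.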
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization, and trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases; plant-based diets rich in health-promoting plant secondary metabolites are therefore desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated, and the influence of cultivation conditions was assessed with a view to improving the nutritional quality of halophytes for human consumption. To this end, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The species were selected according to their salt tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid content. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for saline indoor farming
of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and the light regime in indoor farming and greenhouse cultivation was studied. It was shown that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls. Furthermore, glucosinolate and nitrate contents were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. It was shown that UVB light affects the growth, phenotype, and metabolite profile of halophytes and that the UVB response is species-specific. Furthermore, a modulation of carotenoid content in S. europaea could be achieved to enhance health-promoting properties and thus improve nutritional quality. This effect was shown to be dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated. It was revealed that carotenoid accumulation is related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes, produced in a saline indoor farming system, as alternative vegetables for future diets that could contribute to ensuring food security. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into it. Since the nutritional properties have been shown to be influenced by salt, light regime, and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
Sexualität in der Geschichte
(2024)
In this volume, Jelena Tomović traces the development of our sexual language and practices. She shows that the way sexuality is talked about is not only a mirror of social change but also a driving factor behind it. The study questions conventional notions of sexuality and introduces readers to a world of subtle nuances and cultural shifts. Drawing on communication-theoretical approaches, a praxeological approach, a social-constructivist premise, and a clear focus on actors, the author offers a fresh perspective on the history of sexuality. The book opens up new avenues for the study and understanding of intimacy and social communication.
Cross-sectional associations of dietary biomarker patterns with health and nutritional status
(2024)
Beerdigen oder verbrennen?
(2024)
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism to drive species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and ontogeny. First, I produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared gene expression patterns between electric organs and skeletal muscles in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g. KCNJ2, KLF5, KCNK6, and KCNQ5, may contribute to the regulation of EOD duration in Campylomormyrus through their increased or decreased expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intra-genus hybrids obtained by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in regulating EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference materials) were characterized outside of a polymer matrix for the first time. All of them contained more fatty acids than only the main fatty acid specified by the manufacturer. Furthermore, they were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affects carotenoid levels in lettuce, but not flavonoids, caffeic acid derivatives and chlorophylls. Specifically, carotenoids were higher in lettuce grown under polytunnels without antifog than with antifog. This has been linked to their effect on the light regime and was suggested to be related to carotenoid function in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, which are closely related. The carotenoid and flavonoid contents of lettuce grown under polytunnels were inversely affected, with higher carotenoid and lower flavonoid levels. At the individual level, the flavonoids detected in lettuce did not differ; lettuce carotenoids, however, adapted specifically depending on the time of cultivation. The flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme in carotenoid biosynthesis (PSY) and a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding the regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to produce a detectable GeV-scale cascade emission that is missing in the observations. The suppression of this secondary cascade implies either the deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam due to the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to significantly deflect the pair beams, reducing the flux of the secondary cascade below the observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam losing energy to the instability before inverse Compton cooling takes effect. This dissertation consists of two studies on the role of the instability in the evolution of blazar-induced beams.
Firstly, we investigated the effect of sub-fG IGMFs on the beam energy loss due to the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we know of no viable process that could explain the observed absence of GeV-scale cascade emission, and hence this range of field strengths can be excluded.
Secondly, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angle significantly without any significant energy loss, confirming a recent feedback study based on a simplified one-dimensional beam distribution. However, narrowing diffusion feedback on beam particles with Lorentz factors below 10^6 might become relevant even though it is initially negligible. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state, in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This new intrinsic scattering of the cascade can result in time delays of around ten years, thus potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse Compton cooling and simulating the beam-plasma system at different points in the IGM.
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and in adults. Coarticulation refers to the mismatch between abstract phonological units and their seemingly commingled realization in continuous speech. As a process at the intersection of phonology and phonetics, studying its changes across childhood allows for insights into speech motor as well as phonological development. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children’s coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One of the reasons for this lack is the difficulty of articulatory data acquisition in a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the hitherto largest corpus of articulatory data from children using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range and a thoroughly controlled set of pseudowords allowed for statistically powerful investigations of a process known to be variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured in the horizontal position of the highest point of the tongue dorsum. Based on three studies on a) anticipatory coarticulation towards the left and b) carryover coarticulation towards the right side of the utterance, as well as c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I deduce the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts, German es-clefts, as well as with the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties which are cross-linguistically typical for clefts: existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, the exhaustivity effects are not as strong as those in structures with the exclusive particle only, and require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of both biclausality and monoclausality. Besides, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new-information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant: it is a topic expression; it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; and it plays an important role in the spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and shows evidence that the cleft pivot undergoes movement to the left peripheral position. Èto is proposed to be TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to those for clefts in German, French, and Akan. It is concluded that speakers use different tools available in their languages to produce structures with similar interpretive properties.
Development of a CRISPR/Cas gene editing technique for the coccolithophore Chrysotila carterae
(2024)
Within the Quito-Guayllabamba intermontane basin of Ecuador, five unusually large colluvial deposits from ancient landslides were identified and analyzed in this study. The large MM-5 Guayllabamba rotational landslide is the most extensive, with a volume of 1,183 million m³. The MM-1 Conocoto, MM-3 Oyacoto, and MM-4 San Francisco mega debris avalanches were originally triggered by an initial rupture associated with a rotational slide; the corresponding deposits have volumes between 399 and 317 million m³. Finally, the smallest deposit, the MM-2 Batán rotational landslide and debris fall, has a volume of 8.7 million m³. In this thesis, a detailed study of these large mass movements was carried out using neotectonic and litho-tephrostratigraphic methods to understand the geological and geomorphological boundary conditions that could be relevant for triggering these mass movements. The neotectonic part of the study was based on the qualitative and quantitative geomorphological analysis of these large mass-movement deposits, through the structural characterization of anticlines located east of the Quito sub-basin and their collapsed flanks, which constitute the rupture areas. This part of the analysis was further supported by the application of different morphometric indices to reveal tectonically forced landscape-evolution processes that may have contributed to the generation of mass movements. The litho-tephrostratigraphic part of the study was based on the analysis of the petrographic, geochemical, and geochronological characteristics of soil horizons and intercalated volcanic ashes, with the aim of constraining the chronology of the individual mass-movement events and their possible correlation.
The results were integrated into chronostratigraphic schemes using rupture surfaces as well as cross-cutting and superposition relationships of landslide deposits and overlying strata, in order to understand the mass movements in the tectonic and temporal context of the intermontane basin setting and to identify the triggering mechanisms of each event. The MM-5 Guayllabamba mass movement resulted from the collapse of the southwestern slope of the Mojanda volcano and was triggered by the interaction of geological and morphological conditions approximately 0.81 Ma ago. The first debris-avalanche episode of the MM-3 Oyacoto and MM-4 San Francisco mass movements could be related to both geological and morphological conditions, given the highly fractured rocks and the uplift of the Bellavista-Catequilla anticline, which was subsequently incised at the foot of the slope by fluvial erosion. This first collapse episode probably occurred around 0.8 Ma. The MM-2 Batán mass movement was possibly also triggered by a combination of geological and morphological conditions, associated with a reduction of lithostatic stresses affecting the Chiche and Machángara formations and an increase in shear stresses during lateral fluvial undercutting at the flanks of the source areas. This points to a coupled process of fluvial erosion and uplift associated with the evolution of the El Batán-La Bota anticline that could have occurred between 0.5 and 0.25 Ma. The voluminous MM-1 Conocoto debris avalanche, as well as the second debris-avalanche episode that generated the MM-3 Oyacoto and MM-4 San Francisco mass movements, was caused by the gravitational collapse of the Mojanda and Cangahua formations, which are characterized by intercalated volcanic ashes.
The failure of the eastern flank of the anticlines was probably associated with increased available moisture related to regional Holocene climatic variations. The paleosol chronology results, combined with regional chronostratigraphic and paleoclimatic data, suggest that these debris avalanches were triggered between 5 and 4 ka.
Active tectonics has shaped the morphological features of the Quito-Guayllabamba intermontane basin. The triggering of mass movements in this environment is associated with ruptures in Pleistocene lithologies (lacustrine sediments, alluvial and volcanic deposits) subjected to deformation processes, seismic activity, and superimposed episodes of climatic variability. The Metropolitan District of Quito is an integral part of this complex setting and of the geological, climatic, and topographic conditions that continue to influence the urban geographic space within this intermontane basin. The city of Quito comprises the most densely urbanized area, including the Quito and San Antonio sub-basins, with a population of 2.872 million inhabitants, which reflects the importance of studying the geological and climatic hazards inherent to this region.
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities around the prospective RT—precisely, its target population and outcome, possibly applied covariates, the concrete design as well as the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
The overarching endeavor of the present doctoral thesis was to substantively expand the available resources devoted to refining the planning of RTs evaluating educational interventions. At the core of this thesis is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis for designing strong RTs. To this end, the thesis at hand bundles two complementary studies which capitalize on rich data from several national probability samples of major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires in six major dimensions: Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, (6) which were cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
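To illustrate how such variance design parameters enter a power analysis, the following sketch applies the widely used minimum-detectable-effect-size (MDES) approximation for a two-level cluster-randomized trial. It is not taken from the thesis: all numbers (cluster counts, the intraclass correlation `rho`, the covariate R² values) are illustrative assumptions, and the multiplier `M` uses the common normal approximation for α = .05 (two-sided) and power = .80.

```python
import math

def mdes(J, n, rho, R2_L2=0.0, R2_L1=0.0, P=0.5, M=2.8):
    """Minimum detectable effect size (in SD units) for a two-level
    cluster-randomized trial, normal approximation.
    J: number of clusters (e.g. schools); n: students per cluster;
    rho: intraclass correlation (a design parameter of the outcome);
    R2_L2 / R2_L1: outcome variance explained by covariates at the
    cluster / student level; P: proportion of clusters treated;
    M: multiplier (~2.8 for alpha=.05 two-sided, power=.80)."""
    var_term = (rho * (1 - R2_L2)) / (P * (1 - P) * J) \
             + ((1 - rho) * (1 - R2_L1)) / (P * (1 - P) * J * n)
    return M * math.sqrt(var_term)

# Illustrative comparison: no covariates vs. a strong pretest.
print(round(mdes(40, 25, rho=0.20), 3))                            # ~0.426
print(round(mdes(40, 25, rho=0.20, R2_L2=0.75, R2_L1=0.50), 3))    # ~0.227
```

The comparison shows the practical point of the compiled design parameters: plugging in realistic `rho` and covariate R² values roughly halves the MDES in this example, which is why mismatched design parameters can badly over- or understate the required sample size.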
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a—so far unique—nuanced and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire to spawn robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences (Study 1) and German passives (Study 2). Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions as well as language-internal and language-external factors were investigated as potential predictors of children’s sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. Performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used in the passive study, with respect to subject-(finite) verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors in children’s sentence comprehension were considered. The parents’ education, the number of older siblings, and language experience variables were derived from a language background questionnaire completed by the parents. Scores of receptive vocabulary and grammar, visual and short-term memory, and reasoning ability were measured by means of standardized tests. It was shown that higher German language experience in bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that allow the cell to adapt to its environment and to move and distribute nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms: the flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1). These organisms were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effect of four different mutation methods on the function of the Coronin-A protein and the resulting phenotype in D. discoideum was revealed using two genetic knockouts, one RNAi knockdown, and a sudden loss-of-function mutant created by chemically induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance, and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in the plant and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem: DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulations, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study includes both a logistical-organizational effort and data analysis. As for the former, three GNSS ground stations were installed in remote locations in northwestern Argentina, where no third-party data were available, to acquire observations.
The methodological developments for observing the two climate variables, soil moisture and water vapour, are independent and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be operationally employed. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. Water-vapour monitoring with GNSS observations, on the other hand, is a well-established technique that is utilized operationally. Hence, the scope of this study is to conduct a meteorological analysis by examining along-the-zenith air-moisture levels and introducing indices related to the azimuthal gradient.
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
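The reflectometry principle behind such soil-moisture and sea-level applications can be illustrated with the standard GNSS interferometric reflectometry (GNSS-IR) relation: the detrended signal-to-noise ratio oscillates in the sin(elevation) domain with frequency f = 2h/λ, where h is the antenna-to-reflector height and λ the carrier wavelength. The following sketch uses synthetic data and is purely illustrative; it is not the multi-constellation algorithm developed in the thesis, and all numbers are assumed.

```python
import numpy as np

# Illustrative GNSS-IR sketch (not the thesis algorithm):
# detrended SNR oscillates as cos(4*pi*h/lam * sin(e)), so in the
# sin(elevation) domain its frequency is f = 2*h/lam, and the
# reflector height follows as h = lam * f / 2.
LAM = 0.1903           # GPS L1 carrier wavelength [m]
H_TRUE = 2.0           # assumed antenna-to-reflector height [m]

x = np.linspace(0.1, 0.5, 400)               # sin(elevation), ~6-30 deg
snr = np.cos(4 * np.pi * H_TRUE / LAM * x)   # synthetic detrended SNR

# The peak of the zero-padded FFT gives the dominant frequency in
# cycles per unit sin(elevation).
n_pad = 2 ** 16
spec = np.abs(np.fft.rfft(snr, n=n_pad))
freqs = np.fft.rfftfreq(n_pad, d=x[1] - x[0])
f_peak = freqs[np.argmax(spec)]

h_est = LAM * f_peak / 2.0
print(f"estimated reflector height: {h_est:.2f} m")
```

In practice the retrieval is done per satellite arc with a Lomb-Scargle periodogram on unevenly sampled data, and soil moisture is inferred from the phase of the oscillation rather than the height alone; the FFT above merely shows the frequency-to-height mapping.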
The thesis „Die Bekämpfung transnationaler Kriminalität im Kontext fragiler Staatlichkeit“ (Combating Transnational Crime in the Context of Fragile Statehood) is devoted to the phenomenon of organized-crime actors operating across borders who exploit the fact that some internationally recognized governments exercise only insufficient control over parts of their territory. It examines why the legal framework created by the international community to combat transnational criminal phenomena in the context of these fragile states fails to contribute, or contributes only deficiently, to combating those phenomena.
After first clarifying what the study means by the term transnational crime, the international legal framework for combating it is described on the basis of five transnational criminal phenomena selected as examples. The main part of the study then pursues the question of why this legal framework created by the international community makes hardly any contribution to effectively countering such criminal phenomena, particularly in fragile states. It is found that the genesis of the international legal framework leads to a legitimacy deficit of that framework. The insufficient consideration of the particular realities of life found in many fragile states also has a negative effect on the enforceability of the international legal framework. It is shown that differing levels of human rights protection lead to norm collisions in international cooperation between states, especially in the context of international mutual legal assistance. Since fragile states in particular are often characterized by a deficient human rights situation, this frequently poses challenges for consolidated states when cooperating with fragile states. Finally, it is demonstrated that extraterritorial jurisdiction, and thus the prosecution of transnational crimes by third states, is accompanied by legal and practical problems.
The final chapter of the thesis examines whether an alternative prosecution mechanism should be created to pursue transnational criminal phenomena committed from within fragile states, and how such an alternative prosecution mechanism should be designed in concrete terms.
This study focuses on William Faulkner, whose works explore the demise of the slavery-based Old South during the Civil War in a highly experimental narrative style. Central to this investigation is the analysis of the temporal dimensions of both individual and collective guilt, thus offering a new approach to the often-discussed problem of Faulkner’s portrayal of social decay. The thesis examines how Faulkner re-narrates the legacy of the Old South as a guilt narrative and argues that Faulkner uses guilt in order to corroborate his concept of time and the idea of the continuity of the past. The focus of the analysis is on three of Faulkner’s arguably most important novels: The Sound and the Fury, Absalom, Absalom!, and Go Down, Moses. Each of these novels features a main character deeply overwhelmed by the crimes of the past, whether private, familial, or societal. As a result, guilt is explored both from a domestic as well as a social perspective. In order to show how Faulkner blends past and present by means of guilt, this work examines several methods and motifs borrowed from different fields and genres with which Faulkner narratively negotiates guilt. These include religious notions of original sin, the motif of the ancestral curse prevalent in the Southern Gothic genre, and the psychological concept of trauma. Each of these motifs emphasizes the temporal dimensions of guilt, which are the core of this study, and makes clear that guilt in Faulkner’s work is primarily to be understood as a temporal rather than a moral problem.
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead
to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to show a higher activity level in driving the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age
class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
Food Neophilia
(2023)
Despite the clear benefits of a balanced diet, many people worldwide do not adhere to corresponding dietary guidelines. To develop appropriate strategies for supporting a health-promoting diet, an understanding of the underlying factors is essential. Older adults in particular represent an important target group for nutrition-related prevention and intervention approaches. One of the many factors discussed as determinants of a health-promoting diet is food neophilia, i.e. the willingness to try new and unfamiliar foods. Current research suggests that food neophilia is positively associated with a health-promoting diet, but research in this area has so far been extremely limited. The aim of this dissertation was to investigate the construct of food neophilia and its relationship to health-promoting dietary behavior in older adulthood, in order to better understand the potential of food neophilia for the health promotion of older adults. The first publication examined how the construct of food neophilia can be measured reliably and validly, in order to enable further investigations of food neophilia. The psychometric validation of the German version of the Variety Seeking Tendency Scale (VARSEEK) was based on two independent samples with a total of N = 1000 participants and confirmed that the scale is a reliable and valid instrument for measuring food neophilia. Building on this, the second publication analyzed the relationship between food neophilia and diet quality over time.
The prospective investigation of N = 960 older participants (M = 63.4 years) using a cross-lagged panel analysis revealed high temporal stability of both food neophilia and diet quality over a period of three years. A positive cross-sectional association between food neophilia and diet quality was also found, but food neophilia was not identified as a significant determinant of diet quality over time. Finally, the third publication considered not only the individual effects of food neophilia on diet quality but also included potential dynamic interactions within partnerships. For this purpose, an actor-partner interdependence model was used to differentiate potential intra- and interpersonal influences of food neophilia on diet quality. The dyadic analysis of N = 390 older heterosexual couples (M = 64.0 years) revealed a dominance pattern: while women's food neophilia was positively related to their own diet quality and that of their partners, men's food neophilia was not associated with the couple's diet quality. Overall, this dissertation makes a valuable contribution to a comprehensive understanding of food neophilia and its role in the context of the nutritional health of older adults. Despite the lack of predictive power over time, the positive association between food neophilia and diet quality suggests that focusing on a positive and curious attitude toward food could offer an innovative perspective for prevention and intervention approaches supporting a health-promoting diet in older adults.
This extensive legal study deals with the Prussian church-state treaties of the Weimar Republic era. These treaties marked the high points of a development toward greater freedom and independence of the churches from the state, a development that partly corresponded to and partly ran counter to events in the Reich and in other German states. The development followed no immutable ideal of the relationship between state and church but always presented itself as a pragmatic response to problems of practical politics. The church-state treaties themselves have shaped subsequent developments in East and West up to the present day.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 - 10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
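The kind of photoevaporative mass loss described above is often approximated at zeroth order by the standard energy-limited escape formula from the literature. The sketch below illustrates that textbook relation only; it is not the PLATYPOS implementation (which couples mass-radius relations with an analytical hydrodynamic model), and the planet parameters and efficiency are assumed for illustration.

```python
import math

# Energy-limited photoevaporation sketch (standard textbook
# approximation; illustrative only, not the PLATYPOS code):
#   Mdot = eps * pi * F_xuv * Rp^3 / (G * K * Mp)
# eps: heating efficiency, K: Roche-lobe reduction factor (<= 1).
G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mass_loss_rate(f_xuv, r_p, m_p, eps=0.1, k_roche=1.0):
    """Energy-limited mass-loss rate in kg/s.

    f_xuv : XUV flux at the planet's orbit [W/m^2]
    r_p   : planetary radius [m]
    m_p   : planetary mass [kg]
    """
    return eps * math.pi * f_xuv * r_p**3 / (G * k_roche * m_p)

# Assumed illustrative numbers: a young sub-Neptune (2.5 Earth radii,
# 5 Earth masses) receiving 10^4 times the present-day solar XUV flux.
F_XUV_EARTH_TODAY = 4.6e-3  # W/m^2, approximate present-day value
mdot = mass_loss_rate(1e4 * F_XUV_EARTH_TODAY,
                      r_p=2.5 * 6.371e6,
                      m_p=5.0 * 5.972e24)
print(f"mass-loss rate: {mdot:.2e} kg/s")
```

The Rp^3/Mp dependence makes the formula's central point visible: a warm, inflated low-mass planet loses mass far faster than a compact one at the same flux, which is why the early high-activity phase of the host star matters so much.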
This thesis examines holiday photographs on Facebook and describes the socio-technical media practices that are carried out through these photographs within the social media platform. Photographic practices are determined by active actions and social modes of use. Holiday photographs contribute, for example, to the structuring of travel routes and imaginaries, as genre-specific motifs and framings are reproduced and repeated with the help of media. Practices of showing, sharing, and communicating are also integrated into Facebook's user interfaces through social plug-ins (like/share buttons) and tagging functions, thereby linking user activities and technical processes. Using the example of the automatic aggregation of holiday photographs on geotag pages, it is shown that social tagging contributes to the emergence and negotiation of geographical spaces and conceptions of place. The technical structuring of photographs on tagging pages makes genre-specific motifs, photographic trends, and aesthetics particularly visible. However, their visualization is also co-determined by the algorithmic prioritization of individual contents. Holiday photographs are thereby used for photographic profiling, as they enable the algorithmic capture and evaluation of user information. The thesis shows that the use of image recognition methods and photographic data analysis contributes to optimized information extraction and to a standardization of photographs.
Watershed management requires an understanding of key hydrochemical processes. The Pra Basin is one of the five major river basins in Ghana with a population of over 4.2 million people. Currently, water resources management faces challenges due to surface water pollution caused by the unregulated release of untreated household and industrial waste into aquatic ecosystems and illegal mining activities. This has increased the need for groundwater as the most reliable water supply. Our understanding of groundwater recharge mechanisms and chemical evolution in the basin has been inadequate, making effective management difficult. Therefore, the main objective of this work is to gain insight into the processes that determine the hydrogeochemical evolution of groundwater quality in the Pra Basin. The combined use of stable isotope, hydrochemistry, and water level data provides the basis for conceptualizing the chemical evolution of groundwater in the Pra Basin. For this purpose, the origin and evaporation rates of water infiltrating into the unsaturated zone were evaluated. In addition, Chloride Mass Balance (CMB) and Water Table Fluctuations (WTF) were considered to quantify groundwater recharge for the basin. Indices such as water quality index (WQI), sodium adsorption ratio (SAR), Wilcox diagram, and salinity (USSL) were used in this study to determine the quality of the resource for use as drinking water and for irrigation purposes. Due to the heterogeneity of the hydrochemical data, the statistical techniques of hierarchical cluster and factor analysis were applied to subdivide the data according to their spatial correlation. A conceptual hydrogeochemical model was developed and subsequently validated by applying combinatorial inverse and reaction pathway-based geochemical models to determine plausible mineral assemblages that control the chemical composition of the groundwater. 
The interactions between water and rock determine the groundwater quality in the Pra Basin. The results underline that the groundwater is of good quality and can be used for drinking water and irrigation purposes. It was demonstrated that there is a large groundwater potential to meet the entire Pra Basin’s current and future water demands. The main recharge area was identified as the northern zone, while the southern zone is the discharge area. The predominant influence of weathering of silicate minerals plays a key role in the chemical evolution of the groundwater. The work presented here provides fundamental insights into the hydrochemistry of the Pra Basin and provides data important to water managers for informed decision-making in planning and allocating water resources for various purposes. A novel inverse modelling approach was used in this study to identify different mineral compositions that determine the chemical evolution of groundwater in the Pra Basin. This modelling technique has the potential to simulate the composition of groundwater at the basin scale with large hydrochemical heterogeneity, using average water composition to represent established spatial groupings of water chemistry.
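Of the irrigation-suitability indices mentioned above, the sodium adsorption ratio has a simple closed form, conventionally defined by the US Salinity Laboratory as SAR = Na⁺ / √((Ca²⁺ + Mg²⁺)/2) with all concentrations in meq/L. The sketch below is a minimal illustration of that standard definition and the common USSL sodium-hazard classes; it is not the study's code, and the sample values are hypothetical.

```python
import math

# Sodium adsorption ratio (SAR), as conventionally defined by the
# US Salinity Laboratory; a minimal sketch, not the thesis workflow.
# All concentrations must be in milliequivalents per litre (meq/L).
def sar(na, ca, mg):
    return na / math.sqrt((ca + mg) / 2.0)

# Common USSL sodium-hazard classes (illustrative thresholds):
def sodium_hazard(sar_value):
    if sar_value < 10:
        return "S1 (low)"
    elif sar_value < 18:
        return "S2 (medium)"
    elif sar_value < 26:
        return "S3 (high)"
    return "S4 (very high)"

# Hypothetical sample: Na+ = 2.0, Ca2+ = 1.5, Mg2+ = 0.5 meq/L
value = sar(2.0, 1.5, 0.5)
print(f"SAR = {value:.2f} -> {sodium_hazard(value)}")  # SAR = 2.00 -> S1 (low)
```

A full assessment, as in the study, combines SAR with electrical conductivity on the USSL salinity diagram and with the Wilcox diagram and a water quality index for drinking-water suitability.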
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: first, legality, i.e. legal conformity of use; second, ethical legitimacy; and third, the use of data should add value from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives with a focus on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Electricity production contributes to a significant share of greenhouse gas emissions in Europe and is thus an important driver of climate change. To fulfil the Paris Agreement, the European Union (EU) needs a rapid transition to a fully decarbonised power production system. Presumably, such a system will be largely based on renewables. So far, many EU countries have supported a shift towards renewables such as solar and wind power using support schemes, but the economic and political context is changing. Renewables are now cheaper than ever before and have become cost-competitive with conventional technologies. Therefore, European policymakers are striving to better integrate renewables into a competitive market and to increase the cost-effectiveness of the expansion of renewables. The first step was to replace previous fixed-price schemes with competitive auctions. In a second step, these auctions have become more technology-open. Finally, some governments may phase out any support for renewables and fully expose them to the competitive power market.
However, such policy changes may be at odds with the need to rapidly expand renewables and meet national targets, due to market characteristics and investors’ risk perception. Without support, price risks are higher, and it may be difficult to meet an investor’s income expectations. Furthermore, policy changes across different countries could have unexpected effects if power markets are interconnected and investors are able to shift their investments. Finally, in multi-technology auctions, individual technologies may dominate, which can be a risk for long-term power system reliability. Therefore, in my thesis, I explore the effects of phasing out support policies for renewables, of coordinating these phase-outs across countries, and of using multi-technology designs. I expand the public policy literature on investment behaviour and policy design as well as on policy change and coordination, and I further develop an agent-based model.
The main questions of my thesis are what the cost and deployment effects of gradually exposing renewables to market forces would be, and how coordination between countries affects investors’ decisions and market prices. In my three contributions to the academic literature, I use different methods and come to the following results. In the first contribution, I use a conjoint analysis and market simulation to evaluate the effects of phasing out support or reintroducing feed-in tariffs from the perspective of investors. I find that a phase-out leads to investment shifts, either to other still-supported technologies or to other countries that continue to offer support. I conclude that the coordination of policy changes avoids such shifts. In the second contribution, I integrate the empirically derived preferences from the first contribution into an agent-based power system model of two countries to simulate the effects of ending auctions for renewables. I find that this slows the energy transition, and that cross-border effects are relevant. Consequently, continued support is necessary to meet the national renewables targets. In the third contribution, I analyse the outcomes of past multi-technology auctions using descriptive statistics, regression analysis, and case study comparisons. I find that the outcomes are skewed towards single technologies. This cannot be explained by individual design elements of the auctions, but rather results from context-specific and country-specific characteristics. Based on this, I discuss potential implications for long-term power system reliability.
The main conclusions of my thesis are that a complete phase-out of renewables support would slow down the energy transition and thus jeopardize climate targets, and that multi-technology auctions may pose a risk for some countries, especially those that cannot regulate an unbalanced power plant portfolio in the long term. If policymakers decide to continue supporting renewables, they may consider adopting technology-specific auctions to better steer their portfolio. In contrast, if policymakers still want to phase out support, they should coordinate these policy changes with other countries. Otherwise, overall transition costs can be higher, because investment decisions shift to still-supported but more expensive technologies.
Volcanic hydrothermal systems are an integral part of most volcanoes and typically involve a heat source, adequate fluid supply, and fracture or pore systems through which the fluids can circulate within the volcanic edifice. Associated with this are subtle but powerful processes that can significantly influence the evolution of volcanic activity or the stability of the near-surface volcanic system through mechanical weakening, permeability reduction, and sealing of the affected volcanic rock. These processes are well constrained for rock samples by laboratory analyses but are still difficult to extrapolate and evaluate at the scale of an entire volcano. Advances in unmanned aircraft systems (UAS), sensor technology, and photogrammetric processing routines now allow us to image volcanic surfaces at the centimeter scale and thus study volcanic hydrothermal systems in great detail. This thesis aims to explore the potential of UAS approaches for studying the structures, processes, and dynamics of volcanic hydrothermal systems, and also to develop methodological approaches that uncover secondary information hidden in the data, capable of indicating spatiotemporal dynamics or potentially critical developments associated with hydrothermal alteration. To accomplish this, the thesis describes the investigation of two near-surface volcanic hydrothermal systems, the El Tatio geyser field in Chile and the fumarole field of La Fossa di Vulcano (Italy), both of which are among the best-studied sites of their kind. Through image, statistical, and spatial analyses, we have been able to provide the most detailed structural images of both study sites to date, yielding new insights into the driving forces of such systems and revealing new potential controls, which are summarized in conceptual site-specific models.
Furthermore, the thesis explores methodological remote sensing approaches to detect, classify, and constrain hydrothermal alteration and surface degassing from UAS-derived data, evaluates them by mineralogical and chemical ground-truthing, and compares the alteration pattern with the present-day degassing activity. A significant contribution of the often neglected diffuse degassing activity to the total amount of degassing is revealed, constraining secondary processes and dynamics associated with hydrothermal alteration that lead to potentially critical developments such as surface sealing. The results and methods used provide new approaches for alteration research, for the monitoring of degassing and alteration effects, and for thermal monitoring of fumarole fields, with the potential to be incorporated into volcano monitoring routines.
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their delicate mechanical stability. To improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was therefore explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the terpolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results shown by the amphiphilic terpolymers, and in order to further test the impact of hydrophobicity on both the antifouling properties and the mechanical stability of zwitterionic hydrogels, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yields in a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubilities in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinyl amides, is introduced within the scope of the present work. The monomers are synthesized in good yields via a newly developed multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers are obtained by free-radical polymerization and thoroughly characterized. Remarkably, the solubility tests show that the homopolymers produced are fully soluble in water, evidencing their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced into the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker, 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6), is synthesized. However, despite its a priori promising suitability for copolymerization, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Preisalgorithmenkartelle [Pricing-Algorithm Cartels]
(2024)
With the help of pricing algorithms, companies are able to carry out automatic and reciprocal price adjustments. As a result, classic cartel constellations can recede into the background, since no conspiratorial meetings are required. This thesis shows under which conditions the use of pricing algorithms can constitute an infringement of the European cartel prohibition. To this end, case constellations are examined in which algorithmic coordination arises both directly between competitors and indirectly via a third party. Furthermore, algorithm-specific compliance measures are addressed. Finally, the practical challenges in detecting and proving such cartels are set out.
Starting from the reflections of the anthropologically oriented psychiatrist Erwin Straus, this book examines the conditions under which certain events are experienced as meaningful by persons. It further discusses in detail how personhood develops in human beings and to what extent it depends on the successful integration of meaningful first experiences. The underlying concept of the person represents an independent attempt at a synthesis of the four concepts of Erwin Straus, Viktor Emil von Gebsattel, Helmuth Plessner, and Max Scheler. The author works as a senior physician at the Klinikum Schloss Lütgenhof in Dassow, an acute-care clinic for personal medicine, integrated psychosomatics, internal medicine, and psychotherapy.