This study examines the Neue Mittelschule (NMS), a school model that emerged from a structural reform of lower secondary education. It investigates whether, under this model and the new teaching, learning, and assessment culture it is intended to bring about, relationships can be identified between students' measured mathematical competencies and the end-of-year grades awarded by their teachers.
The literature review makes clear that although criticism of the monoculture of teacher-centred instruction has led to a new teaching, learning, and assessment culture, its contents are quite heterogeneous, complex, and not clearly defined. In the NMS, performance assessment is meant to serve as a learning aid while also providing reliable statements about student achievement. There is as yet no empirical evidence on the effects of the new learning culture in the NMS, nor on the effects of its performance assessment.
The empirical study involves 79 sixth-grade students from three Neue Mittelschulen in Lower Austria, located in a densely, a moderately, and a sparsely populated municipality. Two classes are examined at each school. Mathematical competence, student-centredness, and social and performance pressure as perceived by the students are recorded together with the end-of-year grade.
A path model is developed for the study and evaluated by path analysis. The analysis does reveal relationships between the measured mathematical competencies and the end-of-year grades. Beyond the individual class or school, however, these grades have only limited validity as statements about the achievement actually attained.
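As a rough illustration of how such a path model can be specified and estimated, the following sketch uses the Python package semopy; the variable names (competence, centredness, pressure, grade) are hypothetical stand-ins for the constructs described above, and "students.csv" is an invented data file, not the study's actual data or model.

```python
# Minimal path-analysis sketch with semopy; all names are hypothetical.
import pandas as pd
from semopy import Model

data = pd.read_csv("students.csv")  # one row per student

desc = """
grade ~ competence + centredness + pressure
competence ~ centredness
"""

model = Model(desc)
model.fit(data)
print(model.inspect())              # path coefficients with standard errors
```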
With the recent proliferation of sensors, cloud computing handles the data processing of many applications. Processing some of these data in the cloud, however, raises concerns regarding, e.g., privacy, latency, and single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
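To make the admission-control idea concrete, the sketch below shows a minimal tabular Q-learning controller; the discretized load state, the binary admit/reject action, and the reward structure are assumptions for illustration, and the controller developed in the thesis is considerably richer.

```python
# Minimal tabular Q-learning admission controller (a sketch, not the
# thesis's design): the state is an assumed discretized resource-load
# level, the action is admit/reject, and the reward is assumed to trade
# off admitted utility against overload penalties.
import random
from collections import defaultdict

ACTIONS = (0, 1)                 # 0 = reject the arriving application, 1 = admit
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = defaultdict(float)           # Q-values keyed by (state, action)

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning update after observing a transition."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```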
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can degrade quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints while the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected prevalence of 700 million patients by 2045, leading to economic burdens on societies. Type 2 diabetes mellitus (T2DM), representing more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing type 2 diabetes mellitus. Lifestyle modification, namely following a healthy diet and engaging in physical activity, is the primary successful treatment and prevention method for type 2 diabetes mellitus. However, many patients do not achieve the recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. This dissertation project aims to investigate whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study aimed to investigate the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W; research question 1, sub-question b). The second study aimed to compare differences in exercise parameters between superimposed WB-EMS (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS (WB-EMS-NW) and conventional Nordic walking (CON-NW), on a treadmill (research question 2). Both studies took place in participant groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety, avoiding pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the criteria of the test, while five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rate of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design in a study with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a compilation of absolute and relative measures of reliability. Outcome measures in study 2 were analyzed using multifactorial analyses of variance.
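As an illustration of the relative-reliability side of such a test-retest analysis, the following sketch computes intraclass correlation coefficients with the Python package pingouin; the file and column names are hypothetical placeholders for the study's data.

```python
# Sketch of a relative-reliability check (ICC) across two sessions;
# all names are hypothetical.
import pandas as pd
import pingouin as pg

# Long format: one row per participant per measurement session.
df = pd.read_csv("vo2peak_sessions.csv")   # columns: subject, session, vo2peak

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="session", ratings="vo2peak")
print(icc[["Type", "ICC", "CI95%"]])       # e.g., inspect the ICC2 row
```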
Results: Reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD, and RPE, with no statistically significant differences for WB-EMS-W during the 10MISWT. However, no differences in the outcome variables were found compared to conventional walking. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW for the outcome variables VO2, rel.VO2, and lactate, with both factors leading to higher results. However, the difference in VO2 and relative VO2 is within the range of biological variability of ± 12%. The factor combination EMS∗W/NW is statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD, and RPE during the 10MISWT with WB-EMS-W, confirming prior research on the test. In healthy, moderately active men, the test appears to be limited technically rather than physiologically. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W due to different perceptions of current intensity between exercise and rest. A treadmill test at constant walking speed was therefore conducted, with the individually maximally tolerable current intensity adjusted, for the second part of the project. The treadmill test showed a significant increase in metabolic demands during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentrations. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies report higher VO2 values, our results are in line with those of other studies using the same stimulation frequency. Given the minor clinical relevance, the usefulness of WB-EMS as an exercise-intensification tool during walking and Nordic walking is limited; the high cost of the devices should also be considered. Habituation to WB-EMS could increase the tolerated current intensity and VO2 and make it a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. This supposed benefit should be further investigated scientifically.
Hybrid nanomaterials offer the combination of individual properties of different types of nanoparticles. Some strategies for the development of new nanostructures in larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions will mediate nanoparticle interactions to control the subsequent self-assembly. Especially interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers the versatility to use different nanoparticle compositions, keeping the surface functionalization, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with an auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix shell of oil-dispersible nanoparticles.
In summary, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes towards the production of sophisticated nanomaterials at larger scale.
Volcanoes are among the Earth's most dynamic zones and responsible for many changes on our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating the evolving volcanic activity and potentially to predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation in support of timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can therefore contribute to improving our understanding of the underlying physical processes. Similar problems (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building the information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic–percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events from seismic recordings. This provides a clean tremor signal suitable for tremor investigation along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometers, which are heavily contaminated by noise. Its advantage over other denoising techniques is that it does not distort broadband earthquake waveforms, which makes it reliable for different applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically, which helps to establish a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could serve as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
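To illustrate the core HPS idea borrowed from MIR, the following sketch applies librosa's median-filtering HPSS to a seismogram stored in a hypothetical NumPy file; the thesis adapts this concept to volcano-seismic data rather than using librosa verbatim.

```python
# Sketch of harmonic-percussive separation applied to a seismogram.
# "seismic_trace.npy" is a hypothetical 1-D array of ground-motion samples.
import numpy as np
import librosa

x = np.load("seismic_trace.npy").astype(float)
S = librosa.stft(x, n_fft=1024, hop_length=256)

# Median filtering along time keeps the harmonic (tremor-like) component;
# filtering along frequency keeps the percussive (transient, earthquake-like) one.
H, P = librosa.decompose.hpss(S, margin=2.0)

tremor = librosa.istft(H, hop_length=256, length=len(x))
transients = librosa.istft(P, hop_length=256, length=len(x))
```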
In the last century, several astronomical measurements have supported that a significant percentage (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact with the electromagnetic force; in other words, it does not reflect, absorb, or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to search for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and that resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking into account both the energy and the angular-extension dependency of observed events in the unbinned maximum likelihood estimation. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons ranging between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project, whose goal was to combine already published data on 20 dSphs from five different experiments, namely Fermi-LAT, MAGIC, H.E.S.S., VERITAS, and HAWC, in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
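Schematically, a 2D unbinned likelihood of this kind combines the energy and angular information of each event. A common extended-likelihood form, written here as an assumption about the general shape (instrument response and nuisance terms omitted) rather than the exact expression used in the analysis, is

```latex
\mathcal{L}(\langle\sigma v\rangle)
  = \frac{e^{-(N_s+N_b)}\,(N_s+N_b)^{N_{\mathrm{on}}}}{N_{\mathrm{on}}!}
    \prod_{i=1}^{N_{\mathrm{on}}}
    \frac{N_s\,s(E_i,\theta_i) + N_b\,b(E_i,\theta_i)}{N_s+N_b},
```

where $s$ and $b$ are the normalized signal and background densities in event energy $E_i$ and angular offset $\theta_i$, and the expected signal count $N_s$ depends on $\langle\sigma v\rangle$ through the DM spectrum and the source's J-factor.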
With age, an increase in low-grade inflammatory processes can be observed, which are assumed to "fuel" the typical age-related loss of muscle mass, strength, and function. These processes, termed inflammaging, can be attributed to a complex interplay of dysfunctional (visceral) adipose tissue, dysbiosis with the associated microbial translocation and reduced immune defence, and an overall progressing immunosenescence. In sum, a pro-inflammatory milieu promotes metabolic disorders and chronic, age-associated diseases, which in turn maintain or drive the inflammatory state. In addition to a substantial lack of physical activity, a Western, industrialized diet also contributes to inflammation and to the development of chronic diseases. It therefore stands to reason to counteract inflammation with sufficient physical activity and an anti-inflammatory diet. In this respect, omega-3 fatty acids (omega-3) in particular are associated with anti-inflammatory properties. Although the association between the inflammatory potential of the diet, or the intake of omega-3, and the inflammatory profile has already been studied, investigations are still lacking, particularly in older adults, that link the inflammatory potential of the diet to sarcopenia-relevant muscle parameters.
Because of the increased protein requirement for maintaining functional muscle in old age, a large number of exercise and nutrition interventions have already been conducted, showing an improvement in muscle status through structured resistance training and a protein-rich diet. There is also evidence that omega-3 may enhance muscle protein synthesis. It is unclear, however, to what extent an anti-inflammatory diet focused on omega-3 can favourably support both the inflammatory processes and muscle protein metabolism and neuromuscular function in old age. This applies above all to muscle power, which is closely linked to the propensity to fall and to autonomy in everyday life, but has so far received little attention in intervention studies with older adults. Moreover, progressive training elements are frequently used that often find little continuation in participants' everyday lives after a study ends and are therefore not very sustainable. The aim of this work was accordingly to evaluate a protein-rich diet, additionally supplemented with omega-3, in combination with weekly vibration training and an age-appropriate exercise programme, with respect to inflammation and neuromuscular function in older, independently living adults.
To this end, possible associations between the inflammatory potential of the diet, determined using the Dietary Inflammatory Index, and muscle status as well as the inflammatory profile in old age were first explored. The baseline values of older, independently living adults from a postprandial intervention study (POST study), analysed cross-sectionally, served this purpose. The results confirmed that a pro-inflammatory diet is reflected in a stronger inflammatory state on the one hand and is unfavourably associated with sarcopenia-relevant parameters, such as lower muscle mass and gait speed, on the other. These associations were also evident with respect to handgrip strength in the study's inactive older adults.
Subsequently, an exploratory pilot intervention study (AIDA study) with a three-arm design examined to what extent omega-3 supplementation, given an optimized protein intake and an age-appropriate exercise intervention with vibration training, affects neuromuscular function and inflammation in independently living older adults. After eight weeks of intervention, a protein-rich diet supplemented with omega-3 increased muscle power, particularly in the older men. While the control group did not improve after eight weeks of the exercise intervention, an improvement in leg strength and in the time needed for the chair-rise test was additionally confirmed for the older adults on a protein-rich diet combined with the exercise intervention.
It also became clear that the additional omega-3 supplementation led to a reduction of pro-inflammatory cytokines in serum, particularly in the men. However, these observations were not reflected at the gene expression level in mononuclear immune cells or in the LPS-induced secretion of cytokines and chemokines in whole-blood cell cultures. This calls for further investigation.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data are corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data the machine learning model was trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new, unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
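For concreteness, here is a minimal NumPy sketch of exact GP regression with a fixed RBF kernel; the Cholesky factorization is the O(n³) step that restricts exact inference to small data regimes, as noted above. Hyperparameters are fixed rather than optimized.

```python
# Exact GP regression posterior with an RBF kernel (fixed hyperparameters).
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=0.1):
    K = rbf(X, X) + noise**2 * np.eye(len(X))   # train covariance + noise
    Ks = rbf(X, Xs)                             # train/test cross-covariance
    L = np.linalg.cholesky(K)                   # the O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                         # predictive mean
    v = np.linalg.solve(L, Ks)
    cov = rbf(Xs, Xs) - v.T @ v                 # predictive covariance
    return mean, cov

X = np.random.rand(50, 1)
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(50)
mean, cov = gp_posterior(X, y, np.linspace(0, 1, 100)[:, None])
```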
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example is deep Gaussian processes, which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study is Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first gaining a deep understanding of the existing methods and then, based on this, designing better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
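For reference, the standard sparse variational evidence lower bound (ELBO) that this family of methods maximizes, with inducing outputs $\mathbf{u}$ and a Gaussian variational posterior $q(\mathbf{u})$, has the form

```latex
\mathcal{L} \;=\; \sum_{i=1}^{n} \mathbb{E}_{q(f_i)}\!\left[\log p(y_i \mid f_i)\right]
\;-\; \mathrm{KL}\!\left[q(\mathbf{u}) \,\middle\|\, p(\mathbf{u})\right]
\;\le\; \log p(\mathbf{y}),
```

where $q(f_i)$ is obtained by marginalizing the prior conditional $p(f_i \mid \mathbf{u})$ against $q(\mathbf{u})$. This is the textbook bound rather than any thesis-specific variant.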
Then we start analysing deep Gaussian processes more closely: trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational-inference-based approach. We then demonstrate experimentally that our new algorithm leads to better-calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners by achieving better-calibrated uncertainties.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use” [p. 23].
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove helpful for improving the performance of automatic coreference resolution, which is essential for good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts formed by conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, due to their widespread use in the digital sphere they are at the same time highly relevant for applications that seek to extract information or sentiments from users' messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. We are interested not only in the coreference patterns but also in the overall discourse behavior of Twitter conversations. To address this, in addition to the coreference relations, we also annotated coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows the tweets to be separated from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic, and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, the difference between spoken and written modes has been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
Individuals with aphasia vary in the speed and accuracy with which they perform sentence comprehension tasks. Previous results indicate that the performance patterns of individuals with aphasia vary between tasks (e.g., Caplan, DeDe, & Michaud, 2006; Caplan, Michaud, & Hufford, 2013a). Similarly, it has been found that the comprehension performance of individuals with aphasia varies between homogeneous test sentences within and between sessions (e.g., McNeil, Hageman, & Matthews, 2005). These studies ascribed the variability in the performance of individuals with aphasia to random noise. This conclusion would be in line with an influential theory of sentence comprehension in aphasia, the resource reduction hypothesis (Caplan, 2012). However, previous studies did not directly compare variability in language-impaired and language-unimpaired adults. Thus, it is still unclear how variability in sentence comprehension differs between individuals with and without aphasia. Furthermore, the previous studies were carried out exclusively in English. Therefore, the findings on variability in sentence processing in English still need to be replicated in a different language.
This dissertation aims to give a systematic overview of the patterns of variability in sentence comprehension performance in aphasia in German and, based on this overview, to put the resource reduction hypothesis to the test. In order to reach the first aim, variability was considered on three different dimensions (persons, measures, and occasions), following the classification by Hultsch, Strauss, Hunter, and MacDonald (2011). At the dimension of persons, the thesis compared the performance of individuals with aphasia and language-unimpaired adults. At the dimension of measures, this work explored the performance across different sentence comprehension tasks (object manipulation, sentence-picture matching). Finally, at the dimension of occasions, this work compared the performance in each task between two test sessions. Several methods were combined to study variability in order to obtain a large and diverse database. In addition to the offline comprehension tasks, the self-paced-listening paradigm and the visual-world eye-tracking paradigm were used in this work.
The findings are in line with the previous results. As in the previous studies, variability in sentence comprehension in individuals with aphasia emerged between test sessions and between tasks. Additionally, it was possible to characterize the variability further using hierarchical Bayesian models. For individuals with aphasia, it was shown that both between-task and between-session variability are unsystematic. In contrast to that, language-unimpaired individuals exhibited systematic differences between measures and between sessions. However, these systematic differences occurred only in the offline tasks. Hence, variability in sentence comprehension differed between language-impaired and language-unimpaired adults, and this difference could be narrowed down to the offline measures.
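As an illustration of the hierarchical Bayesian approach to partitioning variability across persons and sessions, the following PyMC sketch fits a varying-intercept accuracy model to simulated data; the variables, priors, and data are illustrative assumptions, not the models actually fitted in the thesis.

```python
# Hierarchical (varying-intercept) Bayesian accuracy model on fake data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_subj, n_obs = 20, 40
subj_idx = np.repeat(np.arange(n_subj), n_obs)
session = np.tile(np.repeat([0, 1], n_obs // 2), n_subj)
acc = rng.binomial(1, 0.75, size=n_subj * n_obs)      # simulated accuracy data

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 1.5)                    # population intercept
    beta = pm.Normal("beta_session", 0.0, 1.0)        # systematic session effect
    sd_subj = pm.HalfNormal("sd_subj", 1.0)           # between-person variability
    z = pm.Normal("z", 0.0, 1.0, shape=n_subj)        # per-person deviations
    theta = mu + sd_subj * z[subj_idx] + beta * session
    pm.Bernoulli("y", logit_p=theta, observed=acc)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

A systematic session effect would show up as a posterior for beta_session clearly away from zero; unsystematic between-session variability would not.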
Based on this overview of the patterns of variability, the resource reduction hypothesis was evaluated. According to the hypothesis, the variability in the performance of individuals with aphasia can be ascribed to random fluctuations in the resources available for sentence processing. Given that the performance of the individuals with aphasia varied unsystematically, the results support the resource reduction hypothesis. Furthermore, the thesis proposes that the differences in variability between language-impaired and language-unimpaired adults can also be explained by the resource reduction hypothesis. More specifically, it is suggested that the systematic changes in the performance of language-unimpaired adults are due to decreasing fluctuations in available processing resources. In parallel, the unsystematic variability in the performance of individuals with aphasia could be due to constant fluctuations in available processing resources. In conclusion, the systematic investigation of variability contributes to a better understanding of language processing in aphasia and thus enriches aphasia research.
Air pollution has been a persistent global problem in the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of their recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
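A condensed sketch of steps 4 through 7 on hypothetical co-location data is given below, using a random forest as one plausible model choice; the file and column names are invented for illustration, and the published open-source methodology is considerably more thorough.

```python
# Condensed sketch of calibration steps 4-7 (model selection/tuning,
# validation, prediction, uncertainty); all names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

df = pd.read_csv("colocation.csv")             # cleaned, flagged data (steps 1-3)
X = df[["raw_no2", "temperature", "rel_humidity"]]
y = df["ref_no2"]                              # co-located reference NO2

split = int(0.7 * len(df))                     # chronological train/validation split
X_tr, X_va, y_tr, y_va = X[:split], X[split:], y[:split], y[split:]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_va)                     # step 6: final predictions
rmse = np.sqrt(mean_squared_error(y_va, pred))
print(f"validation RMSE = {rmse:.2f}, R^2 = {r2_score(y_va, pred):.2f}")

# Step 7: a simple ensemble-spread proxy for prediction uncertainty.
tree_preds = np.stack([t.predict(X_va.to_numpy()) for t in model.estimators_])
uncertainty = tree_preds.std(axis=0)
```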
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second accompanied the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential of LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
To ensure high-quality evidence-based research in the field of exercise sciences, it is often necessary for various institutions to collaborate over long distances and internationally. Digital means, not least in view of the recent COVID-19 pandemic, provide new options for such remote scientific exchange. This thesis is meant to analyse and test digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed to learn about the opinions and preferences regarding digital learning and social media among students of sport science faculties at two universities each in Germany, the UK, and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel acting as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated the use of an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting with experts from the fields of engineering and sports science, and later for remote supervision of augmented-reality-equipped physicians performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube for receiving video-based knowledge, as well as a preference for using WhatsApp and Facebook for peer-to-peer contacts for learning purposes and for exchanging and discussing knowledge. In the second study, video-based instructions sent by WhatsApp messenger met with high approval of the setup in both study groups, one with doctors familiar with the use of ultrasound technology and one with nursing staff who were not familiar with the device, with similar results in overall time of performance and in the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. Remote supervision of the doctors' examinations produced good interrater correlation. Experiences with the augmented-reality-based setting were rated as highly positive by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern, addressee-centred digital solutions to enhance potential investigators' understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices have particular value and may be recommended for collaborative research projects with physical-examination-based research questions. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be enhanced during the early stages of their education.
This dissertation presented the first total syntheses of the arylnaphthalene lignans alashinol D, vitexdoin C, vitrofolal E, noralashinol C¹, and ternifoliuslignan E. The key step of the developed method is based on a regioselective intramolecular photo-dehydro-Diels-Alder (PDDA) reaction, carried out with UV irradiation in a flow reactor. For the synthesis of the PDDA precursors (diaryl suberates), a modular building-block strategy was pursued. It allows asymmetric, complex systems to be assembled from only a few basic building blocks and enables the total synthesis of a large number of lignans. Systematic preliminary studies also demonstrated the clear superiority of the intra- over the intermolecular PDDA reaction. Linking the two aryl propiolates via a suberic acid tether in the para position proved particularly efficient. When asymmetrically substituted diaryl suberates are used in which one of the terminal ester substituents is replaced by a trimethylsilyl group or a hydrogen atom, these systems undergo regioselective cyclization, and naphthalenophanes with a methyl ester in the 3-position are obtained as the main product. Extensive experiments on functionalizing the 4-position further showed that substitution of the nucleophilic cycloallene intermediates during the PDDA reaction is generally possible by adding N-halosuccinimides. In view of the low yields, however, these intermolecular trapping reactions are of no preparative use for the total synthesis of lignans. With the aim of optimizing the general photochemical reaction conditions, the triplet-sensitized PDDA reaction was presented for the first time. Using xanthone as a sensitizer enabled the use of more efficient UVA light sources, which minimized the risk of photodecomposition through over-irradiation. Compared with direct excitation by UVB radiation, the yields could be increased significantly with indirect excitation via a photocatalyst. The fundamental insights and synthesis strategies developed in this work can help to advance the discovery of new pharmacologically interesting lignans in the future.
¹ To date, only the semisynthetic preparation of noralashinol C starting from hydroxymatairesinol has been described in the literature.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
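To give a flavor of the iterative heuristic family that Extend belongs to, here is an abstracted greedy selection sketch; the cost model is a stub (real selection algorithms query what-if optimizer estimates), and this is not the published Extend implementation.

```python
# Abstracted greedy index selection under a storage budget: repeatedly add
# the candidate with the best estimated benefit per unit of storage.
def select_indexes(candidates, workload_cost, budget):
    """candidates: index candidates, each with a .size attribute;
    workload_cost(config): estimated workload cost under a set of indexes
    (a stub for what-if optimizer calls); budget: storage limit."""
    config, used = set(), 0
    current = workload_cost(config)
    while True:
        best, best_ratio = None, 0.0
        for cand in candidates:
            if cand in config or used + cand.size > budget:
                continue
            benefit = current - workload_cost(config | {cand})
            ratio = benefit / cand.size          # benefit per storage unit
            if ratio > best_ratio:
                best, best_ratio = cand, ratio
        if best is None:                         # no remaining candidate helps
            return config
        config.add(best)
        used += best.size
        current = workload_cost(config)
```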
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
Within the context of United Nations (UN) environmental institutions, it has become apparent that intergovernmental responses alone have been insufficient for dealing with pressing transboundary environmental problems. Diverging economic and political interests, as well as broader changes in power dynamics and norms within global (environmental) governance, have resulted in negotiation and implementation efforts by UN member states becoming stuck in institutional gridlock and inertia. These developments have sparked a renewed debate among scholars and practitioners about an imminent crisis of multilateralism, accompanied by calls for reforming UN environmental institutions. However, with the rise of transnational actors and institutions, states are not the only relevant actors in global environmental governance. In fact, the fragmented architectures of different policy domains are populated by a hybrid mix of state and non-state actors, as well as intergovernmental and transnational institutions. Therefore, coping with the complex challenges posed by severe and ecologically interdependent transboundary environmental problems requires global cooperation and careful management from actors beyond national governments.
This thesis investigates the interactions of three intergovernmental UN treaty secretariats in global environmental governance. These are the secretariats of the United Nations Framework Convention on Climate Change, the Convention on Biological Diversity, and the United Nations Convention to Combat Desertification. While previous research has acknowledged the increasing autonomy and influence of treaty secretariats in global policy-making, little attention has been paid to their strategic interactions with non-state actors, such as non-governmental organizations, civil society actors, businesses, and transnational institutions and networks, or their coordination with other UN agencies. Through qualitative case-study research, this thesis explores the means and mechanisms of these interactions and investigates their consequences for enhancing the effectiveness and coherence of institutional responses to underlying and interdependent environmental issues.
Following a new institutionalist ontology, the conceptual and theoretical framework of this study draws on global governance research, regime theory, and scholarship on international bureaucracies. From an actor-centered perspective on institutional interplay, the thesis employs concepts such as orchestration and interplay management to assess the interactions of and among treaty secretariats. The research methodology involves structured focused comparison and process-tracing techniques to analyze empirical data from diverse sources, including official documents, various secondary materials, semi-structured interviews with secretariat staff and policymakers, and observations at intergovernmental conferences.
The main findings of this research demonstrate that secretariats employ tailored orchestration styles to manage or bypass national governments, thereby raising global ambition levels for addressing transboundary environmental problems. Additionally, they engage in joint interplay management to facilitate information sharing, strategize activities, and mobilize relevant actors, thereby improving coherence across UN environmental institutions. Treaty secretariats play a substantial role in influencing discourses and knowledge exchange with a wide range of actors. However, they face barriers, such as limited resources, mandates, varying leadership priorities, and degrees of politicization within institutional processes, which may hinder their impact. Nevertheless, the secretariats, together with non-state actors, have made progress in advancing norm-building processes, integrated policy-making, capacity building, and implementation efforts within and across framework conventions. Moreover, they utilize innovative means of coordination with actors beyond national governments, such as data-driven governance, to provide policy-relevant information for achieving overarching governance targets.
Importantly, this research highlights the growing interactions between treaty secretariats and non-state actors, which not only shape policy outcomes but also have broader implications for the polity and politics of international institutions. The findings offer opportunities for rethinking collective agency and actor dynamics within UN entities, addressing gaps in institutionalist theory concerning the interaction of actors in inter-institutional spaces. Furthermore, the study addresses emerging challenges and trends in global environmental governance that are pertinent to future policy-making. These include reflections for the debate on reforming international institutions, the role of emerging powers in a changing international world order, and the convergence of public and private authority through new alliance-building and a division of labor between international bureaucracies and non-state actors in global environmental governance.
The near-Earth space environment is a highly complex system comprised of several regions and particle populations hazardous to satellite operations. The trapped particles in the radiation belts and ring current can cause significant damage to satellites during space weather events, due to deep dielectric and surface charging. Closer to Earth is another important region, the ionosphere, which delays the propagation of radio signals and can adversely affect navigation and positioning. In response to fluctuations in solar and geomagnetic activity, both the inner-magnetospheric and ionospheric populations can undergo drastic and sudden changes within minutes to hours, which creates a challenge for predicting their behavior. Given the increasing reliance of our society on satellite technology, improving our understanding and modeling of these populations is a matter of paramount importance.
In recent years, numerous spacecraft have been launched to study the dynamics of particle populations in the near-Earth space, transforming it into a data-rich environment. To extract valuable insights from the abundance of available observations, it is crucial to employ advanced modeling techniques, and machine learning methods are among the most powerful approaches available. This dissertation employs long-term satellite observations to analyze the processes that drive particle dynamics, and builds interdisciplinary links between space physics and machine learning by developing new state-of-the-art models of the inner-magnetospheric and ionospheric particle dynamics.
The first aim of this thesis is to investigate the behavior of electrons in Earth's radiation belts and ring current. Using ~18 years of electron flux observations from the Global Positioning System (GPS), we developed the first machine learning model of hundreds-of-keV electron flux at Medium Earth Orbit (MEO) that is driven solely by solar wind and geomagnetic indices and does not require auxiliary flux measurements as inputs. We then proceeded to analyze the directional distributions of electrons, and for the first time, used Fourier sine series to fit electron pitch angle distributions (PADs) in Earth's inner magnetosphere. We performed a superposed epoch analysis of 129 geomagnetic storms during the Van Allen Probes era and demonstrated that electron PADs have a strong energy-dependent response to geomagnetic activity. Additionally, we showed that the solar wind dynamic pressure could be used as a good predictor of the PAD dynamics. Using the observed dependencies, we created the first PAD model with a continuous dependence on L, magnetic local time (MLT) and activity, and developed two techniques to reconstruct near-equatorial electron flux observations from low-PA data using this model.
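To make the PAD fitting step concrete, the following minimal sketch fits a measured pitch angle distribution with a Fourier sine series by plain linear least squares. The function names, the number of harmonics, and the toy data are illustrative assumptions; the thesis's exact basis truncation and weighting are not reproduced here.

```python
import numpy as np

def fit_pad_fourier(alpha, flux, n_terms=3):
    """Least-squares fit of a pitch angle distribution (PAD) with a
    Fourier sine series, f(alpha) ~ sum_n A_n * sin(n * alpha).
    alpha: pitch angles in radians; flux: measured directional flux."""
    # Design matrix with one column per sine harmonic
    basis = np.column_stack([np.sin(n * alpha) for n in range(1, n_terms + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return coeffs

def eval_pad(alpha, coeffs):
    """Evaluate the fitted sine series at pitch angles alpha."""
    return sum(a * np.sin((n + 1) * alpha) for n, a in enumerate(coeffs))

# Toy example: a pancake-like PAD peaked at 90 degrees, plus noise
alpha = np.linspace(0.1, np.pi - 0.1, 30)
flux = np.sin(alpha) ** 2 + 0.05 * np.random.default_rng(0).random(alpha.size)
coeffs = fit_pad_fourier(alpha, flux)
print(coeffs, eval_pad(np.pi / 2, coeffs))
```

Because the series is linear in its coefficients, each distribution reduces to a single least-squares solve, which scales naturally to multi-year satellite data sets.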
The second objective of this thesis is to develop a novel model of the topside ionosphere. To achieve this goal, we collected observations from five of the most widely used ionospheric missions and intercalibrated these data sets. This allowed us to use these data jointly for model development, validation, and comparison with other existing empirical models. We demonstrated, for the first time, that ion density observations by Swarm Langmuir Probes exhibit overestimation (up to ~40-50%) at low and mid-latitudes on the night side, and suggested that the influence of light ions could be a potential cause of this overestimation. To develop the topside model, we used 19 years of radio occultation (RO) electron density profiles, which were fitted with a Chapman function with a linear dependence of scale height on altitude. This approximation yields 4 parameters, namely the peak density and height of the F2-layer and the slope and intercept of the linear scale height trend, which were modeled using feedforward neural networks (NNs). The model was extensively validated against both RO and in-situ observations and was found to outperform the International Reference Ionosphere (IRI) model by up to an order of magnitude. Our analysis showed that the most substantial deviations of the IRI model from the data occur at altitudes of 100-200 km above the F2-layer peak. The developed NN-based ionospheric model reproduces the effects of various physical mechanisms observed in the topside ionosphere and provides highly accurate electron density predictions.
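The Chapman profile with a linearly varying scale height that underlies the topside model can be sketched as follows. The parameter values are illustrative only, and the exact normalisation used in the thesis may differ; the four free parameters match those named above (peak density and height of the F2-layer, plus the slope and intercept of the scale height trend).

```python
import numpy as np

def chapman_linear_scale_height(h, NmF2, hmF2, H0, slope):
    """Chapman-type electron density profile whose scale height varies
    linearly with altitude: H(h) = H0 + slope * (h - hmF2).
    The four free parameters (NmF2, hmF2, slope, intercept H0) are the
    quantities modeled by the feedforward neural networks."""
    H = H0 + slope * (h - hmF2)               # linear scale height trend
    z = (h - hmF2) / H                        # reduced height
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Illustrative values: peak density 1e12 m^-3 at 300 km, 50 km scale height
h = np.linspace(300.0, 800.0, 6)              # altitudes in km
print(chapman_linear_scale_height(h, NmF2=1e12, hmF2=300.0, H0=50.0, slope=0.1))
```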
This dissertation provides an extensive study of geospace dynamics, and the main results of this work contribute to the improvement of models of plasma populations in the near-Earth space environment.
Traditionally, mental disorders have been identified based on specific symptoms and standardized diagnostic systems such as the DSM-5 and ICD-10. However, these symptom-based definitions may only partially represent neurobiological and behavioral research findings, which could impede the development of targeted treatments. A transdiagnostic approach to mental health research, such as the Research Domain Criteria (RDoC) approach, maps resilience and broader aspects of mental health to associated components. By investigating mental disorders in a transnosological way, we can better understand disease patterns and their distinguishing and common factors, leading to more precise prevention and treatment options.
Therefore, this dissertation focuses on (1) the latent domain structure of the RDoC approach in a transnosological sample including healthy controls, (2) the associations of its domains with disease severity in patients with anxiety and depressive disorders, and (3) an overview of the scientific results found regarding Positive (PVS) and Negative Valence Systems (NVS) associated with mood and anxiety disorders.
The following main results were found: First, the latent RDoC domain structure for PVS and NVS, Cognitive Systems (CS), and Social Processes (SP) could be validated using self-report and behavioral measures in a transnosological sample. Second, we found transdiagnostic and disease-specific associations between those four domains and disease severity in patients with depressive and anxiety disorders. Third, the scoping review showed a sizable amount of RDoC research conducted on PVS and NVS in mood and anxiety disorders, with research gaps for both domains and specific conditions.
In conclusion, the research presented in this dissertation highlights the potential of the transnosological RDoC framework approach in improving our understanding of mental disorders. By exploring the latent RDoC structure and associations with disease severity and disease-specific and transnosological associations for anxiety and depressive disorders, this research provides valuable insights into the full spectrum of psychological functioning. Additionally, this dissertation highlights the need for further research in this area, identifying both RDoC indicators and research gaps. Overall, this dissertation represents an important contribution to the ongoing efforts to improve our understanding and the treatment of mental disorders, particularly within the commonly comorbid disease spectrum of mood and anxiety disorders.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. The essential functions of Se in the human body are manifested through a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are involved in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Se deficiency is therefore known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. The range between deficiency and overexposure thus represents the optimal Se supply; however, this range is narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this makes the assessment of the Se epidemiological status noticeably difficult. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic low-molecular-mass species. Thus, Se exposure depends not only on the daily intake but also on the chemical form in which it is present.
The essential functions of selenium have been known for a long time, and its primary forms in different food sources have been described. Nevertheless, the analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and a potentially strong antioxidant. Studies in populations whose diet relies largely on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited by the lack of ways to obtain the pure compound. A prerequisite for this work was the first successful approach to SeN synthesis at the University of Graz, utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a concentration of 100 μM. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data for SeN showed slow but substantial transfer: a statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier, while an increase in Se content was already clearly visible after 6 hours of incubation with 1 μM of SeN. Whereas the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer, suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies, in which ET accumulation was observed in the rat brain even though the rat BBB lacks the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. It can therefore be concluded that SeN may reach the brain without significant transformation.
In the third part of this work, the antioxidant properties of SeN were assessed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine; however, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can serve as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast, SeN had no effect on GPx activity. To examine whether SeN is embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which arise when the sulfur in ET is exchanged for Se. Therefore, SeN is of particular interest for research not as a part of Se metabolism, but as an important endemic dietary antioxidant.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and they have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable to those from other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical physically based hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming, which makes large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
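As a rough illustration of the TWI approach, the sketch below computes the index and calibrates a threshold τ against a reference inundation map. Note that the study calibrated τ by maximum likelihood, whereas this stand-in scores candidate thresholds with a simple hit-rate-minus-false-alarm criterion; all names and data are hypothetical.

```python
import numpy as np

def twi(upslope_area, slope_rad):
    """Topographic wetness index, TWI = ln(a / tan(beta)), with a the
    specific upslope contributing area and beta the local slope."""
    return np.log(upslope_area / np.tan(np.clip(slope_rad, 1e-6, None)))

def calibrate_tau(twi_grid, flooded, taus):
    """Choose the TWI threshold tau that best reproduces a reference
    inundation map ('flooded', a boolean grid from the hydrodynamic
    model). The study used maximum likelihood estimation; the
    hit-rate-minus-false-alarm score below is a simple stand-in."""
    best_tau, best_score = None, -np.inf
    for tau in taus:
        pred = twi_grid >= tau
        score = np.mean(pred[flooded]) - np.mean(pred[~flooded])
        if score > best_score:
            best_tau, best_score = tau, score
    return best_tau

# Toy demo on a random terrain grid
rng = np.random.default_rng(1)
grid = twi(rng.uniform(1.0, 1e4, (50, 50)), rng.uniform(0.01, 0.5, (50, 50)))
flooded = grid > np.quantile(grid, 0.8)       # pretend hydrodynamic output
print(calibrate_tau(grid, flooded, np.linspace(grid.min(), grid.max(), 50)))
```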
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the performance, advantages, and limitations of the different models. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they consider these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmarking machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models’ transferability in space and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results point out that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
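A minimal sketch of the random forest benchmark on tabular predictors might look as follows (scikit-learn, with randomly generated toy data standing in for the Berlin flood inventory and its 11 real predictive features such as altitude and aspect):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in for the flood inventory: one row per raster cell,
# 11 columns for terrain/land-cover predictors (altitude, aspect, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 11))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=5000)) > 1.0   # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("importances:", rf.feature_importances_.round(3))
```

The built-in feature importances are one simple way to rank predictors, as done above for altitude versus aspect; the study's exact importance measure may differ.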
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates data-driven models’ transferability to predict urban pluvial floodwater depth and the models’ ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings of the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, while CNN models could control the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside the training domains.
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models to map urban pluvial flooding. However, further studies are crucial to develop methods that completely overcome the limitations of 2D hydrodynamic models.
Towards unifying approaches in exposure modelling for scenario-based multi-hazard risk assessments
(2023)
This cumulative thesis presents a stepwise investigation of the exposure modelling process for risk assessment due to natural hazards, highlighting its importance, which has received little discussion to date, and the associated uncertainties. Although “exposure” refers to a very broad concept covering everything (and everyone) susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed relying fully on unverified expert elicitation over data sources (e.g., outdated census datasets), and hence have been implicitly assumed to be static in time and space. Moreover, their spatial representation has typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings and the related epistemic uncertainties embedded within exposure models are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is studied within the scope of scenario-based earthquake loss models. Then, optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities are proposed, focusing on ground-shaking and tsunami risks. Once experience is gained in the study of the composition and spatial aggregation of exposure for various hazards, this thesis moves towards a multi-hazard context, addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method that accounts for pre-existing damage descriptions of building portfolios as a key input for scenario-based multi-risk assessment. Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices, through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with the social risk perceptions of the communities directly exposed to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on such a composition (i.e., the proportions per building typology). This is achieved by integrating high-quality real observations and thereby capturing the intrinsic probabilistic nature of the exposure model. Such observations are incorporated as real evidence from two sources: field inspections (Chapter 2) and freely available data used to update existing (but outdated) exposure models (Chapter 3). In these two chapters, earthquake scenarios with parametrised ground motion fields were used transversally to investigate, through sensitivity analyses, the role of the epistemic uncertainties related to the exposure composition. Parametrised scenarios of seismic ground shaking were the hazard input used to study the physical vulnerability of building portfolios. The second issue, the spatial aggregation of building exposure models, was investigated within two decoupled vulnerability contexts: due to seismic ground shaking, through the integration of remote sensing techniques (Chapter 3), and within a multi-hazard context, by integrating the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities, pursuing both computational efficiency and accuracy in the risk estimates due to such independent hazard scenarios (i.e., earthquake and tsunami), is discussed. The physical vulnerability of large-area building portfolios due to tsunamis is thus considered through two main frames, considering and disregarding the interaction at the vulnerability level (through consecutive and decoupled hazard scenarios, respectively), which were then contrasted.
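For the update of typology proportions, a standard conjugate Dirichlet-multinomial scheme captures the flavour of such a Bayesian step. Whether the thesis uses exactly this form is not stated here, and the typology names and numbers below are purely illustrative.

```python
import numpy as np

def update_typology_proportions(prior_alpha, observed_counts):
    """Conjugate Dirichlet-multinomial update of the proportions per
    building typology: an expert-based prior (Dirichlet concentration
    parameters) is combined with building counts from surveys."""
    post = np.asarray(prior_alpha, float) + np.asarray(observed_counts, float)
    return post, post / post.sum()            # posterior params and mean

# Illustrative: 3 typologies, expert prior worth 20 pseudo-buildings
prior = [10.0, 6.0, 4.0]                      # e.g. masonry, RC frame, timber
counts = [35, 80, 5]                          # buildings seen in inspections
_, mean_props = update_typology_proportions(prior, counts)
print(mean_props)                             # updated typology proportions
```

The prior's total concentration acts as a pseudo-sample size, so abundant field observations naturally dominate a weakly held expert prior.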
Contrary to Chapter 4, where no cumulative damage is addressed, Chapter 5 integrates data and approaches generated in former sections with a novel modular method to study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damage and losses after earthquakes of increasing magnitude followed by their respective tsunamis. The novel method is grounded in the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the cumulative damage likely to be experienced by a building stock located in a volcanic multi-hazard setting (ash fall and lahars). In that chapter, special focus is placed on the manner in which the forecasted loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models; explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., the level of understanding that the exposed communities have of their own risk) were jointly considered. This integration ultimately allowed the thesis to contribute to enhancing preparedness, science dissemination at the local level, and technology transfer initiatives.
Finally, a synthesis of this thesis along with some perspectives for improvement and future work are presented.
The shallow Earth's layers are at the interplay of many physical processes: some driven by atmospheric forcing (e.g., precipitation, temperature), whereas others take their origins at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to continuously change its mechanical properties, thereby modulating the strength of the surface geomaterials and the hydrological fluxes. Because our societies settle on and rely on the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the ever-changing physical conditions under our feet is through the inference of seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated, comprising the key controls of groundwater recharge in steep landscapes and the predictability and duration of the transient physical properties due to earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle; a part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid) in various seismically active areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools, and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
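One common way to extract velocity changes in seismic interferometry is the stretching technique. The sketch below is a simplified stand-in for the processing used in the dissertation, not its actual pipeline; sign conventions for dv/v vary across the literature, and the synthetic coda is purely illustrative.

```python
import numpy as np

def stretching_dv_v(ref, cur, t, eps_grid=np.linspace(-0.05, 0.05, 201)):
    """Stretching technique: find the stretch factor eps that best maps
    the current correlation onto the reference, cur(t*(1+eps)) ~ ref(t),
    and interpret dv/v = -eps (sign conventions vary in the literature)."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        cc = np.corrcoef(ref, np.interp(t * (1.0 + eps), t, cur))[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# Toy demo: a coda compressed by a ~1% velocity increase
t = np.linspace(1.0, 10.0, 500)
ref = np.sin(7.0 * t) * np.exp(-0.2 * t)
cur = np.sin(7.0 * t * 1.01) * np.exp(-0.2 * t)
print(stretching_dv_v(ref, cur, t))           # dv/v close to +0.01
```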
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES for educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance over time for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
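The distinction the DSEM framework draws can be caricatured with a plain AR(1) process: an autoregressive coefficient above 1 produces the explosive, strict-CA trajectories, while a coefficient below 1 levels off. The sketch below is illustrative only and not the study's actual model specification; all parameter values are assumptions.

```python
import numpy as np

def simulate_wage_path(ar, n_years=38, start=10.0, sigma=0.5, seed=0):
    """AR(1) wage dynamics w_t = ar * w_{t-1} + noise. An autoregressive
    coefficient ar > 1 produces explosive growth (strict cumulative
    advantage); ar < 1 produces trajectories that level off."""
    rng = np.random.default_rng(seed)
    w = np.empty(n_years)
    w[0] = start
    for year in range(1, n_years):
        w[year] = ar * w[year - 1] + rng.normal(0.0, sigma)
    return w

print(simulate_wage_path(0.95)[-1])   # stabilizing path
print(simulate_wage_path(1.05)[-1])   # explosive path (strict CA)
```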
The third study of this dissertation, Study III, investigated the role of observation timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous time dynamic structural equation models, the study examines the seemingly counterintuitive potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and recovery of the true intervention effect parameters.
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
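A varied time-buying strategy of this kind can be sketched as weighted sampling over utterance categories. The category names, example utterances, and probabilities below are placeholders, not the classification scheme or corpus estimates from this work.

```python
import random

# Placeholder inventory: categories, example utterances, and sampling
# probabilities standing in for estimates from a human time-buying corpus.
TIME_BUYING = {
    "filler":        (["uhm...", "let's see..."],           0.35),
    "status_update": (["I'm still searching the list."],    0.30),
    "small_talk":    (["quite a few flights today!"],       0.20),
    "wait_request":  (["one moment, please."],              0.15),
}

def buy_time():
    """Sample a time-bridging utterance: first draw a category according
    to its weight, then an utterance from within that category."""
    cats = list(TIME_BUYING)
    weights = [TIME_BUYING[c][1] for c in cats]
    cat = random.choices(cats, weights=weights, k=1)[0]
    return random.choice(TIME_BUYING[cat][0])

print([buy_time() for _ in range(3)])   # produced while the search runs
```

Sampling at the category level first is what keeps the behavior varied: it avoids the repetitive wait-request pattern that listeners were found to penalize.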
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is called co-nonsolvency, in which a polymer is soluble in two individual solvents but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators, and carrier structures, this field has not yet been extensively studied apart from the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation, and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition–fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropylmethacrylamide (NIPMAM) and N-vinylisobutyramide (NVIBAM), as well as of a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to that of the well-known PNIPAM in aqueous solutions with 9 different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the 3 homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents, and the polymer concentration. The results obtained shed light on the thermoresponsive, co-nonsolvency, and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to temperature and solvent mixture changes.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed to determine to what extent SC might help to alleviate the experienced stress and promote the use of more salutary coping while dealing with stressful circumstances. These processes might ultimately help improve one’s affective well-being. Derived from that, it was hypothesized that more SC is linked to less perceived stress and an intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were targeted in three single studies and one meta-study. To test my assumptions about the relations of SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress, and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2934), with a mean age of 52.76 years, analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to extend the findings of the longitudinal studies to the intraindividual level. A sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were processed using 1-1-1 multilevel mediation analyses.
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Considering the relations between SC and stress processing variables in all three single studies, cross-lagged paths from the longitudinal data, as well as multilevel modeling paths from the ambulatory assessment data indicated a notable relation between all relevant stress variables. As expected, results showed a significant negative relation between SC and perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at the dispositional and intra-individual level. However, considering the mediational hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while effects of the mediational pathways through coping responses were less robust.
Conclusion: Thus, a more self-compassionate attitude and higher momentary SC, when needed in specific situations, can help to engage in effective stress processing. Considering the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at the dispositional and at the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducting oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude, and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes and consequences associated with its passage, in relation to the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear. Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction, in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S); the shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) A series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S), which aim to investigate the relative contribution of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT.
The first main finding of this work suggests that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in the pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere, thus weakening the continental plate. This process contributes to the efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening of the thick sediments covering the shield margin and the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work suggests that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; the flat-slab thus acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segments in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from contraction of the crust in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
This thesis bridges two areas of mathematics: algebra on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) as well as the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products, allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space, and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results in the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
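In symbols, the basic notions can be rendered roughly as follows; the notation (⊤ for the locality relation) is chosen for this summary and may differ from the thesis's conventions.

```latex
% A pre-locality vector space is a pair (V, \top) with \top a symmetric
% relation; v \top w reads "v and w are locally independent".
\[
  (V, \top), \qquad \top \subseteq V \times V \ \text{symmetric}.
\]
% The locality tensor product retains only tensors built from locally
% independent elements:
\[
  V \otimes_{\top} V := \operatorname{span}\{\, v \otimes w : v \top w \,\}
  \subseteq V \otimes V.
\]
```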
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions, and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated with the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
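For orientation, one common normalisation of a Shintani zeta function attached to a matrix A = (a_{ji}) with non-negative real entries reads as follows; shift parameters, which fuller definitions include, are omitted here.

```latex
\[
  \zeta(\mathbf{s}; A) = \sum_{n_1, \dots, n_r \ge 1}
  \prod_{j=1}^{m} \Bigl( \sum_{i=1}^{r} a_{ji}\, n_i \Bigr)^{-s_j},
  \qquad A = (a_{ji}) \in \mathbb{R}_{\ge 0}^{m \times r}.
\]
% With m = r = 1 and a_11 = 1 this reduces to the Riemann zeta function.
```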
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy where we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, analogously to fossil records, can unveil the past history of the genesis of the Galaxy. The processes that lead to the formation of large structures, such as the Milky Way, are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, though not less important, breakthrough is the APOGEE survey, which has observed spectra in the near-infrared, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way structure, we use and develop the Bayesian isochrone fitting tool/code called StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy based on stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE, and photometric surveys and using StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner MW, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits showing both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a high quantity of super-metal-rich stars, up to three times solar, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also enter into the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. For a further study of the disks in the solar neighbourhood, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, contrary to the thin disk, which spans a wide range of ages.
Built on groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Technologically important, environmentally friendly InP quantum dots (QDs), typically used as green and red emitters in display devices, can achieve exceptional photoluminescence quantum yields (PL QYs) of near-unity (95-100%) when the state-of-the-art core/shell heterostructure of a ZnSe inner shell and a ZnS outer shell is elaborately applied. Nevertheless, this has led to only a few industrial applications, such as QD liquid crystal displays (QD–LCD) applied to blue backlight units, even though QDs, due to their functionalizable characteristics, offer many possibilities for industrially feasible applications, such as QD light-emitting diodes (QD‒LEDs) and luminescent solar concentrators (LSC).
Before introducing the main research, the theoretical basis and fundamentals of QDs are described in detail on the basis of quantum mechanics and experimental synthetic results: the concepts of QDs and colloidal QDs, type-I core/shell structures, transition-metal-doped semiconductor QDs, the surface chemistry of QDs, and their applications (LSC, QD‒LEDs, and EHD jet printing) are elucidated in sequence for better understanding. This doctoral thesis mainly focuses on the connectivity between QD materials and QD devices, based on the synthesis of InP QDs that are composed of an inorganic core (core/shell heterostructure) and an organic shell (surface ligands on the QD surface). In particular, as for the former (core/shell heterostructure), a ZnCuInS mid-shell is newly introduced as an intermediate layer between a Cu-doped InP core and a ZnS shell for LSC devices. As for the latter (surface ligands), the ligand effects of 1-octanethiol and chloride ions are investigated with respect to device stability in QD‒LEDs and printability in an electro-hydrodynamic (EHD) jet printing system; this research explores the behavior of surface ligands based on a proton transfer mechanism on the QD surface.
Chapter 3 demonstrates the synthesis of strain-engineered, highly emissive Cu:InP/Zn–Cu–In–S (ZCIS)/ZnS core/shell/shell heterostructure QDs via a one-pot approach. When this unconventional combination of a ZCIS/ZnS double shelling scheme is applied to a series of Cu:InP cores with different sizes, the resulting Cu:InP/ZCIS/ZnS QDs, with a tunable near-IR PL range of 694–850 nm, yield the highest-ever PL QYs of 71.5–82.4%. These outcomes strongly point to the efficacy of the ZCIS interlayer, which effectively alleviates the core/shell interfacial strain, in achieving high emissivity. The presence of such an intermediate ZCIS layer is further examined by comparative size, structural, and compositional analyses. The end of this chapter briefly introduces the research, currently in progress, on LSC devices fabricated from Cu:InP/ZCIS/ZnS QDs.
Chapter 4 mainly deals with the ligand effect in 1-octanethiol passivation of InP/ZnSe/ZnS QDs in terms of incomplete surface passivation during synthesis. This chapter demonstrates the lack of anionic carboxylate ligands on the surface of InP/ZnSe/ZnS quantum dots (QDs), where zinc carboxylate ligands can be converted to carboxylic acid or carboxylate ligands via proton transfer by 1-octanethiol. The as-synthesized QDs initially have an under-coordinated vacancy surface, which is passivated by solvent ligands such as ethanol and acetone. Upon exposure of the QD surface to 1-octanethiol, 1-octanethiol effectively induces the surface binding of anionic carboxylate ligands (derived from zinc carboxylate ligands) by proton transfer, which consequently exchanges the ethanol and acetone ligands bound to the incomplete QD surface. Systematic chemical analyses, such as thermogravimetric analysis‒mass spectrometry and proton nuclear magnetic resonance spectroscopy, directly show the interplay of surface ligands and relate it to QD light-emitting diodes (QD‒LEDs).
Chapter 5 shows the relation between the material stability of QDs and the device stability of QD‒LEDs through an investigation of surface chemistry and shell thickness. In typical III–V colloidal InP QDs, an inorganic ZnS outermost shell is used to provide stability when overcoated onto the InP core. However, this work presents a faster photo-degradation of InP/ZnSe/ZnS QDs with a thicker ZnS shell than of those with a thin ZnS shell when 1-octanethiol is applied as the sulfur source to form the ZnS outermost shell. Here, 1-octanethiol induces the formation of weakly bound carboxylate ligands via proton transfer on the QD surface, resulting in faster degradation under UV light even though a thicker ZnS shell was formed onto the InP/ZnSe QDs. Detailed insight into the surface chemistry was obtained from proton nuclear magnetic resonance spectroscopy and thermogravimetric analysis–mass spectrometry. However, the lifetimes of the electroluminescence devices fabricated from InP/ZnSe/ZnS QDs with a thick or a thin ZnS shell surprisingly show the opposite result to the material stability of the QDs: the QD light-emitting diodes (QD‒LEDs) with thick-ZnS-shelled QDs maintained their luminance more stably than those with thin-ZnS-shelled QDs. This study elucidates the degradation mechanisms of the QDs and the QD light-emitting diodes based on these results and discusses why the material stability of QDs differs from the lifetime of QD‒LEDs.
Chapter 6 suggests a method to improve the printability of EHD jet printing when QD materials are applied to QD ink formulation, introducing GaP mid-shelled InP QDs and the role of surface charge in the EHD jet printing technique. In general, a GaP intermediate shell has been introduced in III–V colloidal InP QDs to enhance their thermal stability and quantum efficiency, as in the type-I core/shell/shell heterostructure InP/GaP/ZnSeS QDs. Here, these highly luminescent InP/GaP/ZnSeS QDs were synthesized and applied to EHD jet printing, by which this study demonstrates that unreacted Ga and Cl ions on the QD surface reduce the operating voltage of the cone jet and stabilize cone-jet formation, respectively. This result indicates that the GaP intermediate shell not only improves the PL QY and thermal stability of InP QDs but also adjusts the critical flow rate required for cone-jet formation. In other words, the surface charges of quantum dots can play a significant role in forming the cone apex in the EHD capillary nozzle. For an industrially convenient validation of the surface charges on the QD surface, zeta potential analyses of QD solutions were performed as a simple method, as well as inductively coupled plasma optical emission spectrometry (ICP-OES) for the elemental composition.
Beyond the generation of highly emissive InP QDs with narrow FWHM, these studies address the connection between QD materials and QD devices, not only providing a vital jumping-off point for industrially feasible applications but also revealing, from chemical and physical standpoints and both experimentally and theoretically, the origins that obstruct improvements in device performance.
In this dissertation, mechanically stable hydrogels were successfully synthesized via free radical polymerization (FRP) in water. The sulfobetaine SPE served as the main monomer. It was reacted with the crosslinker TMBEMPA/Br, which was prepared via first- or second-order nucleophilic substitution.
The resulting networks were analyzed in the equilibrium swollen state, essentially by low-field nuclear magnetic resonance spectroscopy, small-angle X-ray scattering (SAXS), cryogenic scanning electron microscopy (cryo-SEM), dynamic mechanical analysis (DMA), rheology, thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC).
The hierarchically structured network was subsequently used for the matrix-controlled mineralization of calcium phosphate and calcium carbonate. Using the alternate soaking method and varying mineralization parameters such as pH, concentration c, and temperature T, different modifications of calcium phosphate were then generated. The resulting hybrid material was analyzed qualitatively by X-ray powder diffraction (XRD), attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), Raman spectroscopy, scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDXS), and optical microscopy (OM), as well as quantitatively by gravimetry and TGA.
For potential use in medical technology, e.g. as an implant material, a basic assessment of the interaction between the hydrogel or hybrid material and various cell types is essential. To this end, different cell types, such as unicellular organisms, bacteria, and adult stem cells, were used. The interaction with peptide sequences from phages completes the biological subchapter.
Hydrogels can be used in manifold ways. This thesis therefore also outlines further project perspectives beyond the biomedical application spectrum. First approaches to serial and tailor-made production via inkjet printing were achieved. To make this possible, further synthesis strategies, such as photopolymerization and redox-initiated polymerization, were successfully exploited. The suitability as a filter material or superabsorber was also analyzed.
Layered structures are ubiquitous in nature and in industrial products; their individual layers can have different mechanical/thermal properties and functions, each contributing independently to the performance of the whole layered structure in its application. Tuning each layer therefore affects the performance of the whole layered system.
Pores are utilized in various disciplines where low density but large surface area is demanded. Open, interconnected pores can additionally act as transport channels for guest molecules. The shape of the pores influences the compression behavior of the material. However, introducing pores decreases the density and consequently the mechanical strength. To maintain a defined mechanical strength under various stresses, a porous structure can be reinforced with agents such as fibers, fillers, or a layered structure that bears the mechanical stress in the intended application.
In this context, this thesis aimed to generate new functions in bilayer systems by combining layers having different moduli and/or porosity, and to develop suitable processing techniques to access these structures.
Manufacturing processes for layered structures often employ organic solvents, which typically cause environmental pollution. The bilayer structures studied here were therefore manufactured by processes free of organic solvents.
In this thesis, three bilayer systems were studied to answer the following questions.
First, while various methods of introducing pores in the melt phase have been reported for single-layer constructs with simple geometry, can such methods be applied to a bilayer structure, giving two porous layers?
This was addressed with Bilayer System 1. Two porous layers were obtained by melt-blending two different polyurethanes (PU) with polyvinyl alcohol (PVA) in a co-continuous phase, followed by sequential injection molding and leaching of the PVA phase in deionized water. A porosity of 50 ± 5% with high interconnectivity was obtained, with pore sizes in both layers ranging from 1 µm to 100 µm and averaging 22 µm. The pores were tailored by annealing at relevant high temperatures of 110 °C and 130 °C, which allowed the porosity to be kept constant. The disadvantages of this system are that a maximum of 50% porosity could be reached and that removal of the leaching material in the weld-line section of both layers is not guaranteed. Such a construct serves as a model bilayer porous structure for determining structure-property relationships with respect to the pore size, porosity, and mechanical properties of each layer. This fabrication method is also applicable to complex geometries by designing a suitable mold for injection molding.
Secondly, the supercritical CO2 (scCO2) foaming process at elevated temperature and pressure is considered a green manufacturing process. Employing it as a post-treatment can erase the orientation history of polymer chains imparted by previous fabrication steps. Can a bilayer structure be fabricated by a combination of sequential injection molding and scCO2 foaming, in which a porous layer is supported by a compact layer?
Such a construct (Bilayer System 2) was generated by sequential injection molding of a PCL (Tm ≈ 58 °C) layer and a PLLA (Tg ≈ 58 °C) layer. Soaking this structure in an autoclave with scCO2 at T = 45 °C and P = 100 bar led to selective foaming of the PCL with a porosity of 80%, while the PLLA layer remained compact. The scCO2 treatment produced a porous core and a skin layer in the PCL; in addition, the degree of crystallinity of the PLLA layer increased from 0 to 50% at the given temperature and pressure. Both the microcellular structure of the PCL and the degree of crystallinity of the PLLA were controlled by increasing the soaking time.
Thirdly, micro/nanoscale wrinkles on surfaces alter surface-related properties. Wrinkles form on the surface of a bilayer structure consisting of a compliant substrate and a stiff thin film; the wrinkles reported so far, however, were not reversible. Dynamic wrinkles at the nano- and microscale have numerous counterparts in nature, such as gecko foot hairs offering reversible adhesion and the self-cleaning ability of lotus leaves that alters surface hydrophobicity. It was envisioned to imitate this biomimetic function in a bilayer structure, realizing self-assembled on/off patterns on the surface of the construct.
In summary, developing layered constructs that have different properties/functions in the individual layers or exhibit a new function as a consequence of the layered structure can provide novel insights for designing layered constructs in various disciplines such as the packaging and transport industry, the aerospace industry, and health technology.
The present work focuses on the preparation and characterisation of various nanoplastic reference material candidates. Nanoplastics are plastic particles in a size range of 1 − 1000 nm. The term has emerged in recent years to distinguish them from the larger microplastics (1 − 1000 μm). Since the properties of the two particle classes differ significantly because of their size, dedicated nanoplastic reference materials are important. Such materials were produced for the polymer types polypropylene (PP) and polyethylene (PE) as well as poly(lactic acid) (PLA).
A top-down method was used to produce the nanoplastics for the polyolefins PP and PE (Section 3.1). The material was crushed in acetone using an Ultra-Turrax disperser and then transferred to water. This process produces reproducible results when repeated, making it suitable for the production of a reference material candidate. The resulting dispersions were investigated using dynamic and electrophoretic light scattering. The dispersion of PP particles gave a mean hydrodynamic diameter Dh = 180.5 ± 5.8 nm with a PDI = 0.08 ± 0.02 and a zeta potential ζ = −43.0 ± 2.0 mV. For the PE particles, a diameter Dh = 344.5 ± 34.6 nm with a PDI = 0.39 ± 0.04 and a zeta potential of ζ = −40.0 ± 4.2 mV was measured. Both dispersions therefore qualify as nanoplastics, as the particles are < 1000 nm. Furthermore, the starting material of these polyolefin particles was mixed with a gold salt and the nanoplastic production repeated in order to obtain gold-doped nanoplastic particles, which should simplify the detection of the particles.
In addition to the top-down approach, a bottom-up method was chosen for the PLA (Section 3.2). Here, the polymer was first dissolved in THF and stabilised with a surfactant. Then water was added and the THF evaporated, leaving an aqueous PLA dispersion. This dispersion was also investigated using dynamic light scattering and, when the preparation was repeated, yielded reproducible results, i.e. an average hydrodynamic diameter of Dh = 89.2 ± 3.0 nm. Since the mass concentration of PLA in the dispersion is known from the production method, a Python notebook was tested on these samples to calculate the number and mass concentration of nano(plastic) particles from multi-angle light scattering (MALS) results. Similar to the plastics produced in Section 3.1, gold was also incorporated into the particles; this was achieved by adding a dispersion of gold clusters with a diameter of D = 1.15 nm in an ionic liquid (IL) during the production process. The preparation of the gold clusters in the ionic liquid 1-ethyl-3-methylimidazolium dicyanamide ([Emim][DCA]) represented the first use of an IL both as a reducing agent for gold and as a solvent for the gold clusters. Two volumes of gold cluster dispersion were added during the PLA particle synthesis. The addition of the gold clusters leads to much larger particles: the nanoPLA with 0.8% Au has a diameter of Dh = 198.0 ± 10.8 nm and the nanoPLA with 4.9% Au a diameter of Dh = 259.1 ± 23.7 nm. First investigations by TEM imaging show that the nanoPLA particles form hollow spheres when gold clusters are added. However, the mechanism leading to these structures remains unclear.
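The conversion from a known mass concentration to a particle number concentration follows directly from the volume of a single particle. The sketch below illustrates the principle only; it is not the notebook's actual code, it assumes spherical particles, treats the hydrodynamic diameter as the geometric diameter, and uses an assumed PLA density and mass concentration.

```python
# Minimal sketch: mass concentration -> particle number concentration.
# Density and mass concentration are illustrative assumptions.
import math

D = 89.2e-9      # mean hydrodynamic diameter in m (DLS result from above)
rho = 1240.0     # assumed PLA density in kg/m^3
c_mass = 1.0e-3  # assumed mass concentration in kg/m^3 (i.e. 1 mg/L)

m_particle = rho * math.pi / 6.0 * D**3  # mass of a single sphere in kg
n_number = c_mass / m_particle           # particle number concentration in 1/m^3
print(f"number concentration: {n_number:.2e} particles per m^3")
```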
The deformation style of mountain belts is greatly influenced by the upper-plate architecture created during preceding deformation phases. The Mesozoic Salta Rift extensional phase created a dominant structural and lithological framework that controls Cenozoic deformation and exhumation patterns in the Central Andes. Studying the nature of these pre-existing anisotropies is key to understanding the spatiotemporal distribution of exhumation and its controlling factors. The Eastern Cordillera, in particular, has a structural grain that is partly controlled by Salta Rift structures and their orientation relative to Andean shortening. As a result, there are areas in which Andean deformation prevails and areas where the influence of the Salta Rift is the main control on deformation patterns.
Between 23 and 24°S, lithological and structural heterogeneities imposed by the Lomas de Olmedo sub-basin (Salta Rift basin) affect the development of the Eastern Cordillera fold-and-thrust belt. The inverted northern margin of the sub-basin now forms the southern boundary of the intermontane Cianzo basin. The former western margin of the sub-basin is located at the confluence of the Subandean Zone, the Santa Barbara System and the Eastern Cordillera. Here, the Salta Rift basin architecture is responsible for the distribution of these morphotectonic provinces. In this study we use a multi-method approach consisting of low-temperature (U-Th-Sm)/He and apatite fission track thermochronology, detrital geochronology, structural and sedimentological analyses to investigate the Mesozoic structural inheritance of the Lomas de Olmedo sub-basin and Cenozoic exhumation patterns.
Characterization of the extension-related Tacurú Group as an intermediate succession between Paleozoic basement and the syn-rift infill of the Lomas de Olmedo sub-basin reveals a Jurassic maximum depositional age. Zircon (U-Th-Sm)/He cooling ages record a pre-Cretaceous onset of exhumation for the rift shoulders in the northern part of the sub-basin, whereas the western shoulder shows a more recent onset (140–115 Ma). Variations in the sedimentary thickness of syn- and post-rift strata document the evolution of accommodation space in the sub-basin. While the thickness of syn-rift strata increases rapidly toward the northern basin margin, the post-rift strata thickness decreases toward the margin and forms a condensed section on the rift shoulder.
Inversion of Salta Rift structures commenced between the late Oligocene and Miocene (24–15 Ma) in the ranges surrounding the Cianzo basin. The eastern and western limbs of the Cianzo syncline, located in the hanging wall of the basin-bounding Hornocal fault, show diachronous exhumation. At the same time, western fault blocks of the Tilcara Range, south of the Cianzo basin, began exhuming in the late Oligocene to early Miocene (26–16 Ma). Eastward propagation to the frontal thrust and to the Paleozoic strata east of the Tilcara Range occurred in the middle Miocene (22–10 Ma) and the late Miocene–early Pliocene (10–4 Ma), respectively.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnical or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling’s classical segregation model two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors, which have the same type as the agent, is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
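To make the dynamics just described concrete, the following minimal Python sketch simulates the jump variant on a torus grid. Grid size, τ, the fraction of empty cells, and the step count are illustrative parameters, not values from the thesis.

```python
# Minimal sketch of Schelling's jump dynamics on a torus grid.
# 0 marks an empty cell, 1 and 2 the two agent types; parameters are illustrative.
import numpy as np

SIZE, TAU, STEPS = 50, 0.5, 100_000
rng = np.random.default_rng(0)
cells = rng.choice([0, 1, 2], size=(SIZE, SIZE), p=[0.1, 0.45, 0.45])

def content(grid, r, c):
    """Content if at least TAU of the occupied neighbors share the agent's type."""
    same = occupied = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            n = grid[(r + dr) % SIZE, (c + dc) % SIZE]  # torus boundary
            if n != 0:
                occupied += 1
                same += n == grid[r, c]
    return occupied == 0 or same / occupied >= TAU

for _ in range(STEPS):
    r, c = rng.integers(SIZE), rng.integers(SIZE)
    if cells[r, c] != 0 and not content(cells, r, c):
        empties = np.argwhere(cells == 0)
        er, ec = empties[rng.integers(len(empties))]
        cells[er, ec], cells[r, c] = cells[r, c], 0  # discontent agent jumps
```

Even with a tolerant threshold such as τ = 0.5, runs of this kind typically end in visibly clustered configurations, which is the phenomenon the model explains.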
Although the model is well studied, previous research focused on a random process point of view. However, it is more realistic to assume instead that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As the first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as the residential area is occupied by an agent, and pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game we assume that there exist empty vertices in the graph and that agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the movement of agents locally. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model, we follow sociological surveys and study, asking the same core game-theoretic questions, non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
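For intuition about the monochrome-edge quantity, a simulation along the following lines estimates the fraction after one synchronous round of a majority-based flip rule on an Erdős–Rényi graph. The graph parameters and tie-breaking rule are illustrative assumptions, not the thesis' exact process.

```python
# Sketch: fraction of monochrome edges after one synchronous round of a
# majority-based flip rule on an Erdos-Renyi graph (illustrative parameters).
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=1)
types = {v: int(rng.integers(2)) for v in G.nodes}  # uniform random initial types

def majority_type(v):
    """Adopt the predominant type in the neighborhood; keep own type on a tie."""
    nbrs = list(G[v])
    ones = sum(types[u] for u in nbrs)
    if 2 * ones > len(nbrs):
        return 1
    if 2 * ones < len(nbrs):
        return 0
    return types[v]

new = {v: majority_type(v) for v in G.nodes}  # all agents decide simultaneously
mono = sum(new[u] == new[v] for u, v in G.edges) / G.number_of_edges()
print(f"monochrome edge fraction: {mono:.3f}")  # expected 1/2 + o(1) for G(n, p)
```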
With the advent of intense, short-pulsed light sources over the last years, the powerful technique of resonant inelastic X-ray scattering (RIXS) became feasible for a wide range of experiments on femtosecond dynamics in correlated materials and molecules.
In this thesis I investigate the potential of bringing RIXS into the fluence regime of nonlinear X-ray-matter interactions, focusing especially on the impact of stimulated scattering on RIXS in transition metal systems, in a transmission spectroscopy geometry around transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy of a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free electron lasers (FELs). There, the pulses are not only short but also very intense, which opens the door to nonlinear X-ray-matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, look at potential difficulties for classic spectroscopy, and investigate possibilities to enhance RIXS through stimulated scattering. A study on stimulated RIXS is presented in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. Thereby we show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects, namely the breakdown of scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only increase in relevance as more free electron lasers are planned and the number of experiments at such light sources continues to grow in the near future.
Finally, we present a theoretical discussion of the accessibility of small density-of-states (DOS) shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As such shifts occur, for example, in surface states, this finding could expand the experimental selectivity of NEXAFS and RIXS to the detection of surface states. We show that stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
Stars under influence: evidence of tidal interactions between stars and substellar companions
(2023)
Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary path the bodies would have followed had they been single objects. Tidal processes are well established in Solar System planet-moon systems and in close stellar binary systems. But how do stars behave if they are orbited by a substellar companion (e.g. a planet or a brown dwarf) on a tight orbit?
Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. The star, in turn, gains angular momentum, which can increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the facts that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active simply because of its young age.
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star provides the magnetic activity baseline for the host star; assuming the two stars have coevolved, the companion star indicates the host's activity level in the absence of tidal effects. Firstly, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf has an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that substellar-hosting wide binary systems are good tools to reveal the tidal effect on host stars, and it also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which defines the tidal dissipation efficiency in the convective envelope of a given main-sequence star.
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth's interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon is extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. Estimates of deeply subducted carbon range broadly from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has very low solubility in mantle silicates and is therefore expected to be stored primarily in accessory phases such as carbonates. Among those carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite's stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation, and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at temperatures corresponding to all geotherms except the coldest one. In laser-heated diamond anvil cell experiments, in situ X-ray diffraction shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamond is observed as a sub-solidus process even at temperatures relevant to, and lower than, the coldest slab geotherm (1350 K at 33 GPa). The data obtained in this work suggest that magnesite is unstable in contact with the surrounding peridotite mantle in the uppermost lower mantle. Instead, the presence of magnesite induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions at depths of ∼700 km. Consequently, carbonates will be removed from carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. The transport of carbonate to greater depths will therefore be restricted, supporting the presence of a barrier to carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamond provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melts.
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) and nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth's lower mantle and core. The high temperatures are reached using high-powered lasers focused on the sample contained between the diamond anvils. Moreover, the transparency of diamonds to X-rays enables in situ X-ray spectroscopy measurements that probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure and high-temperature research with high-resolution X-ray spectroscopy techniques to synchrotron beamlines that do not have a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and radiospectrometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, free from chromatic aberrations over a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
The soft-template strategy enables the fabrication of composite nanomaterials with desired functionalities and structures. In this thesis, soft templates, including poly(ionic liquid) nanovesicles (PIL NVs), self-assembled polystyrene-b-poly(2-vinylpyridine) (PS-b-P2VP) particles, and glycopeptide (GP) biomolecules, have been applied for the synthesis of versatile composite particles: PILs/Cu, molybdenum disulfide/carbon (MoS2/C), and GP-carbon nanotubes-metal (GP-CNTs-metal) composites, respectively. Their possible applications as efficient catalysts have subsequently been studied in two representative reactions, i.e. CO2 electroreduction (CO2ER) and the reduction of 4-nitrophenol (4-NP).
In the first work, PIL NVs with a tunable particle size of 50 to 120 nm and a shell thickness of 15 to 60 nm were prepared via one-step free radical polymerization. With increasing monomer concentration during polymerization, their nanoscopic morphology evolves from hollow NVs to dense spheres and finally to directional worms, with multi-lamellar packing of the PIL chains occurring in all samples. The obtained PIL NVs with varied shell thickness were functionalized in situ with ultra-small Cu nanoparticles (Cu NPs, 1-3 nm) and subsequently employed as electrocatalysts for CO2ER. The hollow PILs/Cu composite catalysts exhibit a 2.5-fold enhancement in selectivity towards C1 products compared to pristine Cu NPs. This enhancement is primarily attributed to strong electronic interactions between the Cu NPs and the surface functionalities of the PIL NVs. This study sheds new light on the use of nanostructured PILs as novel electrocatalyst supports for efficient CO2 conversion.
In the second work, a novel approach towards fast degradation of 4-NP has been developed using porous MoS2/C particles as catalysts, which integrate the intrinsic catalytic properties of MoS2 with its photothermal conversion capability. Various MoS2/C composite particles were prepared using assembled PS-b-P2VP block copolymer particles as sacrificial soft templates. Intriguingly, the MoS2/C particles exhibit tailored morphologies including pomegranate-like, hollow, and open porous structures. The photothermal conversion performance of these structured particles was subsequently compared under near-infrared (NIR) light irradiation. When the open porous MoS2/C particles were employed as the catalyst for the reduction of 4-NP, the reaction rate constant increased by 1.5-fold under light illumination. This catalytic enhancement mainly results from the open porous architecture and photothermal conversion performance of the MoS2 particles. The proposed strategy offers new opportunities for efficient photothermal-assisted catalysis.
In the third work, a facile and green approach towards the fabrication of GP-CNTs-metal composites has been proposed, which utilizes a versatile GP biomolecule both as a stabilizer for CNTs in water and as a reducing agent for noble metal ions. The abundant hydrogen bonds in GP molecules bestow the formed GP-CNTs with excellent plasticity, enabling polymorphic CNT forms ranging from dispersion to viscous paste, gel, and even dough as their concentration increases. The GP molecules can reduce metal precursors at room temperature without additional reducing agents, enabling the in situ immobilization of metal NPs (e.g. Au, Ag, and Pd) on the CNT surface. The combination of the excellent catalytic properties of Pd NPs with the photothermal conversion capability of CNTs makes the GP-CNTs-Pd composite a promising catalyst for the efficient degradation of 4-NP. The obtained composite displays a 1.6-fold increase in conversion under NIR light illumination in the reduction of 4-NP, mainly owing to the strong light-to-heat conversion effect of the CNTs. Overall, the proposed method opens a new avenue for the synthesis of CNT composites as a sustainable and versatile catalyst platform.
The results presented in this thesis demonstrate the significance of soft templates for the synthesis of versatile composites with tailored nanostructures and functionalities. The investigation of these composite nanomaterials in catalytic reactions reveals their potential for the development of tailored catalysts for emerging catalytic processes, e.g. photothermal-assisted catalysis and electrocatalysis.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, which powers an active galactic nucleus jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters in which cooling is balanced by Alfvénic wave heating. To verify the modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Recent observations provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing the observations with our simulations, we confirm the validity of these modeling assumptions and, as such, the important finding of low-(momentum-)density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link and thereby constrain their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density, and feedback efficiencies, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they can impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. Outside our local environment within the Milky Way, however, CRs cannot be measured directly. There are nevertheless many ways to observe CRs indirectly via the radiation they emit through their interaction with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies, where CRs are included. This allows me to study their CR spectra and compare them to observations of the CR proton and electron spectra by the Voyager-1 satellite and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities, and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations of both the radio and gamma-ray luminosities with the far-infrared (FIR) emission of star-forming (SF) galaxies, the latter being a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape galaxies with low star formation more quickly. On top of that, I investigate the properties of CR transport that are needed to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectrally and spatially.
The relevance of physical fitness for children's and adolescents' health is indisputable, and it is crucial to regularly assess and evaluate children's and adolescents' individual physical fitness development to detect potential negative health consequences in time. Physical fitness tests are easy to administer, reliable, and valid, which is why they should be widely used to provide information on the performance development and health status of children and adolescents. When talking about the development of physical fitness, two perspectives can be distinguished. One perspective is how the physical fitness status of children and adolescents has changed over the past decades (i.e., secular trends). The other perspective covers analyses of how physical fitness develops with increasing age due to growth and maturation processes. Although the development of children's and adolescents' physical fitness has been extensively described and analyzed in the literature, some questions remain that are addressed in the present doctoral thesis.
Previous systematic reviews and meta-analyses have examined secular trends in children's and adolescents' physical fitness. However, considering that those analyses are by now 15 years old and that updates are available for only a few components of physical fitness, it is time to re-analyze the literature and examine secular trends for selected components of physical fitness (i.e., cardiorespiratory endurance, muscle strength, proxies of muscle power, and speed). Furthermore, the available studies on children's development of physical fitness, as well as the effects of moderating variables such as age and sex, have been investigated within a long-term ontogenetic perspective. However, the effects of age and sex in the transition from pre-puberty to puberty in the ninth year of life, from a short-term ontogenetic perspective, and the effect of the timing of school enrollment on children's development of physical fitness have not been clearly identified. Therefore, the present doctoral thesis seeks to complement the knowledge of children's and adolescents' physical fitness development by updating the secular trend analysis for selected components of physical fitness, by examining short-term ontogenetic cross-sectional developmental differences in children's physical fitness, and by comparing the physical fitness of older- and younger-than-keyage children versus keyage-children. These findings provide valuable information about children's and adolescents' physical fitness development to help prevent potential deficits in physical fitness as early as possible and consequently ensure holistic development and a lifelong healthy life.
Initially, a systematic review providing an 'update' on secular trends in selected components of physical fitness (i.e., cardiorespiratory endurance, relative muscle strength, proxies of muscle power, speed) in children and adolescents aged 6 to 18 years was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines. To examine short-term ontogenetic cross-sectional developmental differences and to compare the physical fitness of older- and younger-than-keyage children versus keyage-children, physical fitness data of 108,295 keyage-children (i.e., aged 8.00 to 8.99 years), 2,586 younger-than-keyage children (i.e., aged 7.00 to 7.99 years), and 26,540 older-than-keyage children (i.e., aged 9.00 to 9.99 years) from the third grade were analyzed. Physical fitness was assessed through the EMOTIKON test battery, measuring cardiorespiratory endurance (i.e., 6-min-run test), coordination (i.e., star-run test), speed (i.e., 20-m linear sprint test), and proxies of lower-limb (i.e., standing long jump test) and upper-limb (i.e., ball-push test) muscle power. Statistical inference was based on linear mixed models.
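To make the modeling approach concrete, a hedged sketch of such a linear mixed model is given below. The file name and column names are hypothetical, not the thesis' actual data layout, and the model is deliberately simpler than the published analyses.

```python
# Hedged sketch of a linear mixed model for one fitness test (hypothetical
# column names; random intercepts per school as the grouping factor).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("emotikon_third_grade.csv")  # hypothetical: one row per child

model = smf.mixedlm(
    "six_min_run_m ~ age_years + C(sex)",  # fixed effects: age and sex
    data=df,
    groups=df["school_id"],                # random intercept per school
)
result = model.fit()
print(result.summary())
```

Grouping by school accounts for the fact that children within the same school are more similar to each other than to children from other schools, which is why a plain linear regression would understate the uncertainty of the fixed effects.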
Findings from the systematic review revealed a large initial improvement and an equally large subsequent decline in cardiorespiratory endurance between 1986 and 2010, followed by a stabilization between 2010 and 2015; a general trend towards a small improvement in relative muscle strength from 1972 to 2015; an overall small negative quadratic trend for proxies of muscle power from 1972 to 2015; and a small-to-medium improvement in speed from 2002 to 2015. Findings from the cross-sectional studies showed that even within a single prepubertal year of life (i.e., the ninth year), physical fitness performance develops linearly with increasing chronological age, that boys showed better performances than girls in all physical fitness components, and that the components varied in the size of their sex and age effects. Furthermore, the findings revealed that older-than-keyage children showed poorer physical fitness than keyage-children, that older-than-keyage girls showed better performances than older-than-keyage boys, and that younger-than-keyage children outperformed keyage-children.
Given the varying secular trends in physical fitness, it is recommended to promote initiatives for physical activity and physical fitness in children and adolescents to prevent adverse effects on health and well-being. More precisely, public health initiatives should specifically target cardiorespiratory endurance and muscle strength, because both components show strong positive associations with markers of health. Furthermore, the findings imply that physical education teachers, coaches, and researchers can use a proportional adjustment to interpret the physical fitness of prepubertal school-aged children individually. Special attention should be given to promoting the physical fitness of older-than-keyage children, because they showed poorer physical fitness than keyage-children. It is therefore necessary to specifically consider this group and provide additional health and fitness programs to reduce the deficits in physical fitness they experienced during prior years and to guarantee holistic development.
Search for light primordial black holes with VERITAS using gamma-ray and optical observations
(2023)
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of four imaging atmospheric Cherenkov telescopes (IACTs). VERITAS is sensitive to very-high-energy gamma rays in the range of 100 GeV to >30 TeV. Hypothesized primordial black holes (PBHs) are attractive targets for IACTs. If they exist, their potential cosmological impact reaches beyond their candidacy as constituents of dark matter. The sublunar mass window is the largest unconstrained range of PBH masses. This thesis aims to develop novel concepts for searching for light PBHs with VERITAS. PBHs below the sublunar window lose mass due to Hawking radiation. They would evaporate at the end of their lifetime, leading to a short burst of gamma rays; PBHs formed with masses of about 10^15 g would be evaporating today. Detecting these signals would not only confirm the existence of PBHs but also prove the theory of Hawking radiation. This thesis probes archival VERITAS data recorded between 2012 and 2021 for possible PBH signals. This work presents a new automatic approach to assess the quality of the VERITAS data. The array-trigger rate and the far-infrared temperature are well suited to identify periods of poor data quality. These are masked by time cuts to obtain a consistent and clean dataset of about 4222 hours. PBH evaporations could occur at any location in the field of view and at any time within this data, so only a blind search can identify these short signals. This thesis implements a data-driven, deep-learning-based method to search for short transient signals with VERITAS; it does not depend on modelling the effective area and radial acceptance. This work presents the first application of this method to actual observational IACT data and develops new concepts to deal with the specifics of the data and the transient detection method, reflected in the data preparation pipeline and the search strategies. After correction for trial factors, no candidate PBH evaporation is found in the data. Thus, new constraints on the local rate of PBH evaporations are derived: at the 99% confidence level, the rate is below 1.07 × 10^5 pc^-3 yr^-1. This constraint, obtained with a new, independent analysis approach, is in the range of existing limits on the evaporation rate.
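For intuition on how such a rate limit follows from a null result, the 99% upper limit on the expected number of bursts for zero observed events comes directly from Poisson statistics. The sketch below uses a hypothetical effective volume-time exposure chosen only to reproduce the order of magnitude quoted above; it is not the value derived in the thesis.

```python
# Sketch: 99% CL Poisson upper limit on the PBH evaporation rate when no
# burst is detected. The effective exposure is a hypothetical placeholder.
import math

cl = 0.99
mu_up = -math.log(1.0 - cl)   # P(0 events | mu) = exp(-mu) = 1 - CL  ->  mu ~ 4.61

exposure_pc3_yr = 4.3e-5      # hypothetical effective exposure in pc^3 * yr
rate_limit = mu_up / exposure_pc3_yr
print(f"evaporation rate < {rate_limit:.2e} pc^-3 yr^-1")  # ~1e5 in this example
```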
This thesis also investigates an alternative novel approach to searching for PBHs with IACTs. Above the sublunar window, the PBH abundance is constrained by optical microlensing studies. The sampling speed, which is on the order of minutes to hours for traditional optical telescopes, is a limiting factor in extending these limits to lower masses. IACTs are also powerful instruments for fast transient optical astronomy, with sampling down to O(ns). This thesis investigates whether IACTs might constrain the sublunar window with optical microlensing observations. The study confirms that, in principle, the fast sampling speed would allow microlensing searches to be extended into the sublunar mass window. However, the limiting factor for IACTs is their modest sensitivity to changes in optical fluxes. This thesis presents the expected rate of detectable events for VERITAS as well as prospects for possible future next-generation IACTs. For VERITAS, the rate of detectable microlensing events in the sublunar range is ~10^-6 per year of observation time; the prospect for a 100-times more sensitive future instrument is ~0.05 events per year.
Sulfur is essential for the functionality of several important biomolecules in humans, such as iron-sulfur clusters, tRNAs, the molybdenum cofactor (Moco), and some vitamins. The trafficking of sulfur involves proteins collectively called sulfurtransferases, among them TUM1, MOCS3, and NFS1.
This research investigated the role of TUM1 in Moco biosynthesis and cytosolic tRNA thiolation in humans. The rhodanese-like protein MOCS3 and the L-cysteine desulfurase NFS1 have previously been demonstrated to interact with TUM1. These interactions suggested a dual function of TUM1 in sulfur transfer for Moco biosynthesis and cytosolic tRNA thiolation. TUM1 deficiency has been implicated in a rare inheritable disorder known as mercaptolactate-cysteine disulfiduria (MCDU), which is associated with a mental disorder whose symptoms resemble those of sulfite oxidase deficiency, characterised by neurological disorders. Therefore, the role of TUM1 as a sulfurtransferase in humans was investigated in CRISPR/Cas9-generated TUM1 knockout HEK 293T cell lines.
For the first time, TUM1 was implicated in Moco biosynthesis in humans by quantifying the intermediate cPMP and Moco using HPLC. Compared to the wild type, the TUM1 knockout cell lines showed an accumulation of cPMP and a reduction of Moco. The effect of the TUM1 knockout on the activity of a Moco-dependent enzyme, sulfite oxidase, was also investigated. Sulfite oxidase is essential for the detoxification of sulfite to sulfate. Sulfite oxidase activity and protein abundance were reduced due to the lower availability of Moco, showing that TUM1 is essential for efficient sulfur transfer for Moco biosynthesis. A reduction of cystathionine γ-lyase in TUM1 knockout cells was also quantified, indicating a possible coping mechanism of the cell against sulfite production through cysteine catabolism.
Secondly, the involvement of TUM1 in tRNA thio-modification at the wobble uridine-34 was demonstrated by quantifying the amounts of mcm5s2U and mcm5U via HPLC. Nucleoside analysis showed a reduction of mcm5s2U and an accumulation of mcm5U in TUM1 knockout cells. Exogenous treatment with NaHS, a hydrogen sulfide donor, rescued the Moco biosynthesis, cytosolic tRNA thiolation, and cell proliferation deficits of the TUM1 knockout cells.
Further, TUM1 was shown to impact mitochondrial bioenergetics, as demonstrated by measurements of the oxygen consumption rate and the extracellular acidification rate (ECAR) with the Seahorse Cell Mito Stress analyzer. A reduction in total ATP production was also measured. This reveals how important TUM1 is for H2S biosynthesis in the mitochondria of HEK 293T cells.
Finally, the inhibition of NFS1, both in HEK 293T cells and as purified protein, by 2-methylene-3-quinuclidinone (MQ) was demonstrated via spectrophotometric and radioactivity quantification. Inhibition of NFS1 by MQ further reduced the activity of the iron-sulfur cluster-dependent enzyme aconitase.
Over the last decades, interest in the impact of the intestinal microbiota on host health has steadily increased. Diet is a major factor that influences the gut microbiota and thereby indirectly affects human health. For example, a high-fat diet rich in saturated fatty acids led to an intestinal proliferation of the colitogenic bacterium Bilophila (B.) wadsworthia by stimulating the release of the bile acid taurocholate (TC). TC contains the sulfonated head group taurine, which B. wadsworthia converts to sulfide (H2S). In a colitis-prone murine model (IL10−/− mice), the bloom of B. wadsworthia was accompanied by an exacerbation of intestinal inflammation. B. wadsworthia is able to convert taurine as well as other sulfonates to H2S, indicating a potential association between sulfonate utilization and the stimulation of colitogenic bacteria.
This potential link raised the question of whether dietary sulfonates or their sulfonated metabolites stimulate the growth of colitogenic bacteria such as B. wadsworthia and whether these bacteria convert sulfonates to H2S. Besides taurine, which is present in meat, fish, and lifestyle beverages, other dietary sulfonates are part of daily human nutrition. Sulfolipids such as sulfoquinovosyldiacylglycerols (SQDG) are highly abundant in salad, parsley, and the cyanobacterium Arthrospira platensis (Spirulina). Based on previous findings, Escherichia (E.) coli releases the polar head group sulfoquinovose (SQ) from SQDG. Moreover, E. coli is able to convert SQ to 2,3-dihydroxypropane-1-sulfonate (DHPS) under anoxic conditions. DHPS is in turn converted to H2S by B. wadsworthia or by other potentially harmful gut bacteria such as members of the genus Desulfovibrio. However, only few studies report the conversion of sulfonates to H2S by bacteria directly isolated from the human intestinal tract; most sulfonate-utilizing bacteria were obtained from environmental sources such as soil or lake sediment, or from potentially intestinal sources such as sewage.
In the present study, fecal slurries from healthy human subjects were incubated with sulfonates under strictly anoxic conditions, using formate and lactate as electron donors. Fecal slurries that converted sulfonates to H2S were used as a source for the isolation of H2S-forming bacteria. Isolates were identified based on their 16S ribosomal RNA (16S rRNA) gene sequence. In addition, conventional C57BL/6 mice were fed a semisynthetic diet supplemented with the SQDG-rich Spirulina (SD) or a Spirulina-free control diet (CD). During the intervention, body weight and water and food intake were monitored, and fecal samples were collected. After three weeks, the mice were killed; organ weight and size were measured, intestinal sulfonate concentrations were quantified, the gut microbiota composition was determined, and parameters of intestinal and hepatic fat metabolism were analyzed.
Human fecal slurries converted taurine, isethionate, cysteate, 3-sulfolactate, SQ, and DHPS to H2S. However, inter-individual differences in the degradation of these sulfonates were observed. Taurine, isethionate, and 3-sulfolactate were utilized by the fecal microbiota of all donors, while SQ, DHPS, and cysteate were converted to H2S only by the microbiota of certain individuals. Bacterial isolates from human feces able to convert sulfonates to H2S were identified as taurine-utilizing Desulfovibrio strains, taurine- and isethionate-utilizing B. wadsworthia, or SQ- and 3-sulfolactate-utilizing E. coli. In addition, a co-culture of E. coli and B. wadsworthia led to the complete degradation of SQ to H2S, with DHPS as an intermediate. Of the human fecal isolates, B. wadsworthia and Desulfovibrio are potentially harmful. E. coli strains can also be pathogenic, but the E. coli strains isolated from human feces were identified as commensal gut bacteria.
Feeding SD to mice increased the cecal and fecal SQ concentrations and altered the microbiota composition, but the relative abundances of SQDG- or SQ-converting bacteria and of colitogenic bacteria were not enriched in mice fed SD for 21 days. SD did not affect the relative abundance of Enterobacteriaceae, to which the SQDG- and SQ-utilizing E. coli strain belongs. Furthermore, the abundance of B. wadsworthia in feces decreased from day 2 to day 9 but recovered afterwards in the same mice. In the cecum, the family Desulfovibrionaceae, to which B. wadsworthia and Desulfovibrio belong, was reduced; no changes in the number of B. wadsworthia in cecal contents or of Desulfovibrionaceae in feces were observed. SD led to a mild activation of the immune system, which was not observed in control mice fed CD. Mice fed SD had an increased body weight, a higher adipose tissue weight, and a decreased liver weight compared to the control mice, suggesting an impact of Spirulina supplementation on fat metabolism. However, the expression levels of genes involved in intestinal and hepatic intracellular lipid uptake and availability were reduced. Further investigations of lipid metabolism at the protein level could help clarify these discrepancies.
In summary, humans differ in the ability of their fecal microbiota to utilize dietary sulfonates. While sulfonates stimulated the proliferation of potentially colitogenic isolates from human fecal slurries, the increased availability of SQ in Spirulina-fed conventional mice did not lead to an enrichment of such bacteria. The presence or absence of these bacteria may explain the inter-individual differences in sulfonate conversion observed for the fecal slurries. This work provides new insights into the ability of intestinal bacteria to utilize sulfonates and thus contributes to a better understanding of microbiota-mediated effects on dietary sulfonate utilization. Interestingly, feeding the Spirulina-supplemented diet led to a body-weight gain in mice within the first two days of intervention, the reasons for which are unknown.
Ribosomes decode mRNA to synthesize proteins. Once considered static executing machines, ribosomes are now viewed as dynamic modulators of translation. Increasingly detailed analyses of structural ribosome heterogeneity have led to a paradigm shift toward ribosome specialization for selective translation. As sessile organisms, plants cannot escape harmful environments and have evolved strategies to withstand them. Plant cytosolic ribosomes are in some respects more diverse than those of metazoans, and this diversity may contribute to plant stress acclimation. The goal of this thesis was to determine whether plants use ribosome heterogeneity to regulate protein synthesis through specialized translation. I focused on temperature acclimation, specifically on shifts to low temperatures. During cold acclimation, Arabidopsis ceases growth for seven days while establishing the responses required to resume growth. Earlier results indicate that ribosome biogenesis is essential for cold acclimation: REIL mutants (reil-dkos) lacking a 60S maturation factor do not acclimate successfully and do not resume growth. Using these genotypes, I ascribed cold-induced defects of ribosome biogenesis to the assembly of the polypeptide exit tunnel (PET) by performing spatial statistics of rProtein changes mapped onto the plant 80S structure. I discovered that growth cessation and PET remodeling also occur in barley, suggesting a general cold response in plants. Cold-triggered PET remodeling is consistent with the function of Rei-1, a REIL homolog in yeast, which performs PET quality control. Using seminal data on ribosome specialization, I show that yeast remodels the tRNA entry site of its ribosomes upon a change of carbon source and demonstrate that spatially constrained remodeling of ribosomes in metazoans may modulate protein synthesis. I argue that regional remodeling may be a form of ribosome specialization and show that heterogeneous cytosolic polysomes accumulate after cold acclimation, leading to shifts in the translational output that differ between wild type and reil-dkos. I found that the heterogeneous complexes consist of newly synthesized and reused proteins. I propose that tailored ribosome complexes enable free 60S subunits to select specific 48S initiation complexes for translation. Through this remodeling, cold-acclimated ribosomes synthesize a novel proteome consistent with known mechanisms of cold acclimation. The main hypothesis arising from my thesis is that heterogeneous/specialized ribosomes alter translation preferences, adjust the proteome, and thereby activate plant programs for successful cold acclimation.
Reiz der Revolution
(2023)
The dissertation examines the multifaceted entanglements and transfers within the German solidarity movement with Nicaragua of the late 1970s and 1980s. Even before coming to power, the Sandinistas had courted foreign state and civil support in both blocs. Once in power, they built, alongside the Sandinista reform state, an international network of solidarity relations that served to finance their social-reform programs but also to legitimize their rule.
In the Federal Republic alone, several hundred solidarity groups emerged. In the GDR, the political leadership initiated a state-directed solidarity campaign with Nicaragua, which tens of thousands of people and independent grassroots initiatives joined. Despite their roots in rival systems and the heterogeneity of their worldviews, ranging from Christian social teaching to the critical left, numerous solidarity initiatives in both countries worked toward the same object: a Nicaragua beyond the blocs. Together with their Nicaraguan project partners, they opened a new transnational space for communication and, in doing so, encountered differences and disputes over political ideas that inspired new practices on both sides of the Atlantic.
The study is based on an extensive evaluation of sources from a total of 13 archives, including the archive of the Robert-Havemann-Gesellschaft, the archive of the BStU, various West German social movement archives, and the archival papers of the Nicaraguan Ministry of Culture.
Reflexion und Reflexivität
(2023)
In teacher education, reflection is considered a key category of professional development. Accordingly, the quality of reflection-related competencies is examined in many ways. One challenge here lies in the assumption that a person's reflectivity can be inferred directly from the analysis of written reflections, since a reflection should always be regarded as a context-specific image of reflection-related argumentation processes and is subject to reflection-related dispositions. Moreover, the quality of a reflection can be rated along several dimensions without permitting quantifiable, absolute statements.
Therefore, N = 134 written reflections on another person's teaching were collected in response to a physics video vignette, and context-specific reflection-related dispositions were assessed. Guided by theory, experts rated the quality of each reflection text in terms of breadth, depth, coherence, and specificity. Additional text features were extracted using computer-based classification and analysis procedures. An exploratory factor analysis yielded the factors quality, quantity, and descriptiveness. Since all conventionally rated quality dimensions were represented by a single factor, a maximal quality correlate could be calculated, to which each written reflection within the present vignette has a computationally determinable distance. This distance to the maximal quality correlate was validated and can represent the quality of written reflections in quantified form, independent of human resources. Finally, selected dispositions were found to be associated with reflection quality to varying degrees. For physics content knowledge, for example, only minimal associations were identified, whereas value orientation and perceived teaching quality can be closely related to the quality of a written reflection.
It is concluded that reflection-related dispositions can have a moderating influence on reflections. When assessing reflection for the purpose of competence measurement, it is recommended that selected dispositions be assessed as well. Furthermore, this work demonstrates that meaningful quantification is possible even in the analysis of complex constructs. Computer-based quality estimates can enable objective, individual analyses and more differentiated automated feedback.
Recurrences in past climates
(2023)
Our ability to predict the state of a system relies on its tendency to recur to states it has visited before. Recurrence also pervades common intuitions about the systems we are most familiar with: daily routines, social rituals, and the return of the seasons are just a few relatable examples. Recurrence plots (RPs) provide a systematic framework to quantify the recurrence of states. Despite their conceptual simplicity, they are a versatile tool in the study of observational data. The global climate is a complex system for which an understanding based on observational data is not only of academic relevance, but vital for the persistence of human societies within the planetary boundaries. Contextualizing current global climate change, however, requires observational data far beyond the instrumental period. The palaeoclimate record offers a valuable archive of proxy data but demands methodological approaches that adequately address its complexities. In this regard, the following dissertation aims at devising novel and further developing existing methods in the framework of recurrence analysis (RA). The proposed research questions focus on using RA to capture scale-dependent properties in nonlinear time series and on tailoring recurrence quantification analysis (RQA) to characterize seasonal variability in palaeoclimate records (‘Palaeoseasonality’).
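To make the basic construct concrete, the following minimal Python sketch builds a recurrence plot from a scalar time series by thresholding pairwise distances between states. The threshold eps, the toy sine signal, and the omission of time-delay embedding are simplifying assumptions for illustration, not choices made in the thesis.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if states i and j are
    closer than the threshold eps (no time-delay embedding here)."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise state distances
    return (dist < eps).astype(int)

# toy example: a sine recurs once per period, producing diagonal lines
t = np.linspace(0, 6 * np.pi, 300)
R = recurrence_plot(np.sin(t), eps=0.1)
print(R.mean())  # recurrence rate, the simplest RQA measure
```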
In the first part of this thesis, we focus on the methodological development of novel approaches in RA. The predictability of nonlinear (palaeo)climate time series is limited by abrupt transitions between regimes that exhibit entirely different dynamical complexity (e.g. the crossing of ‘tipping points’), which possibly depend on characteristic time scales. RPs are well established for detecting transitions and capturing scale-dependencies, yet few approaches have combined both aspects. We apply existing concepts from the study of self-similar textures to RPs to detect abrupt transitions, considering the most relevant time scales. This combination of methods further results in the definition of a novel recurrence-based nonlinear dependence measure. Quantifying lagged interactions between multiple variables is a common problem, especially in the characterization of high-dimensional complex systems. The proposed ‘recurrence flow’ measure of nonlinear dependence offers an elegant way to characterize such couplings. For spatially extended complex systems, the coupled dynamics of local variables result in the emergence of spatial patterns. These patterns tend to recur in time. Based on this observation, we propose a novel method that identifies dynamically distinct regimes of atmospheric circulation based on their recurrent spatial patterns. Bridging the two parts of this dissertation, we next turn to methodological advances of RA for the study of Palaeoseasonality. Observational series of palaeoclimate ‘proxy’ records involve inherent limitations, such as irregular temporal sampling. We reveal biases in the RQA of time series with a non-stationary sampling rate and propose a correction scheme.
In the second part of this thesis, we proceed with applications in Palaeoseasonality. A review of common and promising time series analysis methods shows that numerous valuable tools exist, but that their sound application requires adaptation to archive-specific limitations and the consolidation of transdisciplinary knowledge. Next, we study stalagmite proxy records from the Central Pacific as sensitive recorders of mid-Holocene El Niño-Southern Oscillation (ENSO) dynamics. The records' remarkably high temporal resolution allows us to draw links between ENSO and seasonal dynamics, quantified by RA. The final study presented here examines how seasonal predictability could play a role in the stability of agricultural societies. The Classic Maya underwent a period of sociopolitical disintegration that has been linked to drought events. Based on seasonally resolved stable isotope records from Yok Balum cave in Belize, we propose a measure of seasonal predictability. It reveals the potential role that declining seasonal predictability could have played in destabilizing the agricultural and sociopolitical systems of Classic Maya populations.
The methodological approaches and applications presented in this work reveal multiple exciting future research avenues, both for RA and the study of Palaeoseasonality.
The East African Rift System (EARS) is a significant example of active tectonics, which provides opportunities to examine the stages of continental faulting and landscape evolution. Its southwestern extension is among the most active segments of the rift today; nevertheless, seismotectonic research in the area has been scarce, despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of Congo. Lakes Mweru and Mweru Wantipa are part of the southwestern extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples that were collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciling the observed burial patterns with morphotectonic and stratigraphic analyses reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl). High denudation rates (up to ~40 mm ka-1) along the eastern Kundelungu Plateau suggest that footwall uplift, resulting from normal faulting, caused river incision, possibly controlling paleo-lake drainage. The lake level then fell gradually, reaching its current level at ~350 ka.
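For orientation, the textbook relation between a cosmogenic nuclide concentration and exposure time is given below for the simplest case of no erosion and no burial; this is only context for the exposure ages mentioned above, not the paired 10Be/26Al burial treatment used in the study.

```latex
% Nuclide concentration N after exposure time t, for surface production
% rate P and radioactive decay constant \lambda (no erosion, no burial):
N(t) = \frac{P}{\lambda}\left(1 - e^{-\lambda t}\right)
\qquad\Longrightarrow\qquad
t = -\frac{1}{\lambda}\,\ln\!\left(1 - \frac{\lambda N}{P}\right)
```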
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls, the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the stream power law equation, which allowed us to calculate a knickpoint retreat rate for the Kiubo Waterfalls of ~0.096 m a-1. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we present extrapolation models that estimate the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
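The stream power law referred to above is commonly written in its detachment-limited form; the notation below is the standard textbook form, reproduced here only for context.

```latex
% Detachment-limited stream power incision model (standard form):
\frac{\partial z}{\partial t} = U - K A^{m} S^{n}
% z: channel-bed elevation, U: rock uplift rate, A: upstream drainage
% area, S: local channel slope, K: erodibility, m and n: exponents.
```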
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials
(2023)
In the last several decades, eutectic mixtures of different compositions have been used successfully as solvents for a vast range of chemical processes, and only relatively recently were they discovered to be widespread in nature. As such, they are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from such sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, which means that the mixture may assume the role of the solvent as well as that of the reactant itself.
The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. REM allow a number of solvent-free reactions, which brings the benefits of superior atom and energy economy, higher yields, and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines, as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of previously unknown fluorophores shows that synthesis in REM can open up unexpected reaction pathways.
Another focus of the thesis is the study of water as a third component in the REM. As a result, the concept of two different dilution regimes (ternary REM and REM in solvent) emerges as useful for understanding the influence of water. It is shown that small amounts of water can greatly benefit the reaction by reducing viscosity while at the same time increasing reaction yields.
REM based on ammonium formate and organic acids are employed for the treatment of lignocellulosic biomass. The thesis thereby introduces an alternative approach to lignocellulosic biomass fractionation that promises considerable process intensification through the simultaneous generation of cellulose and lignin and the production of value-added chemicals from the REM components. The thesis examines the recovered cellulose and its conversion to nanocellulose and also includes a structural analysis of the extracted lignin.
Finally, the thesis investigates the potential of microwave heating for running chemical reactions in REM and describes the synergy between these two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often described within the scope of green chemistry. The thesis therefore also takes a closer look at this terminology and its broader goal of sustainability.
Rainfall-triggered landslides are a globally occurring hazard that causes several thousand fatalities per year on average and leads to economic damage by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where but also when landslides are most probable is key to informing strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high-hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The mounting availability of temporal landslide inventory data in some regions has increasingly enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remains scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
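To make the two regression types named above concrete, here is a minimal sketch using synthetic data and frequentist scikit-learn estimators as stand-ins for the fits and the real Sitka precipitation records used in the study; the predictor names, thresholds, and all numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

# synthetic daily predictors: 3-day and 14-day antecedent precipitation (mm)
rng = np.random.default_rng(0)
X = rng.gamma(2.0, 10.0, size=(1000, 2))
landslide = (X.sum(axis=1) + rng.normal(0, 10, 1000) > 70).astype(int)  # occurrence
counts = rng.poisson(np.exp(0.01 * X.sum(axis=1) - 1.0))                # daily counts

p_model = LogisticRegression().fit(X, landslide)  # daily landslide probability
n_model = PoissonRegressor().fit(X, counts)       # expected daily landslide count

today = np.array([[45.0, 120.0]])  # hypothetical antecedent rainfall for one day
print(p_model.predict_proba(today)[0, 1], n_model.predict(today)[0])
```

In a warning setting, the fitted probability would then be compared against the agreed thresholds to decide whether to raise an alert.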
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
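The intensity-duration thresholds referred to here are conventionally expressed as a power law; the form below is the standard one, given for context (the coefficients fitted in the study are not reproduced here).

```latex
% Rainfall intensity-duration threshold in the usual power-law form;
% I: mean rainfall intensity (mm/h), D: rainfall duration (h):
I = \alpha \, D^{\beta}, \qquad \beta < 0
% Events plotting above the curve are considered likely to trigger
% shallow landslides; \alpha and \beta are fitted per region or city.
```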
In the third study, I investigated the seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA, using Bayesian multi-level models to combine data from five heterogeneous landslide inventories that cover different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
Personal data privacy is considered to be a fundamental right. It forms a part of our highest ethical standards and is anchored in legislation and various best practices from the technical perspective. Yet, protecting against personal data exposure is a challenging problem from the perspective of generating privacy-preserving datasets to support machine learning and data mining operations. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviours at such a fine-grained level, thereby accelerating the formation of multi-attribute, large-scale, high-dimensional datasets.
In recent years, de-anonymisation incidents in sectors including telecommunications, transportation, financial transactions, and healthcare have received increasing news coverage and resulted in the exposure of sensitive private information. These incidents indicate that releasing privacy-preserving datasets requires serious consideration from the pre-processing perspective. A critical problem in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
This thesis focuses on large high-dimensional datasets as an example of a special case of data that is characteristically challenging to anonymise using syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics operations, or, when information loss is minimised, in low privacy due to inferences that remain possible on the data.
We postulate that this problem can be resolved effectively by searching for and eliminating all the quasi-identifiers (QIDs) present in a high-dimensional dataset. Essentially, we formalise the privacy-preserving data sharing problem as the Find-QID problem.
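For intuition, the following minimal Python sketch states the core of the problem: a set of attributes is a quasi-identifier if some combination of its values singles out fewer than k records. The exhaustive search shown here is exponential in the number of attributes, which is precisely why efficient discovery schemes are needed; the toy table, helper names, and cutoff parameters are illustrative assumptions, not the thesis's implementation.

```python
import pandas as pd
from itertools import combinations

def is_qid(df, cols, k=2):
    """cols is a quasi-identifier if some value combination occurs
    fewer than k times, i.e. it violates k-anonymity."""
    return bool((df.groupby(list(cols)).size() < k).any())

def find_qids(df, k=2, max_size=3):
    """Exhaustive Find-QID over attribute subsets up to max_size.
    The search space grows exponentially with the attribute count."""
    return [cols for size in range(1, max_size + 1)
            for cols in combinations(df.columns, size)
            if is_qid(df, cols, k)]

# toy table: no single attribute identifies anyone, but combinations do
toy = pd.DataFrame({"zip": ["14469", "14469", "10115", "10115"],
                    "age": [34, 52, 34, 52],
                    "sex": ["f", "f", "m", "m"]})
print(find_qids(toy, k=2))  # e.g. ('zip', 'age') is a QID
```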
Further, we show that, despite the complex nature of absolute privacy, the discovery of QIDs can be achieved reliably for large datasets. The risk of private data exposure through inferences can be circumvented, and both can be achieved practicably without the need for high-performance computers.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for the deterministic discovery of privacy-violating inferences. These include a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, the projection of QIDs onto Bayesian inferences, and countering the state-space explosion of Bayesian networks with an aggregation strategy taken from the multigrid context, together with vectorised GPU acceleration. Part of this work showcases orders-of-magnitude processing acceleration, particularly in high dimensions. We even achieve near real-time runtimes for previously impractical applications. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset concerning the 2020 US presidential election.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
Hantaviruses (HVs) are a group of zoonotic viruses that infect human beings primarily through the aerosol transmission of rodent excreta and urine. HVs are classified geographically into Old World HVs (OWHVs), found in Europe and Asia, and New World HVs (NWHVs), observed in the Americas. These different strains can cause severe hantavirus diseases with pronounced renal syndrome or severe cardiopulmonary distress. HVs can be extremely lethal, with NWHV infections reaching mortality rates of up to 40 %. HVs are known to generate epidemic outbreaks in many parts of the world, including Germany, which has seen periodic HV infections over the past decade. HVs have a trisegmented genome: the small segment (S) encodes the nucleocapsid protein (NP); the middle segment (M) encodes the glycoproteins (GPs) Gn and Gc, which upon independent expression form up to tetramers and primarily monomers and dimers, respectively; and the large segment (L) encodes the RNA-dependent RNA polymerase (RdRp). Interactions between these viral proteins are crucial for gaining mechanistic insights into HV virion development. Despite best efforts, these associations have yet to be quantified in living cells, which is required for developing mechanistic models of HV viral assembly. This dissertation focuses on three key questions pertaining to the initial steps of virion formation, which primarily involve the GPs and NP.
The research investigations in this work were completed using Fluorescence Correlation Spectroscopy (FCS) approaches. FCS is frequently used to assess the biophysical features of biomolecules, including protein concentration and diffusion dynamics, and circumvents the requirement of protein overexpression. In this thesis, FCS was primarily applied to evaluate protein multimerization at single-cell resolution.
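For reference, the standard FCS autocorrelation model for free three-dimensional diffusion, from which concentration and diffusion time are extracted, has the textbook form below; it is given for context only, as the specific fit models used in the thesis are not reproduced here.

```latex
% Standard FCS autocorrelation for free 3D diffusion of N molecules
% through a Gaussian focal volume, structure parameter S = z_0 / w_0:
G(\tau) = \frac{1}{N}
  \left(1 + \frac{\tau}{\tau_D}\right)^{-1}
  \left(1 + \frac{\tau}{S^{2}\,\tau_D}\right)^{-1/2}
% G(0) = 1/N yields the concentration; \tau_D is the diffusion time.
```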
The first question addressed which of the GP spike formation models proposed by Hepojoki et al. (2010) appropriately describes the evidence in living cells. A novel in cellulo assay was developed to evaluate the amounts of fluorescently labeled and unlabeled GPs upon co-expression. The results clearly showed that Gn and Gc initially form a heterodimeric Gn:Gc subunit, which then multimerizes with further Gn:Gc subunits to generate the final GP spike. Based on these interactions, models describing the formation of the GP complex (comprising multiple GP spike subunits) were additionally developed.
HV GP assembly primarily takes place in the Golgi apparatus (GA) of infected cells. Interestingly, NWHV GPs are hypothesized to assemble at the plasma membrane (PM). This led to the second research question of this thesis, for which a systematic comparison between OWHV and NWHV GPs was conducted to test this hypothesis. Surprisingly, GP localization at the PM was observed for both OWHV and NWHV GPs. Similar results were also obtained for OWHV and NWHV GP localization in the absence of cytoskeletal factors that regulate HV trafficking in cells.
The final question focused on quantifying the NP-GP interactions and understanding their influence on NP and GP multimerization. Gc multimers were detected in the presence of NP, complemented by localized regions of high NP-Gc interaction in the perinuclear region of living cells. The Gc-CT domain was shown to influence NP-Gc associations. Gn, on the other hand, formed up to tetrameric complexes independently of the presence of NP.
The results of this dissertation shed light on the initial steps of HV virion formation by quantifying homo- and heterotypic interactions involving NP and GPs, measurements that are otherwise very difficult to perform. Finally, the in cellulo methodologies implemented in this work can potentially be extended to probe other key interactions involved in HV virus assembly.
Biomolecules such as proteins and lipids have vital roles in numerous cellular functions, including biomolecule transport, protein function, cellular homeostasis, and biomembrane integrity. Traditional biochemistry methods do not provide precise information about cellular biomolecule distribution and behavior under native environmental conditions, since they are not transferable to live-cell samples. Consequently, this can lead to inaccuracies in quantifying biomolecule interactions due to potential complexities arising from the heterogeneity of native biomembranes. To overcome these limitations, minimally invasive microscopic techniques, such as fluorescence fluctuation spectroscopy (FFS) in combination with fluorescent proteins (FPs) and fluorescent lipid analogs, have been developed. FFS techniques and membrane property sensors enable the quantification of parameters including the concentration, dynamics, oligomerization, and interaction of biomolecules in live-cell samples.
In this work, several FFS approaches and membrane property sensors were implemented and employed to examine biological processes in diverse contexts. Multi-color scanning fluorescence fluctuation spectroscopy (sFCS) was used to examine protein oligomerization, protein-protein interactions (PPIs), and protein dynamics at the cellular plasma membrane (PM). Additionally, two-color number and brightness (N&B) analysis was extended with cross-correlation analysis in order to quantify hetero-interactions of very slowly moving proteins in the PM, which would not be accessible with sFCS due to strong initial photobleaching. Furthermore, two semi-automatic analysis pipelines were designed: a spectral Förster resonance energy transfer (FRET) analysis to study changes in membrane charge at the inner leaflet of the PM, and spectral generalized polarization (GP) imaging with spectral phasor analysis to monitor changes in membrane fluidity and order.
An important parameter for studying PPIs is the molecular brightness, which directly reports on oligomerization and can be extracted from FFS data. However, FPs often display complex photophysical transitions, including dark states. It is therefore crucial to characterize the dark states of FPs to ensure reliable oligomerization measurements. In this study, N&B and sFCS analyses were applied to determine the photophysical properties of novel green FPs under different conditions (i.e., excitation power and pH) in living cells. The results showed that the new FPs, mGreenLantern (mGL) and Gamillus, exhibited the highest molecular brightness at the cost of lower photostability. The well-established monomeric enhanced green fluorescent protein (mEGFP) remained the best option for investigating PPIs at lower pH, while mGL was best suited for neutral pH and Gamillus for high pH. These findings provide guidance for selecting an appropriate FP to quantify PPIs via FFS under different environmental conditions.
Next, several biophysical fluorescence microscopy approaches (i.e., sFCS, GP imaging, and membrane charge FRET) were employed to monitor changes in lipid-lipid packing in biomembranes in different biological contexts. Lipid metabolism in cancer cells is known to support rapid proliferation and metastasis. Targeting lipid synthesis or membrane integrity therefore holds immense promise as an anticancer strategy. However, the mechanism of action of the novel agent erufosine (EPC3) on membrane stability is not fully understood. The present work revealed that EPC3 reduces lipid packing, alters lipid composition, and increases membrane fluidity and dynamics, hence modifying lipid-lipid interactions. These effects on membrane integrity were likely triggered by modulations of lipid metabolism and membrane organization. In the case of influenza A virus (IAV) infection, the regulation of lipid metabolism is crucial for multiple steps of IAV replication and is related to the pathogenicity of IAV. Here, it is shown for the first time that IAV infection triggers a local enrichment of negatively charged lipids at the inner leaflet of the PM, which decreases membrane fluidity and dynamics and increases lipid packing at the assembly site in living cells. This suggests that IAV alters lipid-lipid interactions and organization at the PM. Overall, this work highlights the potential of biophysical techniques as a screening platform for studying membrane properties in living cells at the single-cell level.
Finally, this study addressed remaining questions about the early stage of IAV assembly. The recruitment of matrix protein 1 (M1) and its interaction with the viral surface proteins hemagglutinin (HA), neuraminidase (NA), and matrix protein 2 (M2) have been a subject of debate due to conflicting results. In this study, different FFS approaches were applied in transfected cells to investigate interactions between the IAV proteins themselves and with host factors at the PM. FFS measurements revealed that M2 interacts strongly with M1, leading to the translocation of M1 to the PM. This interaction likely takes place along the non-canonical pathway, as evidenced by the detection of an interaction between M2 and the host factor LC3-II, leading to the recruitment of LC3-II to the PM. Moreover, a weaker interaction was observed between HA and membrane-bound M1, and no interaction between NA and M1. Interestingly, higher oligomeric states of M1 were only detectable in infected cells. These results indicate that M2 initiates virion assembly by recruiting M1 to the PM, which may serve as a platform for further interactions with viral proteins and host factors.
This thesis is concerned with the phenomenon of quantifier scope ambiguities. This phenomenon has been researched extensively, both from a theoretical and from an empirical point of view. Nevertheless, there are still a number of under-researched topics in the field of quantifier scope, which will be the main focus of this thesis. I will take a closer look at three languages: English, German, and the Asante Twi dialect of Akan (Kwa, Niger-Congo). The goal is a better understanding of the phenomenon of quantifier scope, both within each language and from a cross-linguistic perspective. First, this thesis will provide a series of experiments that allow a direct cross-linguistic comparison between English and German, two languages about which specific claims have been made in the literature. I will also provide exploratory research on Asante Twi, where so far no work has been dedicated specifically to the study of quantifier scope. The work on Asante Twi will go beyond quantifier scope and also target the quantifier and determiner system in general. The question is not only whether particular scope readings are possible, but also which factors contribute to an increase or decrease in scope availability, and whether there are factors that block certain scope readings altogether. While some of the results confirm and thereby strengthen previous claims, other results contradict general assumptions in the literature. This is particularly the case for inverse readings in German and inverse readings across clause boundaries.
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparatively stable and stands out from this long-term cyclicality. Since the industrial revolution, however, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is affected by processes independent of the prevailing climatic conditions. In particular, in sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in the radiocarbon content of a box core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data was applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals the intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content of proxy data, which is needed to assess the natural climate variability during the Holocene.
Life on Earth is diverse and ranges from unicellular organisms to multicellular creatures like humans. Although there are theories about how these organisms might have evolved, we understand little about how ‘life’ started from molecules. Bottom-up synthetic biology aims to create minimal cells by combining different modules, such as compartmentalization, growth, division, and cellular communication.
All living cells have a membrane that separates them from the surrounding aqueous medium and helps to protect them. In addition, all eukaryotic cells have organelles that are enclosed by intracellular membranes. Each cellular membrane is primarily made of a lipid bilayer with membrane proteins. Lipids are amphiphilic molecules that assemble into molecular bilayers consisting of two leaflets. The hydrophobic chains of the lipids in the two leaflets face each other, and their hydrophilic headgroups face the aqueous surroundings. Giant unilamellar vesicles (GUVs) are model membrane systems that form large compartments, many micrometers in size, enclosed by a single lipid bilayer. The size of GUVs is comparable to the size of cells, making them good membrane models that can be studied using an optical microscope. However, after the initial preparation, GUV membranes lack membrane proteins, which have to be reconstituted into these membranes in subsequent preparation steps. Depending on the protein, it can be either attached via anchor lipids to one of the membrane leaflets or inserted into the lipid bilayer via its transmembrane domains.
The first step is to prepare the GUVs and then expose them to an exterior solution containing proteins. Various protocols have been developed for the initial preparation of GUVs. For the second step, the GUVs can be exposed to a bulk solution of protein or trapped in a microfluidic device and then supplied with the protein solution. To minimize the amount of solution and allow more precise measurements, I designed a microfluidic device that has a main channel and several dead-end side channels perpendicular to the main channel. The GUVs are trapped in the dead-end channels. This design exchanges the solution around the GUVs via diffusion from the main channel, thus shielding the GUVs from the flow within the main channel. The device has a small volume of just 2.5 μL, can be used without a pump, and can be combined with a confocal microscope, enabling uninterrupted imaging of the GUVs during the experiments. I used this device for most of the experiments on GUVs discussed in this thesis.
In the first project of the thesis, a lipid mixture doped with an anchor lipid was used that can bind to a histidine chain (referred to as His-tag(ged) or 6H) via the metal cation Ni2+. This method is widely used for the biofunctionalization of GUVs by attaching proteins without a transmembrane domain. Fluorescently labeled His-tags bound to a membrane can be observed in a confocal microscope. Using the same lipid mixture, I prepared GUVs with different protocols and investigated the membrane composition of the resulting GUVs by evaluating the amount of fluorescently labeled His-tagged molecules bound to their membranes. I used the microfluidic device described above to expose the outer leaflet of the vesicles to a constant concentration of the His-tagged molecules. Two fluorescent molecules with a His-tag were studied and compared: green fluorescent protein (6H-GFP) and fluorescein isothiocyanate (6H-FITC). Although the quantum yield in solution is similar for both molecules, the brightness of membrane-bound 6H-GFP is higher than that of membrane-bound 6H-FITC. This difference in brightness reveals that the fluorescence of 6H-FITC is quenched by the anchor lipid via the Ni2+ ion. Furthermore, my measurements showed that the fluorescence intensity of the membrane-bound His-tagged molecules depends on microenvironmental factors such as pH. For both 6H-GFP and 6H-FITC, the interaction with the membrane is quantified by evaluating the equilibrium dissociation constant: the membrane fluorescence is measured as a function of the fluorophores' molar concentration, and theoretical analysis of these data leads to equilibrium dissociation constants of (37.5 ± 7.5) nM for 6H-GFP and (18.5 ± 3.7) nM for 6H-FITC.
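A plausible form of the fitting relation implied here is a Langmuir-type one-site binding isotherm; this is a standard assumption for such titrations and not necessarily the exact model used in the thesis.

```latex
% Langmuir-type one-site binding isotherm: membrane fluorescence F as a
% function of the bulk fluorophore concentration c; fitting yields K_d:
F(c) = F_{\max}\,\frac{c}{K_d + c}
```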
The anchor lipid mentioned previously uses the metal cation Ni2+ to mediate the bond between the anchor lipid and the His-tag. The Ni2+ ion can be replaced by other transition metal ions, and studies have shown that Co3+ forms the strongest bonds with the His-tags attached to proteins. In those studies, strong oxidizing agents were used to oxidize the Co2+-mediated complex with the His-tagged protein to a Co3+-mediated complex, a procedure that puts the proteins at risk of being oxidized as well. In this thesis, the vesicles were first prepared with anchor lipids without any metal cation; Co3+ was then added to these anchor lipids, and finally the His-tagged protein was added to the GUVs to form the Co3+-mediated bond. This system was also established using the microfluidic device.
The different preparation procedures for GUVs usually lead to vesicles with a spherical morphology. On the other hand, many cell organelles have a more complex architecture with a non-spherical topology. One fascinating example is provided by the endoplasmic reticulum (ER), which is made of a continuous membrane and extends throughout the cell in the form of tubes and sheets. The tubes are connected by three-way junctions and form a tubular network of irregular polygons. The formation and maintenance of these reticular networks require membrane proteins that hydrolyze guanosine triphosphate (GTP). One of these membrane proteins is atlastin. In this thesis, I reconstituted the atlastin protein in GUV membranes using detergent-assisted reconstitution protocols to insert the proteins directly into the lipid bilayers.
This thesis focuses on protein reconstitution by binding His-tagged proteins to anchor lipids and by detergent-assisted insertion of proteins with transmembrane domains. It also provides the design of a microfluidic device that can be used in various experiments; one example is the evaluation of the equilibrium dissociation constant for membrane-protein interactions. The results of this thesis will help other researchers to understand the protocols for preparing GUVs, to reconstitute proteins in GUVs, and to perform experiments using the microfluidic device. This knowledge should be beneficial for the long-term goal of combining the different modules of synthetic biology to make a minimal cell.
The Lyman-𝛼 (Ly𝛼) line commonly assists in the detection of high-redshift galaxies, the so-called Lyman-alpha emitters (LAEs). LAEs are useful tools to study the baryonic matter distribution of the high-redshift universe. Exploring their spatial distribution not only reveals the large-scale structure of the universe at early epochs, but it also provides an insight into the early formation and evolution of the galaxies we observe today. Because dark matter halos (DMHs) serve as sites of galaxy formation, the LAE distribution also traces that of the underlying dark matter. However, the details of this relation and their co-evolution over time remain unclear. Moreover, theoretical studies predict that the spatial distribution of LAEs also impacts their own circumgalactic medium (CGM) by influencing their extended Ly𝛼 gaseous halos (LAHs), whose origin is still under investigation. In this thesis, I make several contributions to improve the knowledge on these fields using samples of LAEs observed with the Multi Unit Spectroscopic Explorer (MUSE) at redshifts of 3 < 𝑧 < 6.
Starch is a biopolymer for which, despite its simple composition, the precise mechanisms behind its formation and regulation have been challenging to understand. Several approaches and bioanalytical tools can be used to expand our knowledge of the different components involved in starch metabolism. In this research, a comprehensive analysis was conducted using potato plants (Solanum tuberosum L. cv. Desiree) as a model, targeting two of the main groups of molecules involved in this process: proteins, as effectors/regulators of starch metabolism, and maltodextrins, as starch components and degradation products. On the one hand, proteins physically interacting with potato starch were isolated and analyzed through mass spectrometry and western blotting for their identification. In addition, starch-interacting proteins were examined in potato tubers from transgenic plants with antisense inhibition of starch-related enzymes and in tubers stored under variable environmental conditions. Most of the proteins recovered from the starch granules corresponded to previously described proteins with a specific role in the starch metabolic pathway. Another set of proteins, found to interact weakly with starch, could be grouped as protease inhibitors. Variations in the protein profile obtained after electrophoretic separation became clear when tubers were stored at different temperatures, indicating a differential expression of proteins in response to changing environmental conditions.
On the other hand, since maltodextrin metabolism is thought to be involved in both starch initiation and degradation, the soluble maltooligosaccharide content of potato tubers was analyzed under diverse experimental variables. For this, tuber disc samples from wild-type plants and from transgenic lines strongly repressing either the plastidial or the cytosolic form of the α-glucan phosphorylase or phosphoglucomutase were incubated with glucose, glucose-6-phosphate, and glucose-1-phosphate solutions to evaluate the influence of these enzymes on the conversion of the carbon sources into soluble maltodextrins, in comparison to wild-type samples. Relative maltodextrin amounts determined by capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) revealed that tuber discs could immediately take up glucose-1-phosphate and use it to produce maltooligosaccharides with a degree of polymerization of up to 30 (DP30), in contrast to transgenic tubers with strong repression of the plastidial glucan phosphorylase. The results of the maltodextrin analysis support previous indications that a specific transporter for glucose-1-phosphate may exist in both the plant cell and the plastidial membranes, allowing a glucose-6-phosphate-independent transport. Furthermore, they confirm that the plastidial glucan phosphorylase is responsible for producing longer maltooligosaccharides in the plastids by catalyzing a glucan polymerization reaction when glucose-1-phosphate is available. All these findings contribute to a better understanding of the role of the plastidial glucan phosphorylase as a key enzyme directly involved in the synthesis and degradation of glucans and its implications for starch metabolism.
Laser cutting is a fast and precise fabrication process. This makes laser cutting a powerful process in custom industrial production. Since the patents on the original technology started to expire, a growing community of tech-enthusiasts embraced the technology and started sharing the models they fabricate online. Surprisingly, the shared models appear to largely be one-offs (e.g., they proudly showcase what a single person can make in one afternoon). For laser cutting to become a relevant mainstream phenomenon (as opposed to the current tech enthusiasts and industry users), it is crucial to enable users to reproduce models made by more experienced modelers, and to build on the work of others instead of creating one-offs.
We create a technological basis that allows users to build on the work of others—a progression that is currently held back by the use of exchange formats that disregard mechanical differences between machines and therefore overlook implications with respect to how well parts fit together mechanically (aka engineering fit).
For the field to progress, we need a machine-independent sharing infrastructure.
In this thesis, we outline three approaches that together get us closer to this:
(1) 2D cutting plans that are tolerant to machine variations. Our initial take is a minimally invasive approach: replacing machine-specific elements in cutting plans with more tolerant elements using mechanical hacks like springs and wedges. The resulting models fabricate on any consumer laser cutter and in a range of materials.
(2) sharing models in 3D. To allow building on the work of others, we build a 3D modeling environment for laser cutting (kyub). After users design a model, they export their 3D models to 2D cutting plans optimized for the machine and material at hand. We extend this volumetric environment with tools to edit individual plates, allowing users to leverage the efficiency of volumetric editing while retaining control over the most detailed elements in laser cutting (plates).
(3) converting legacy 2D cutting plans to 3D models. To handle legacy models, we build software to interactively reconstruct 3D models from 2D cutting plans. This allows users to reuse the models in more productive ways. We revisit this by automating the assembly process for a large subset of models.
The above-mentioned software composes a larger system (kyub, 140,000 lines of code). This system integration enables the push towards actual use, which we demonstrate through a range of workshops where users build complex models such as fully functional guitars. By simplifying sharing and re-use and the resulting increase in model complexity, this line of work forms a small step to enable personal fabrication to scale past the maker phenomenon, towards a mainstream phenomenon—the same way that other fields, such as print (postscript) and ultimately computing itself (portable programming languages, etc.) reached mass adoption.
Traditional ways of reducing flood risk have encountered limitations in a climate-changing and rapidly urbanizing world. For instance, maintaining a consistent level of security has demanded massive investment, while flood protection infrastructure has increased the flood exposure of people and property by creating a false sense of security. Against this background, nature-based solutions (NBS) have gained popularity as a sustainable, alternative way of dealing with diverse societal challenges such as climate change and biodiversity loss. In particular, their ability to reduce flood risks while also offering ecological benefits has recently received global attention. The diverse co-benefits of NBS that favor both humans and nature are viewed as promising a wide endorsement of NBS. However, people's perceptions of NBS are not always positive. Local resistance to NBS projects, as well as the unwillingness of decision-makers and practitioners to adopt NBS, has been identified as a bottleneck to the successful realization and mainstreaming of NBS. In this regard, there is a growing need to investigate people's perceptions of NBS. Current research lacks an integrative perspective on the attitudinal and contextual factors that guide perceptions of NBS; not only is empirical evidence scarce, but the few existing findings are conflicting and lack underlying theories. This leads to the overarching research question of this dissertation: "What shapes people's perceptions of NBS in the context of flooding?" The dissertation answers the following sub-questions in the three papers that make up this dissertation:
1. What are the topics reflected in the previous literature influencing perceptions of NBS as a means to reduce hydro-meteorological risks? (Paper I)
2. What are the stimulating and hampering attitudinal and contextual factors for mainstreaming NBS for flood risk management? How are NBS conceptualized? (Paper II)
3. How are public attitudes toward NBS projects shaped? How do risk- and place-related factors shape individual attitudes toward NBS? (Paper III)
This dissertation follows an integrative approach that considers "place" and "risk", as well as the surrounding context, by analyzing attitudinal (i.e., individual) and contextual (i.e., systemic) factors. "Place" is mainly concerned with affective elements (e.g., bonds to locality and the natural environment), whereas "risk" is related to cognitive elements (e.g., threat appraisal). The surrounding context provides systemic drivers and barriers that may interfere with the influence of place and risk on perceptions of NBS. To address the research questions empirically, the current state of knowledge about people's perceptions of NBS for flood risks was investigated in a systematic review (Paper I). Based on these insights, a case study of South Korea was used to identify key contextual and attitudinal factors for mainstreaming NBS through the lens of experts (Paper II). Lastly, a citizen survey was conducted to investigate the relationships between the concepts discussed in Papers I and II using structural equation modeling, focusing on the core concepts of risk and place (Paper III).
As a result, Paper I identified the key topics relating to people's perceptions, including the perceived value of co-benefits, the perceived effectiveness of risk reduction, the participation of stakeholders, socio-economic and place-specific conditions, environmental attitudes, and the uncertainty of NBS. Paper II confirmed Paper I's findings regarding attitudinal factors. In addition, several contextual hampering or stimulating factors were found to be similar to those of other emerging technologies (i.e., path dependence and a lack of operational and systemic capacity). Above all, a distinctive feature of the NBS context, at least in the South Korean case, is the politicization of NBS, which can lead to a polarization of ideas and undermine the decision-making process. Finally, Paper III provides a framework built around the core topics (i.e., place and risk) identified as critical in Papers I and II. This place-based risk appraisal model (PRAM) connects people at risk with the places where hazards (i.e., floods) and interventions (i.e., NBS) take place. The empirical analysis shows that, among the place-related variables, nature bonding was a positive predictor of the perceived risk-reduction effectiveness of NBS, and place identity was a negative predictor of a supportive attitude. Among the risk-related variables, threat appraisal had a negative effect on perceived risk-reduction effectiveness and supportive attitude, while well-communicated information, trust in flood risk management, and perceived co-benefits were positive predictors. This dissertation shows that the place and risk attributes of NBS shape people's perceptions of NBS. To optimize NBS implementation, it is necessary to consider the meanings and values held in place before project implementation and how these attributes interact with individual and/or community risk profiles and other contextual factors. Given the increasing need to use NBS to lower flood risks, these results offer important suggestions for future NBS project strategy and NBS governance.
Learning the causal structures from observational data is an omnipresent challenge in data science. The amount of observational data available to Causal Structure Learning (CSL) algorithms is increasing as data is collected at high frequency from many data sources nowadays. While processing more data generally yields higher accuracy in CSL, the concomitant increase in the runtime of CSL algorithms hinders their widespread adoption in practice. CSL is a parallelizable problem. Existing parallel CSL algorithms address execution on multi-core Central Processing Units (CPUs) with dozens of compute cores. However, modern computing systems are often heterogeneous and equipped with Graphics Processing Units (GPUs) to accelerate computations. Typically, these GPUs provide several thousand compute cores for massively parallel data processing.
To shorten the runtime of CSL algorithms, we design efficient execution strategies that leverage the parallel processing power of GPUs. Particularly, we derive GPU-accelerated variants of a well-known constraint-based CSL method, the PC algorithm, as it allows choosing a statistical Conditional Independence test (CI test) appropriate to the observational data characteristics.
Our two main contributions are: (1) to reflect differences in the CI tests, we design three GPU-based variants of the PC algorithm tailored to CI tests that handle data with the following characteristics. We develop one variant for data assuming the Gaussian distribution model, one for discrete data, and another for mixed discrete-continuous data and data with non-linear relationships. Each variant is optimized for the appropriate CI test leveraging GPU hardware properties, such as shared or thread-local memory. Our GPU-accelerated variants outperform state-of-the-art parallel CPU-based algorithms by factors of up to 93.4× for data assuming the Gaussian distribution model, up to 54.3× for discrete data, up to 240× for continuous data with non-linear relationships and up to 655× for mixed discrete-continuous data. However, the proposed GPU-based variants are limited to datasets that fit into a single GPU’s memory. (2) To overcome this shortcoming, we develop approaches to scale our GPU-based variants beyond a single GPU’s memory capacity. For example, we design an out-of-core GPU variant that employs explicit memory management to process arbitrary-sized datasets. Runtime measurements on a large gene expression dataset reveal that our out-of-core GPU variant is 364 times faster than a parallel CPU-based CSL algorithm. Overall, our proposed GPU-accelerated variants speed up CSL in numerous settings to foster CSL’s adoption in practice and research.
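To make the role of the pluggable CI test concrete, the following is a minimal, sequential Python sketch of the PC algorithm's skeleton phase with a Fisher-z partial-correlation test for Gaussian data; it illustrates the general technique only, and the names and structure are ours, not the thesis's GPU implementation.

```python
# Minimal sequential sketch of the PC algorithm's skeleton phase with a
# pluggable CI test (here: Fisher-z partial correlation for Gaussian data).
# Illustrative only; not the thesis's GPU code.
from itertools import combinations

import numpy as np
from scipy import stats

def gaussian_ci_test(data, i, j, cond, alpha=0.01):
    """Return True if x_i and x_j are independent given x_cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))     # Fisher z-transform
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2 * stats.norm.sf(stat) > alpha              # p-value > alpha

def pc_skeleton(data, ci_test=gaussian_ci_test):
    """Remove edges from the complete graph, level by level."""
    n = data.shape[1]
    adj = {i: set(range(n)) - {i} for i in range(n)}
    level = 0
    while any(len(adj[i]) - 1 >= level for i in adj):
        for i in range(n):
            for j in list(adj[i]):
                if j not in adj[i]:
                    continue
                # test all conditioning sets of the current size
                for cond in combinations(sorted(adj[i] - {j}), level):
                    if ci_test(data, i, j, cond):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return adj
```

The CI tests within one level are mutually independent, which is what a GPU variant can exploit by dispatching them to thousands of threads.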
Lanthanide-based ceria nanomaterials are of practical importance because of their redox properties, which are useful in technology and the life sciences. This PhD thesis examined various properties of Ln3+-doped ceria nanomaterials and their potential for catalytic and bio-applications. Ce1-xGdxO2-y: Eu3+, gadolinium-doped ceria (GDC) (0 ≤ x ≤ 0.4) nanoparticles were synthesized by flame spray pyrolysis (FSP) and studied, followed by 15 % CexZr1-xO2-y: Eu3+|YSZ (0 ≤ x ≤ 1) nanocomposites. Furthermore, Ce1-xYbxO2-y (0.004 ≤ x ≤ 0.22) nanoparticles were synthesized by thermal decomposition and characterized. Finally, CeO2-y: Eu3+ nanoparticles were synthesized by a microemulsion method, biofunctionalized, and characterized. These studies present a novel approach to structurally elucidating ceria-based nanomaterials by means of Eu3+ and Yb3+ spectroscopy and by processing the spectroscopic data with the multi-way decomposition method PARAFAC. Data sets spanning three variables (excitation wavelength, emission wavelength, and time) were used to perform the deconvolution of the spectra.
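As an illustration of this analysis strategy, a PARAFAC decomposition of such a three-way data cube can be sketched with the tensorly library; the file name, rank, and settings below are placeholder assumptions, not the thesis's actual pipeline.

```python
# Hypothetical sketch of a PARAFAC deconvolution of a three-way luminescence
# data cube (excitation x emission x time) with the tensorly library.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

data = np.load("luminescence_cube.npy")   # shape: (n_exc, n_em, n_time)
tensor = tl.tensor(data)

# rank = number of emitting species assumed in the sample; tensorly also
# offers non_negative_parafac, which is natural for emission intensities
weights, factors = parafac(tensor, rank=2, n_iter_max=500)
excitation_profiles, emission_spectra, decay_curves = factors
# each column of emission_spectra is one species' emission spectrum;
# the matching column of decay_curves is its luminescence decay kinetics
```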
GDC nanoparticles from FSP are nano-sized, of roughly cubic shape, and have a cubic crystal structure (Fm3̅m). Raman data revealed four vibrational modes for the Gd3+-containing samples, whereas CeO2-y: Eu3+ displays only two. Room-temperature, time-resolved emission spectra recorded at λexcitation = 464 nm show that Gd3+ doping significantly alters the emission spectra compared to pure ceria. PARAFAC analysis of the pure ceria samples reveals two species: a high-symmetry species and a low-symmetry species. The GDC samples yield two low-symmetry spectra in the same experiment. High-resolution emission spectra recorded at 4 K after probing the 5D0-7F0 transition revealed additional variation among the low-symmetry Eu3+ sites in pure ceria and GDC. The data for the Gd3+-containing samples indicate that the average charge density around the Eu3+ ions in the lattice is inversely related to the Gd3+ and oxygen vacancy concentrations.
The crystallites of the Yb3+-ceria nanomaterials annealed at 773 K and 1273 K are nano-sized and have a cubic fluorite structure with four Raman vibrational modes. Elemental maps clearly show that cluster formation occurs in the 773 K-annealed samples with high Yb3+ concentrations of 15 mol % and above in the ceria lattice. These clusters are destroyed by annealing at 1273 K. The emission spectra observed at room temperature and at 4 K for the Ce1-xYbxO2-y samples show a manifold that corresponds to the 2F5/2-2F7/2 transition of the Yb3+ ions. Small shifts in the Stark splitting pattern are induced by variations of the crystal field, which depend on where the Yb3+ ions are located in the crystal lattice. For samples with high Yb3+ concentrations, the 2F5/2-2F7/2 transition is also observed in the Stark splitting pattern, but the spectra consist of two broad, background-dominated peaks. Annealing the nanomaterials at 1273 K for 2 h changes the spectral signature as new peaks emerge. The PARAFAC deconvolution yielded the luminescence decay kinetics and the accompanying luminescence spectra of three species for each of the low-doped Yb3+-ceria samples annealed at 773 K and of one species for the 1273 K-annealed samples. By contrast, the ceria samples with high Yb3+ concentrations annealed at either temperature yielded one species with shorter decay times than the low-doped Yb3+-ceria samples.
By calcining the nanocomposites at two high temperatures, the evolution of the emission patterns from specific Eu3+ lattice sites was followed to reveal structural changes in the nanocomposites. The spectroscopy results effectively complemented the data obtained from conventional techniques. Annealing the samples at 773 K resulted in amorphous, disordered domains, whereas the TLS of the 1273 K nanocomposites reveals two distinct sites, with the most red-shifted Eu3+ species originating from pure Eu3+-doped ZrO2 on the YSZ support.
Finally, for Eu3+-doped ceria, successful transfer from the hydrophobic to the water phase and subsequent biocompatibility were achieved using ssDNA. PARAFAC analysis of the Eu3+ in nanoparticles dispersed in toluene and water revealed one Eu3+ species, with slightly different surface properties of the nanoparticles as far as the luminescence kinetics and solvent environments were concerned. Several functionalized nanoparticles conjugated onto origami triangles after hybridization were visualized by atomic force microscopy (AFM). Taken together, Eu3+ and Yb3+ spectroscopy was used to monitor the structural changes and to determine the feasibility of the nanoparticle transfer into water. PARAFAC proves to be a powerful tool for analyzing lanthanide spectra in crystalline solids and in solutions, which are characterized by numerous Stark transitions and where measurements usually yield a superposition of different emission contributions in any given spectrum.
Extreme weather and climate events are among the greatest dangers for present-day society. It is therefore important to provide reliable statements on what changes in extreme events can be expected under future global climate change. However, the projected overall response to future climate change generally results from a complex interplay of individual physical mechanisms originating within the different climate subsystems. Hence, a profound understanding of these individual contributions is required to provide meaningful assessments of future changes in extreme events. One aspect of climate change is the recently observed phenomenon of Arctic Amplification and the related dramatic Arctic sea ice decline, which is expected to continue over the next decades. The question to what extent Arctic sea ice loss can affect atmospheric dynamics and extreme events over the mid-latitudes has received much attention in recent years and remains a highly debated topic.
In this respect, the objective of this thesis is to contribute to a better understanding of the impact of future Arctic sea ice retreat on European temperature extremes and large-scale atmospheric dynamics.
The outcomes are based on model data from the atmospheric general circulation model ECHAM6. Two different sea ice sensitivity simulations from the Polar Amplification Intercomparison Project are employed and contrasted with a present-day reference experiment: one experiment with prescribed future sea ice loss over the entire Arctic, and another with sea ice reductions prescribed only locally over the Barents-Kara Sea.
The first part of the thesis focuses on how future Arctic sea ice reductions affect large-scale atmospheric dynamics over the Northern Hemisphere in terms of changes in the occurrence frequency of five preferred Euro-Atlantic circulation regimes. A comparison with circulation regimes computed from ERA5 shows that ECHAM6 is able to simulate the regime structures realistically. Both ECHAM6 sea ice sensitivity experiments exhibit similar regime frequency changes. Consistent with tendencies found in ERA5, a more frequent occurrence of a Scandinavian blocking pattern in midwinter is, for instance, detected under future sea ice conditions in the sensitivity experiments. Changes in the occurrence frequencies of circulation regimes in the summer season, however, are barely detectable.
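Circulation regimes of this kind are commonly obtained by k-means clustering of EOF-filtered geopotential height anomalies. The schematic Python sketch below illustrates that generic recipe; the field choice, EOF truncation, and file name are placeholder assumptions, not the thesis's configuration.

```python
# Schematic sketch of circulation-regime classification via k-means on
# EOF-filtered 500-hPa geopotential height anomalies (a common approach).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# z500_anom: (n_days, n_gridpoints) daily anomalies over the Euro-Atlantic sector
z500_anom = np.load("z500_anomalies.npy")              # placeholder file name

pcs = PCA(n_components=20).fit_transform(z500_anom)    # retain leading EOFs
labels = KMeans(n_clusters=5, n_init=50, random_state=0).fit_predict(pcs)

# occurrence frequency of each regime, e.g. to compare between experiments
freq = np.bincount(labels, minlength=5) / labels.size
```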
After identifying suitable regime storylines for the occurrence of European temperature extremes in winter, the previously detected regime frequency changes are used to quantify dynamically and thermodynamically driven contributions to sea ice-induced changes in European winter temperature extremes.
It is shown, for instance, how the preferred occurrence of a Scandinavian blocking regime under low sea ice conditions dynamically contributes to more frequent midwinter cold extremes over Central Europe. In addition, a reduced occurrence frequency of an Atlantic trough regime is linked to fewer winter warm extremes over Central Europe. Furthermore, it is demonstrated how the overall thermodynamic warming effect of sea ice loss can result in less (more) frequent winter cold (warm) extremes, and consequently counteracts the dynamically induced changes.
Compared to winter season, circulation regimes in summer are less suitable as storylines for the occurrence of summer heat extremes.
Therefore, an approach based on circulation analogues is employed to quantify thermodynamically and dynamically driven contributions to sea ice-induced changes in summer heat extremes over three European sectors. Reduced occurrences of blocking over Western Russia are detected in the ECHAM6 sea ice sensitivity experiments; however, attributing changes in summer heat extremes to dynamically and thermodynamically induced contributions remains rather challenging.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
In this thesis, we follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems remain well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from a Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators, each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. We then use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of these Lorentzian boundary conditions, we find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself is then proven using classical tools, including energy estimates and approximation by solutions of regularized problems.
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we discuss two examples of boundary conditions in our setting: the analogue of the Atiyah–Patodi–Singer conditions and the chirality conditions. To do this, we take a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
In the face of the environmental crisis, new technologies are needed to sustain our society. In this context, this thesis describes the properties and applications of carbon-based sustainable materials. In particular, it reports the synthesis and characterization of a wide set of porous carbonaceous materials with high nitrogen content obtained from nucleobases. These materials are used as cathodes for Li-ion capacitors, with a major focus on cathode preparation, highlighting the oxidation resistance of nucleobase-derived materials. Furthermore, their catalytic properties for acid/base and redox reactions are described, pointing to the role of nitrogen speciation on their surfaces. Finally, these materials are used as supports for highly dispersed nickel, activating them for carbon dioxide electroreduction.
Late-type stars are by far the most frequent stars in the universe and of fundamental interest to various fields of astronomy – most notably to Galactic archaeology and exoplanet research. However, such stars barely change during their main sequence lifetime; their temperature, luminosity, or chemical composition evolve only very slowly over the course of billions of years. As such, it is difficult to obtain the age of such a star, especially when it is isolated and no other indications (like cluster association) can be used. Gyrochronology offers a way to overcome this problem.
Stars, like all other objects in the universe, rotate, and the rate at which a star rotates affects many aspects of its appearance and evolution. Gyrochronology leverages the observed rotation rate of a late-type main sequence star and its systematic evolution to estimate the star’s age. Unlike the parameters mentioned above, the rotation rate of a main sequence star changes drastically throughout its main sequence lifetime: stars spin down. The youngest stars rotate once every few hours, whereas much older stars rotate only about once a month or, in the case of some late M-stars, once in a hundred days. Since this spindown is systematic (with an additional mass dependence), it gave rise to the idea of using the observed rotation rate of a star (and its mass, or a suitable proxy thereof) to estimate the star’s age. This has been explored widely in young stellar open clusters but remains essentially unconstrained for stars older than the Sun, and for K and M stars older than 1 Gyr.
This thesis focuses on the continued exploration of the spindown behavior to assess whether gyrochronology remains applicable to old stars, whether it is universal for late-type main sequence stars (including field stars), and to provide calibration mileposts for spindown models. To accomplish this, I have analyzed data from the Kepler space telescope for the open clusters Ruprecht 147 (2.7 Gyr old) and M 67 (4 Gyr old). Time series photometry data (light curves)
were obtained for both clusters during Kepler’s K2 mission. However, due to technical limitations and telescope malfunctions, extracting usable data from the K2 mission to identify (especially long) rotation periods requires extensive data preparation.
For Ruprecht 147, I compiled a list of about 300 cluster members from the literature and adopted preprocessed light curves from the Kepler archive where available. These had been cleaned of the gravest data artifacts but still contained systematics. After correcting for these artifacts, I was able to identify rotation periods in 31 of them.
For M 67, more effort was required. My work on Ruprecht 147 had shown the limitations imposed by the preselection of Kepler targets. Therefore, I worked directly on the time series of full frame images and performed photometry at a much higher spatial resolution to obtain data for as many stars as possible. This also meant that I had to deal with the ubiquitous artifacts in Kepler data. To this end, I devised a method that correlates the artificial flux variations with the ongoing drift of the telescope pointing in order to remove them. This process was a great success, and I was able to create light curves whose quality matches and even exceeds that of the light curves created by the Kepler mission, all while operating at higher spatial resolution and processing fainter stars. Ultimately, I identified signs of periodic variability in the created light curves of 31 and 47 stars in Ruprecht 147 and M 67, respectively. My data connect well to bluer stars of clusters of the same age and extend for the first time to stars redder than early K and older than 1 Gyr. The cluster data show a clear flattening in the distribution for Ruprecht 147 and even a downturn for M 67, resulting in a somewhat sinusoidal shape. With that, I have shown that the systematic spindown of stars continues at least until 4 Gyr and that stars continue to live on a single surface in age-rotation period-mass space, which allows gyrochronology to be used at least up to that age. However, the shape of the spindown, as exemplified by the newly discovered sinusoidal shape of the cluster sequence, deviates strongly from expectations.
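The two processing steps described above can be illustrated schematically: decorrelating the flux against the pointing drift, then searching for a periodicity, for example with a Lomb-Scargle periodogram. This is a simplified stand-in for the actual method, with placeholder file and variable names.

```python
# Illustrative sketch: remove pointing-correlated systematics from a K2
# light curve, then search for a rotation period with Lomb-Scargle.
import numpy as np
from astropy.timeseries import LombScargle

time, flux, xc, yc = np.loadtxt("target.txt", unpack=True)  # centroids xc, yc

# model the artificial flux variation as a low-order polynomial in the
# centroid position and divide it out (a crude stand-in for the full method)
design = np.column_stack([np.ones_like(xc), xc, yc, xc**2, yc**2, xc * yc])
coeff, *_ = np.linalg.lstsq(design, flux, rcond=None)
detrended = flux / (design @ coeff)

freq, power = LombScargle(time, detrended).autopower(
    minimum_frequency=1 / 50.0,   # periods up to ~50 days
    maximum_frequency=1 / 0.5)    # down to half a day
print(f"candidate rotation period: {1 / freq[np.argmax(power)]:.2f} d")
```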
I then compiled an extensive sample of rotation data in open clusters, very much including my own work, and used the resulting cluster skeleton (with each cluster forming a rib in color-rotation period-mass space) to investigate whether field stars follow the same spindown as cluster stars. For the field stars, I used wide binaries, which, with their shared origin and coevality, are in a sense the smallest possible open clusters. I devised an empirical method to evaluate the consistency between the rotation rates of the wide binary components and found that the vast majority of them are in fact consistent with what is observed in open clusters. This leads me to conclude that gyrochronology, calibrated on open clusters, can be applied to determine the ages of field stars.
Neue Wege ins Lehramt
(2023)
According to the latest projections by Klemm (2022), Germany will be short of around 127,000 teachers by 2035. This large gap can no longer be covered by teachers who have completed a traditional teacher education degree alone. In response to the teacher shortage, schools in Germany are therefore increasingly hiring people without a traditional teacher education degree in order to secure the provision of instruction (KMK, 2022). Before entering the school service, non-traditionally trained teachers usually complete an alternative qualification program. These qualification programs, however, are highly heterogeneous in their duration and content and presuppose different entry requirements for applicants (Driesner & Arndt, 2020). As a rule, they are considerably shorter than traditional teacher education programs at universities and colleges, so as to allow a quick entry into the school service. The shorter qualification thus comes with fewer learning and teaching opportunities than would be found in a traditional teacher education program. Consequently, it can be assumed that non-traditionally trained teachers are less well prepared for the demands of the teaching profession.
This assumption is also frequently voiced in public, and criticism of alternative qualification programs is considerable. In 2019, for example, the president of the German Teachers’ Association, Heinz-Peter Meidinger, told the newspaper “Die Welt” that the inadequate qualification of career changers into teaching (Quereinsteiger) was “a crime against the children” (Die Welt, 2019). However, research in the German-speaking countries that could provide robust evidence in support of this criticism is still in its infancy. Initial studies generally point to few differences between traditionally and non-traditionally trained teachers (Kleickmann & Anders, 2011; Kunina-Habenicht et al., 2013; Oettinghaus, Lamprecht & Korneck, 2014). Studies that do find differences locate them primarily in the area of pedagogical knowledge, to the disadvantage of non-traditionally trained teachers. The question of further differences, for example in teaching quality or occupational well-being, has so far remained unanswered for the German context.
The present work aims to close part of this research gap. In three sub-studies, it addresses questions of differences between traditionally and non-traditionally trained teachers with regard to their professional competence, career choice motivation, well-being, and teaching quality. The overarching research question is examined against the background of the theoretical model of the determinants and consequences of professional competence (Kunter, Kleickmann, Klusmann & Richter, 2011). This model is also used for the theoretical review of the existing national and international research on differences between traditionally and non-traditionally trained teachers.
Sub-study I first examines differences in professional competence between traditionally and non-traditionally trained teachers. Following the competence model of Baumert and Kunter (2006), the two groups are compared on the four aspects of professional competence: professional knowledge, beliefs, motivational orientations, and self-regulatory skills. The focus is on traditionally trained pre-service teachers and so-called career changers (Quereinsteiger) during the preparatory service (Vorbereitungsdienst). Differences were analyzed in a secondary analysis of data from the COACTIV-R project using multivariate analyses of covariance.
Sub-study II examines both determinants and consequences of professional competence. On the side of the determinants, differences in career choice motivation between teachers with and without a traditional teacher education degree are investigated. Furthermore, differences in occupational well-being (emotional exhaustion, enthusiasm) and in the intention to remain in the profession are analyzed as consequences of professional competence. The data analyzed came from the 2019 pilot study for the IQB-Bildungstrend of the Institute for Educational Quality Improvement (IQB). Differences between traditionally and non-traditionally trained teachers were again estimated using multivariate analyses of covariance.
Finally, Sub-study III examined differences in teaching quality between traditionally and non-traditionally trained teachers as a consequence of professional competence. For this purpose, data from the IQB-Bildungstrend 2018 were used in a secondary analysis employing doubly latent multilevel models. Differences were examined in the domains of absence of classroom disruptions, cognitive activation, and student support.
The final chapter summarizes and discusses the central findings of the three sub-studies. The results indicate that traditionally and non-traditionally trained teachers differ significantly in only a few of the aspects examined. Non-traditionally trained teachers have less pedagogical knowledge, have better self-regulatory skills, and do not differ from traditionally trained teachers in their career choice motives, their well-being, or their teaching quality. These results open the door to a discussion of the relevance of the traditional teacher education degree and provide a basis for implications for further research and for educational policy. Finally, the limitations of the studies are discussed.
This cumulative dissertation consists of three full empirical investigations based on three separate collections of data dealing with the phenomenon of negotiations in audit processes, which are combined in two research articles. In the first study, I examine internal auditors’ views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs in a variety of audit types and audit issues. Internal auditors choose negotiations to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents, which is also an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and claim value (e.g., positional commitments and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions.
The second study consists of two experiments that examine the effects of tax auditors’ emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors’ emotions strategically and do not respond affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers’ post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that auditors who avoid emotion expressions do not obtain fewer concessions from taxpayers than auditors who express anger. However, avoiding emotion expressions leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers’ post-audit noncompliance.
The present thesis focuses on the synthesis of nanostructured iron-based compounds by using β-FeOOH nanospindles and poly(ionic liquid)s (PILs) vesicles as hard and soft templates, respectively, to suppress the shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. Three types of composites with different nanostructures (mesoporous nanospindle, yolk-shell nanospindle, and nanocapsule) have been synthesized and applied as sulfur host material for Li-S batteries. Their interactions with LiPSs and effects on the electrochemical performance of Li-S batteries have been systematically studied.
In the first part of the thesis, carbon-coated mesoporous Fe3O4 (C@M-Fe3O4) nanospindles have been synthesized to suppress the shuttle effect of LiPSs. First, β-FeOOH nanospindles have been synthesized via the hydrolysis of iron(III) chloride in aqueous solution; after silica coating, mesoporous Fe2O3 (M-Fe2O3) has been obtained inside the confining silica layer through pyrolysis of β-FeOOH. After the removal of the silica layer, electron tomography (ET) has been applied to reconstruct the 3D structure of the M-Fe2O3 nanospindles. After coating with a thin layer of polydopamine (PDA) as carbon source, the PDA-coated M-Fe2O3 particles have been calcined to synthesize C@M-Fe3O4 nanospindles. With the chemisorption of Fe3O4 and the confinement of the mesoporous structure to anchor LiPSs, the composite C@M-Fe3O4/S electrode delivers a remaining capacity of 507.7 mAh g-1 at 1 C after 600 cycles.
In the second part of the thesis, a series of iron-based compounds (Fe3O4, FeS2, and FeS) with the same yolk-shell nanospindle morphology have been synthesized, which allows for the direct comparison of the effects of compositions on the electrochemical performance of Li-S batteries. The Fe3O4-carbon yolk-shell nanospindles have been synthesized by using the β-FeOOH nanospindles as hard template. Afterwards, Fe3O4-carbon yolk-shell nanospindles have been used as precursors to obtain iron sulfides (FeS and FeS2)-carbon yolk-shell nanospindles through sulfidation at different temperatures. Using the three types of yolk-shell nanospindles as sulfur host, the effects of compositions on interactions with LiPSs and electrochemical performance in Li-S batteries have been systematically investigated and compared. Benefiting from the chemisorption and catalytic effect of FeS2 particles and the physical confinement of the carbon shell, the FeS2-C/S electrode exhibits the best electrochemical performance with an initial specific discharge capacity of 877.6 mAh g-1 at 0.5 C and a retention ratio of 86.7% after 350 cycles.
In the third part, PIL vesicles have been used as a soft template to synthesize carbon nanocapsules embedded with iron nitride particles to immobilize and catalyze LiPSs in Li-S batteries. First, 3-n-decyl-1-vinylimidazolium bromide has been used as the monomer to synthesize PIL nanovesicles by free radical polymerization. Assisted by a PDA coating route and ion exchange, PIL nanovesicles have been successfully applied as a soft template in morphology-maintaining carbonization to prepare carbon nanocapsules embedded with iron nitride nanoparticles (FexN@C). The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electrical conductivity and strong chemical binding to LiPSs. The constructed FexN@C/S cathode demonstrates a high initial discharge capacity of 1085.0 mAh g-1 at 0.5 C with a remaining value of 930.0 mAh g-1 after 200 cycles.
The results in the present thesis demonstrate the facile synthetic routes of nanostructured iron-based compounds with controllable morphologies and compositions using soft and hard colloidal templates, which can be applied as sulfur host to suppress the shuttle behavior of LiPSs. The synthesis approaches developed in this thesis are also applicable to fabricating other transition metal-based compounds with porous nanostructures for other applications.
Complex emulsions are dispersions of kinetically stabilized multiphasic emulsion droplets composed of two or more immiscible liquids that provide a novel material platform for the generation of active and dynamic soft materials. In recent years, the intrinsic reconfigurable morphological behavior of complex emulsions, which can be attributed to the unique force equilibrium between the interfacial tensions acting at the various interfaces, has become of fundamental and applied interest. In particular, biphasic Janus droplets have been investigated as structural templates for the generation of anisotropic precision objects and dynamic optical elements, or as transducers and signal amplifiers in chemo- and bio-sensing applications. In the present thesis, switchable internal morphological responses of complex droplets, triggered by stimulus-induced alterations of the balance of interfacial tensions, have been explored as a universal building block for the design of multiresponsive, active, and adaptive liquid colloidal systems. A series of underlying principles and mechanisms that influence the equilibrium of interfacial tensions have been uncovered, allowing the targeted design of emulsion bodies that can alter their shape, bind to and roll on surfaces, or change their geometry in response to chemical stimuli. Consequently, combining the unique triggerable behavior of Janus droplets with designer surfactants, such as a stimuli-responsive photosurfactant (AzoTAB), resulted, for instance, in shape-changing soft colloids that exhibited a jellyfish-inspired buoyant motion, holding great promise for the design of biologically inspired active material architectures and transformable soft robotics.
In situ observations of spherical Janus emulsion droplets using a customized side-view microscopic imaging setup, with accompanying pendant drop measurements, disclosed the sensitivity regime of the unique chemical-morphological coupling inside complex emulsions and enabled the recording of calibration curves for the extraction of critical parameters of surfactant effectiveness. The resulting new "responsive drop" method permitted a convenient and cost-efficient quantification and comparison of the critical micelle concentrations (CMCs) and effectiveness of various cationic, anionic, and nonionic surfactants. Moreover, the method allowed insightful characterization of stimuli-responsive surfactants and monitoring of the impact of inorganic salts on the CMC and effectiveness of ionic and nonionic surfactants. Droplet functionalization with synthetic crown ether surfactants yielded a synthetically minimal material platform capable of autonomous and reversible adaptation to its chemical environment through different supramolecular host-guest recognition events. Addition of metal or ammonium salts resulted in the uptake of the resulting hydrophobic complexes into the hydrocarbon hemisphere, whereas addition of hydrophilic ammonium compounds such as amino acids or polypeptides resulted in supramolecular assemblies at the hydrocarbon-water interface of the droplets. The multiresponsive material platform enabled interfacial complexation and
thus triggered responses of the droplets to a variety of chemical triggers including metal ions, ammonium compounds, amino acids, antibodies, carbohydrates as well as amino-functionalized solid surfaces.
In the final chapter, the first documented optical logic gates and combinatorial logic circuits based on complex emulsions are presented. More specifically, the unique reconfigurable and multiresponsive properties of complex emulsions were exploited to realize droplet-based logic gates of varying complexity using different stimuli-responsive surfactants in combination with diverse readout methods. In summary, different designs for multiresponsive, active, and adaptive liquid colloidal systems were presented and investigated, enabling the design of novel transformative chemo-intelligent soft material platforms.
Distributed decision-making studies the choices made among a group of interactive and self-interested agents. Specifically, this thesis is concerned with the optimal sequence of choices an agent makes as it tries to maximize its achievement of one or multiple objectives in a dynamic environment. The optimization of distributed decision-making is important in many real-life applications, e.g., resource allocation (of products, energy, bandwidth, computing power, etc.) and robotics (heterogeneous agent cooperation on games or tasks), in various fields such as vehicular networks, the Internet of Things, and smart grids.
This thesis proposes three multi-agent reinforcement learning algorithms combined with game-theoretic tools to study strategic interaction between decision makers, using resource allocation in vehicular networks as an example. Specifically, the thesis designs an interaction mechanism based on a second-price auction, incentivizes the agents to maximize multiple short-term and long-term, individual and system objectives, and simulates a dynamic environment with realistic mobility data to evaluate algorithm performance and study agent behavior.
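The payment rule at the heart of such a mechanism is easy to state in code. The following minimal Python sketch shows a single-item second-price (Vickrey) auction; it illustrates the auction format only and is not the thesis's implementation.

```python
# Minimal sketch of a single-item second-price (Vickrey) auction: the
# highest bidder wins but pays the second-highest bid.
from dataclasses import dataclass

@dataclass
class Bid:
    agent_id: int
    amount: float

def second_price_auction(bids: list[Bid]) -> tuple[int, float]:
    """Return (winner id, price paid). Requires at least two bids."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    return ranked[0].agent_id, ranked[1].amount

# under this payment rule, truthful bidding is a dominant strategy
winner, price = second_price_auction([Bid(0, 3.2), Bid(1, 5.0), Bid(2, 4.1)])
print(winner, price)   # -> 1 4.1
```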
Theoretical results show that the mechanism has Nash equilibria, maximizes social welfare, and yields a Pareto-optimal allocation of resources in a stationary environment. Empirical results show that in the dynamic environment, our proposed learning algorithms outperform state-of-the-art algorithms in single- and multi-objective optimization, and demonstrate very good generalization properties in significantly different environments. Specifically, with the long-term multi-objective learning algorithm, we demonstrate that by considering the long-term impact of decisions, as well as by incentivizing the agents with a system fairness reward, the agents achieve better results in both individual and system objectives, even when their objectives are private, randomized, and changing over time. Moreover, the agents show competitive behavior to maximize individual payoff when resources are scarce, and cooperative behavior in achieving a system objective when resources are abundant; they also learn the rules of the game without prior knowledge, overcoming disadvantages in initial parameters (e.g., a lower budget).
To address practicality concerns, the thesis also provides several methods for improving computational performance and tests the algorithm on a single-board computer. The results show the feasibility of online training and inference within milliseconds.
There are many potential future topics following this work. 1) The interaction mechanism can be modified into a double auction, eliminating the auctioneer and resembling a completely distributed, ad hoc network; 2) the objectives are assumed to be independent in this thesis, but a more realistic assumption might allow for correlation between objectives, such as a hierarchy of objectives; 3) the current work limits information-sharing between agents, a setup that befits applications with privacy requirements or sparse signaling; by allowing more information-sharing between the agents, the algorithms can be modified for more cooperative scenarios such as robotics.
Childhood, compared to adolescence and adulthood, is characterized by high neuroplasticity, reflected in accelerated cognitive maturation and rapid cognitive developmental trajectories. Natural growth, biological maturation, and permanent interaction with the physical and social environment foster motor and cognitive development in children. Of note, the promotion of physical activity, physical fitness, and motor skill learning at an early age is essential, first because these aspects are crucial for healthy development and efficient functioning in everyday life across the life span, and second because physical activity behaviors and lifestyle habits tend to track from childhood into adulthood.
The main objective of the present thesis was to broaden and deepen the knowledge of motor and cognitive performance in young children and to develop an effective and age-appropriate exercise program feasible for implementation in kindergarten and preschool settings. A systematic review with meta-analysis was conducted to examine the effectiveness of fundamental movement skill and exercise interventions in healthy preschool-aged children. Further, the relation between measures of physical fitness (i.e., static balance, muscle strength, power, and coordination) and attention as one domain of cognitive performance in preschool-aged children was analyzed. Subsequently, the effects of a strength-dominated kindergarten-based exercise program on physical fitness components (i.e., static balance, muscle strength, power, and coordination) and cognitive performance (i.e., attention) were examined and compared with a usual kindergarten curriculum.
The systematic review included trials focusing on healthy young children in kindergarten or preschool settings that applied fundamental movement skill-enhancing intervention programs of at least 4 weeks and further reported standardized motor skill outcome measures for the intervention and the control group. Children aged 4-6 years from three kindergartens participated in the cross-sectional and the longitudinal study. Product-orientated measures were conducted for the assessment of muscle strength (i.e., handgrip strength), muscle power (i.e., standing long jump), balance (i.e., timed single-leg stand), coordination (hopping on right/left leg), and attentional span (i.e., “Konzentrations-Handlungsverfahren für Vorschulkinder” [concentration-action procedure for preschoolers]).
With regard to the scientific literature, exercise and fundamental movement skill interventions are an effective method to promote overall motor skill proficiency (i.e., object control and locomotor skills) in preschool children, particularly when conducted by external experts over a duration of 4 weeks to 5 months. Moreover, significant medium-sized associations were found between the composite score of physical fitness and attention, as well as between coordination alone and attention, in children aged 4-6 years. A 10-week strength-dominated exercise program implemented in kindergarten and preschool settings by educated and trained kindergarten teachers revealed significant improvements in the standing long jump test and the Konzentrations-Handlungsverfahren in intervention children compared with children in the control group.
The findings of the present thesis imply that fundamental movement skill and exercise interventions improve motor skills (i.e., locomotor and object control skills). Nonetheless, more high-quality research is needed. Additionally, physical fitness, particularly high performance in complex fitness components (i.e., coordination measured with the one-leg hopping test), tends to predict attention at preschool age. Furthermore, an exercise program including strength-dominated exercises, fundamental movement skills, and elements of gymnastics has a beneficial effect on jumping performance, with a concomitant trend toward improvements in attentional capacity in healthy preschool children. Finally, it is recommended to start early with integrating muscular fitness (i.e., muscle strength, muscle power, muscular endurance) alongside coordination, agility, balance, and fundamental movement skill exercises into regular physical activity curricula in kindergarten settings.
Supernova remnants are considered to be the primary sources of galactic cosmic rays. These cosmic rays are assumed to be accelerated by the diffusive shock acceleration mechanism, specifically at the shocks of the remnants. In the core-collapse scenario in particular, supernova remnant shocks expand inside the wind-blown bubbles structured by their massive progenitors during their lifetime. The complex environment of wind bubbles can therefore influence the particle acceleration and radiation of the remnants. Further, the evolution of massive stars depends on their Zero Age Main Sequence mass, rotation, and metallicity. Consequently, the structures of the wind bubbles generated during the lifetimes of massive stars can differ considerably. Hence, particle acceleration in core-collapse supernova remnants should vary, not only relative to remnants evolving in a uniform environment but also from one remnant to another, depending on the progenitor star.
A core-collapse supernova remnant with a very massive 60 M⊙ progenitor star has been considered to study particle acceleration at the shock, assuming Bohm-like diffusion. This dissertation demonstrates how particle acceleration and radiation are modified as the remnant propagates through different regions of the wind bubble, on account of the profiles of gas density and temperature in the bubble and the structure of the magnetic field. Subsequently, in this thesis, I discuss the impact of the differing ambient environments of core-collapse supernova remnants on particle spectra and non-thermal emission, considering 20 M⊙ and 60 M⊙ progenitors with different evolutionary tracks. Additionally, I analyse the effect of cosmic ray streaming instabilities on the particle spectra.
To model particle acceleration in the remnants, I have performed simulations in one-dimensional spherical symmetry using the RATPaC code. The transport equations for cosmic rays and magnetic turbulence in the test-particle approximation, along with the induction equation for the evolution of the large-scale magnetic field, have been solved simultaneously with the hydrodynamic equations for the expansion of the remnants inside the pre-supernova circumstellar medium.
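In the test-particle approximation, the cosmic-ray transport equation solved in this kind of setup is commonly written for the differential number density N(r, p, t) as

\[
\frac{\partial N}{\partial t} = \nabla \cdot \left( D \, \nabla N - \mathbf{u}\, N \right) - \frac{\partial}{\partial p} \left[ \left( \dot{p} - \frac{p}{3} \, \nabla \cdot \mathbf{u} \right) N \right],
\]

where D is the spatial diffusion coefficient, u the plasma velocity, and \(\dot{p}\) the momentum-loss term; the adiabatic term couples the spectrum to the compression of the flow. Whether this matches the thesis's formulation term by term is an assumption, but it is the standard form for this class of simulations.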
The simulations show that the spectra of accelerated particles in supernova remnants are regulated by density fluctuations, temperature variations, the large-scale magnetic field configuration, and the scattering turbulence. Although the diffusive shock acceleration mechanism at supernova remnant shocks predicts a spectral index of 2 for the accelerated non-thermal particles, I obtain particle spectra that deviate from this prediction in the core-collapse scenario. The particle spectral index reaches 2.5 for the supernova remnant with the 60 M⊙ progenitor while the remnant resides inside the shocked-wind region of the wind bubble, and this softness persists at later evolutionary stages even with Bohm-like diffusion of the accelerated particles. By contrast, the supernova remnant with the 20 M⊙ progenitor does not show persistently soft particle spectra as a result of the hydrodynamics of the corresponding wind bubble. At later stages of evolution, the particle spectra soften at higher energies for both remnants as a consequence of the escape of high-energy particles from the remnants when cosmic ray streaming instabilities are considered. Finally, I have probed the emission morphology of the remnants, which varies depending on the progenitor, particularly at earlier evolutionary stages. This dissertation provides insight into different core-collapse remnants expanding inside wind bubbles; for instance, the calculated gamma-ray spectral index of the supernova remnant with the 60 M⊙ progenitor at later evolutionary stages is consistent with that of observed supernova remnants expanding in dense molecular clouds.
Volcanic hazard assessment relies on physics-based models of hazards, such as lava flows and pyroclastic density currents, whose outcomes are very sensitive to the location where future eruptions will occur. By contrast, forecasts of vent opening locations in volcanic areas typically rely on purely data-driven approaches, in which the spatial density of past eruptive vents informs the probability maps of future vent opening. Such techniques may be suboptimal in volcanic systems with missing or scarce data, and where the controls on magma pathways may change over time. An alternative approach was recently proposed, relying on a model of stress-driven pathways of magmatic dikes. In that approach, the crustal stress is optimized so that dike trajectories consistently link the location of the magma chamber to that of past vents. The retrieved information on the stress state is then used to forecast future dike trajectories. Validating such an approach requires extensive application to nature. Before doing so, however, several important limitations need to be removed, most importantly the two-dimensional (2D) character of the models and theoretical concepts. In this thesis, I develop methods and tools so that a physics-based strategy of stress inversion and eruptive vent forecasting in volcanoes can be applied to three-dimensional (3D) problems.
In the first part, I test the stress inversion and vent forecast strategy on analog models, still within a 2D framework, but improving the efficiency of the stress optimization.
In the second part, I discuss how to correctly account for gravitational loading and unloading due to complex 3D topography with a Boundary-Element numerical model. I then develop a new, simplified but fast model of dike pathways in 3D, designed to run large numbers of simulations at minimal computational cost and able to backtrack dike trajectories from vents at the surface. Finally, I combine the stress and dike models to simulate dike pathways in synthetic calderas.
In the third part, I describe a framework for the stress inversion and vent forecast strategy in 3D for calderas. The stress inversion relies, first, on describing the magma storage below a caldera in terms of a probability density function. Next, dike trajectories are backtracked from the known locations of past vents down through the crust, and the optimization algorithm searches for the stress models that lead trajectories through the regions of highest probability. I apply the new strategy to the synthetic scenarios presented in the second part, and I exploit the results of the stress inversions to produce probability maps of future vent locations for some of those scenarios.
In the fourth part, I present the inversion of different deformation source models applied to the ongoing ground deformation observed across the Rhenish Massif in Central Europe. The region includes the Eifel Volcanic Fields in Germany, a potential application case for the vent forecast strategy. The results show how the observed deformation may be due to melt accumulation in sub-horizontal structures in the lower crust or upper mantle.
The thesis concludes with a discussion of the stress inversion and vent forecast strategy, its limitations, and its applicability to real volcanoes. Potential developments of the modeling tools and concepts presented here are also discussed, as well as possible applications to other geophysical problems.
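To make the dike-pathway idea concrete, the toy Python sketch below traces a trajectory through a 3D stress field under the textbook assumption that a dike opens against the least compressive principal stress and propagates within the plane perpendicular to it. The stress field, parameters, and names are crude illustrative assumptions, not the thesis's simulator.

```python
# Toy sketch: trace a dike trajectory through a 3-D stress field, assuming
# propagation within the plane perpendicular to the least compressive
# principal stress (sigma_3), biased toward buoyant ascent.
import numpy as np

def stress_at(x, rho=2700.0, g=9.81, dsx=-10e6, dsy=5e6):
    """Toy stress tensor (Pa, compression negative): lithostatic pressure
    plus uniform horizontal tectonic perturbations along x and y."""
    p = -rho * g * max(-x[2], 0.0)           # depth below the z = 0 surface
    return np.diag([p + dsx, p + dsy, p])

def propagation_direction(sigma, ascent=np.array([0.0, 0.0, 1.0])):
    vals, vecs = np.linalg.eigh(sigma)       # eigenvalues in ascending order
    sigma3_dir = vecs[:, -1]                 # least compressive direction
    # project buoyant ascent onto the dike plane (perpendicular to sigma_3)
    step = ascent - np.dot(ascent, sigma3_dir) * sigma3_dir
    norm = np.linalg.norm(step)
    if norm < 1e-12:                         # sigma_3 vertical: lateral sill
        step, norm = vecs[:, 0], 1.0
    return step / norm

def trace_dike(start, step_len=50.0, n_steps=160, backtrack=False):
    """Integrate a trajectory; backtrack=True walks down from a vent."""
    sign = -1.0 if backtrack else 1.0
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        path.append(path[-1] + sign * step_len
                    * propagation_direction(stress_at(path[-1])))
    return np.array(path)

path = trace_dike(start=(0.0, 0.0, -8000.0))  # from an 8 km deep chamber
print(path[-1])                                # approximate surfacing point
```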
International migration has been an increasing phenomenon during the past decades and involves all regions of the globe. Together with fertility and mortality rates, net migration rates are the components that fully define the demographic evolution of a country's population. Therefore, being able to capture the patterns of international migration flows and to project how they might change in the future is of great importance for demographic studies and for designing policies informed by the potential scenarios. Existing forecasting methods do not account explicitly for the main drivers and processes shaping international migration flows: existing migrant communities in the destination country, termed diasporas, reduce the costs of migration and facilitate settling for new migrants, ultimately producing a positive feedback; accounting for the heterogeneity in the type of migration flows, e.g. return and transit flows, becomes critical in some specific bilateral migration channels; and in low- to middle-income countries, economic development can relax poverty constraints and result in an increase in emigration rates.
Economic conditions at both origin and destination are identified as major drivers of international migration. At the same time, climate change impacts have already appeared in natural and human systems, including economic productivity. These economic impacts might already have produced a measurable effect on international migration flows. Studies that quantify the number of migration moves that might have been affected by climate change are usually specific to small regions, do not provide a mechanistic understanding of the pathway leading from climate change to migration, and restrict their focus to the flows effectively induced, disregarding the impact that climate change might have had in inhibiting other flows.
Global climate change is likely to produce impacts on the economic development of countries over the next decades as well. Understanding how these impacts might alter future global migration patterns is relevant for preparing future societies and for understanding whether the response in migration flows would reduce or increase the population's exposure to climate change impacts.
This doctoral research aims to investigate these questions and fill the research gaps outlined above. First, I have built a global bilateral international migration model that accounts explicitly for the diaspora feedback, distinguishes between transit and return flows, and accounts for the observed non-linear effects that link emigration rates to income levels in the country of origin. I have used this migration model within a population dynamics model that also accounts for fertility and mortality rates, producing hindcasts and future projections of international migration flows covering more than 170 countries. Results show that the model reproduces past patterns and trends well. Future projections highlight that, depending on the assumptions regarding the future evolution of income levels and between-country inequality, migration at the end of the century might approach net zero or still be high in many countries. The model, parsimonious in the explanatory variables it includes, represents a versatile tool for assessing the impacts of different socioeconomic scenarios on international migration.
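To illustrate the structure of such a model, the following hedged Python sketch combines the three ingredients named above: a diaspora feedback, a non-linear ("hump-shaped") dependence of emigration rates on origin income, and a gravity-like bilateral term. All functional forms and parameters are illustrative assumptions, not the calibrated model of the thesis.

```python
# Schematic bilateral migration flow: diaspora feedback, income hump,
# and distance deterrence. Illustrative assumptions throughout.
import numpy as np

def emigration_rate(income_origin, a=1.5e-3, peak=8000.0, width=1.2):
    """Emigration rises as income lifts the poverty constraint, then falls."""
    return a * np.exp(-0.5 * (np.log(income_origin / peak) / width) ** 2)

def bilateral_flow(pop_origin, income_o, income_d, diaspora, distance,
                   beta_d=0.3, beta_dist=1.0):
    pull = (income_d / income_o) ** 0.5            # relative attractiveness
    feedback = 1.0 + beta_d * np.log1p(diaspora)   # migrant networks cut costs
    friction = distance ** (-beta_dist)            # distance deterrence
    return pop_origin * emigration_rate(income_o) * pull * feedback * friction

# example: a 50-million-person origin country at 6,000 USD per capita
print(bilateral_flow(5e7, 6000.0, 45000.0, diaspora=2e5, distance=3000.0))
```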
I then consider a counterfactual past without climate change impacts on economic productivity. By prescribing these counterfactual economic conditions to the migration model, I produce counterfactual migration flows for the past 30 years. I compare the counterfactual migration flows to the factual ones, in which historical economic conditions are used to produce migration flows. This provides an estimate of the recent international migration flows attributable to climate change impacts. Results show that a counterfactual world without climate change would have seen less migration globally. This effect becomes larger if I consider the increases and decreases in migration moves separately: a figure of net change in the migration flows is not representative of the effective magnitude of the climate change impact on migration. Indeed, in my results climate change produces a divergent effect on richer and poorer countries: by slowing down economic development, climate change might have reduced international mobility from and to countries of the Global South, and increased it from and to richer countries in the Global North.
I apply the same methodology to a scenario of future 3 °C global warming above pre-industrial conditions. I find that climate change impacts, acting by reorganizing the relative economic attractiveness of destination countries or by affecting economic growth at the origin, might produce a substantial effect on international migration flows, inhibiting some moves and inducing others.
Overall, my results suggest that climate change might have had, and might continue to have, a significant effect on global patterns of international migration. It also emerges clearly that a comprehensive understanding of the effects of climate change on international migration requires going beyond net effects and considering induced and inhibited flows separately.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlement in attractive yet hazardous flood-prone areas has led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce the flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and only for a few events depends on the choice of global hydrological model. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third. In contrast, incorporating flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or in the flood frequency distribution within the modeling chain.
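For illustration, the sketch below computes three contingency-table scores commonly used to compare a simulated binary flood mask with a satellite-derived one; whether these are the exact performance scores of the evaluation scheme above is an assumption.

```python
import numpy as np

def flood_extent_scores(simulated, observed):
    """Contingency-table scores for binary flood masks (True = wet).
    Hit rate, false alarm ratio, and critical success index are
    standard choices for flood extent validation; whether these are
    the exact scores used in the thesis is an assumption here."""
    hits = np.sum(simulated & observed)
    misses = np.sum(~simulated & observed)
    false_alarms = np.sum(simulated & ~observed)
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return hit_rate, false_alarm_ratio, csi

rng = np.random.default_rng(42)
sim = rng.random((100, 100)) > 0.6   # toy simulated inundation mask
obs = rng.random((100, 100)) > 0.7   # toy satellite-derived flood mask
print(flood_extent_scores(sim, obs))
```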
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the affected population, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies with the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding to be investigated. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with increasing wind speed being the dominant factor.
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or attributing other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
Animal movement is a crucial aspect of life, influencing ecological and evolutionary processes. It plays an important role in shaping biodiversity patterns, connecting habitats and ecosystems. Anthropogenic landscape changes, such as in agricultural environments, can impede the movement of animals by affecting their ability to locate resources during recurring movements within home ranges and, on a larger scale, disrupt migration or dispersal. Inevitably, these changes in movement behavior have far-reaching consequences on the mobile link functions provided by species inhabiting such extensively altered matrix areas. In this thesis, I investigate the movement characteristics and activity patterns of the European hare (Lepus europaeus), aiming to understand their significance as a pivotal species in fragmented agricultural landscapes. I reveal intriguing results that shed light on the importance of hares for seed dispersal, the influence of personality traits on behavior and space use, the sensitivity of hares to extreme weather conditions, and the impacts of GPS collaring on mammals' activity patterns and movement behavior.
In Chapter I, I conducted a controlled feeding experiment to investigate the potential impact of hares on seed dispersal. By additionally utilizing GPS data of hares in two contrasting landscapes, I demonstrated that hares play a vital role, acting as effective mobile linkers for many plant species in small and isolated habitat patches. The analysis of seed intake and germination success revealed that distinct seed traits, such as density, surface area, and shape, profoundly affect hares' ability to disperse seeds through endozoochory. These findings highlight the interplay between hares and plant communities and thus provide valuable insights into seed dispersal mechanisms in fragmented landscapes.
By employing standardized behavioral tests in Chapter II, I revealed consistent behavioral responses among captive hares while simultaneously examining the intricate connection between personality traits and spatial patterns within wild hare populations. This analysis provides insights into the ecological interactions and dynamics within hare populations in agricultural habitats. Examining the concept of animal personality, I established a link between personality traits and hare behavior. I showed that boldness, measured through standardized tests, influences individual exploration styles, with shy and bold hares exhibiting distinct space use patterns. In addition to providing valuable insights into the role of animal personality in heterogeneous environments, my research introduced a novel approach demonstrating the feasibility of remotely assessing personality types using animal-borne sensors without additional disturbance of the focal individual.
While climate conditions severely impact the activity and, consequently, the fitness of wildlife species across the globe, in Chapter III, I uncovered the sensitivity of hares to temperature, humidity, and wind speed during their peak reproduction period. I found a strong response in activity to high temperatures above 25°C, with a particularly pronounced effect during temperature extremes of over 35°C. The non-linear relationship between temperature and activity was characterized by contrasting responses observed for day and night. These findings emphasize the vulnerability of hares to climate change and the potential consequences for their fitness and population dynamics with the ongoing rise of temperature.
Since such insights can only be obtained by capturing and tagging free-ranging animals, I assessed potential impacts and the recovery process after collar attachment in Chapter IV. For this purpose, I examined the daily distances moved and the temporally associated activity of 1451 terrestrial mammals of 42 species during their initial tracking period. The disturbance intensity and the speed of recovery varied across species; herbivores, females, and individuals captured and collared in relatively secluded study areas with limited anthropogenic influence experienced more pronounced disturbances.
Mobile linkers are essential for maintaining biodiversity as they influence the dynamics and resilience of ecosystems. Furthermore, their ability to move through fragmented landscapes makes them a key component for restoring disturbed sites. Individual movement decisions determine the scale of mobile links, and understanding variations in space use among individuals is crucial for interpreting their functions. Climate change poses further challenges, with wildlife species expected to adjust their behavior, especially in response to high-temperature extremes. Comprehending the anthropogenic influence on animal movements will remain paramount to effective land use planning and the development of successful conservation strategies.
This thesis provides a comprehensive ecological understanding of hares in agricultural landscapes. My research findings underscore the importance of hares as mobile linkers, the influence of personality traits on behavior and spatial patterns, the vulnerability of hares to extreme weather conditions, and the immediate consequences of collar attachment on mammalian movements. Thus, I contribute valuable insights to wildlife conservation and management efforts, aiding in developing strategies to mitigate the impact of environmental changes on hare populations. Moreover, these findings enable the development of methodologies aimed at minimizing the impacts of collaring while also identifying potential biases in the data, thereby benefiting both animal welfare and the scientific integrity of localization studies.
Reliable and robust data processing is one of the hardest requirements for systems in fields such as medicine, security, automotive, aviation, and space, to prevent critical system failures caused by changes in operating or environmental conditions. In particular, Signal Integrity (SI) effects such as crosstalk may distort the signal information in sensitive mixed-signal designs. Radiation effects are a challenge for hardware systems used in space: Single Event Effects (SEEs) induced by high-energy particle hits may lead to faulty computation, corrupted configuration settings, undesired system behavior, or even total malfunction.
Since these applications require extra effort in design and implementation, it is beneficial to master the standard cell design process and the corresponding design flow methodologies optimized for such challenges. Especially for reliable, low-noise differential signaling logic such as Current Mode Logic (CML), a digital design flow is an orthogonal approach compared to traditional manual design. As a consequence, mandatory preliminary considerations need to be addressed in more detail. First of all, standard cell library concepts with suitable cell extensions for reliable systems and robust space applications have to be elaborated. The resulting design concepts at the cell level should enable logic synthesis for differential logic design or improve the radiation-hardness. In parallel, the main objectives of the proposed cell architectures are to reduce the occupied area, power, and delay overhead. Second, a special setup for standard cell characterization is additionally required for proper and accurate logic gate modeling. Last but not least, design methodologies for mandatory design flow stages such as logic synthesis and place and route need to be developed for the respective hardware systems to keep the reliability or the radiation-hardness at an acceptable level.
This thesis proposes and investigates standard cell-based design methodologies and techniques for reliable and robust hardware systems implemented in a conventional semiconductor technology. The focus of this work is on reliable differential logic design and robust radiation-hardening-by-design circuits. The synergistic connections of the digital design flow stages are systematically addressed for these two types of hardware systems. In more detail, a library for differential logic is extended with single-ended pseudo-gates for intermediate design steps to support logic synthesis and layout generation with commercial Computer-Aided Design (CAD) tools. Special cell layouts are proposed to relax signal routing. A library set for space applications is similarly extended by novel Radiation-Hardening-by-Design (RHBD) Triple Modular Redundancy (TMR) cells, enabling one-fault correction. Therein, additional optimized architectures for glitch filter cells, robust scannable and self-correcting flip-flops, and clock-gates are proposed. The circuit concepts and the physical layout representation views of the differential logic gates and the RHBD cells are discussed. The quality of results of the designs, however, depends implicitly on the accuracy of the standard cell characterization, which is therefore examined for both types. The entire design flow is elaborated from the hardware design description to the layout representations. A 2-phase routing approach together with an intermediate design conversion step is proposed after the initial place and route stage for reliable, purely differential designs, whereas a special constraining approach for RHBD applications in a standard technology is presented.
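To illustrate the one-fault correction that a TMR cell provides at the logical level, the following Python sketch models a bitwise 2-out-of-3 majority vote; it captures only the voting behaviour, not the proposed standard cell circuits.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority vote over three redundant copies.
    Any single upset copy is outvoted by the two intact ones, which is
    the one-fault correction a TMR cell provides. This models only the
    logical behaviour, not the circuit-level implementation."""
    return (a & b) | (b & c) | (a & c)

word = 0b1011_0010
upset = word ^ 0b0000_1000            # a single-event upset flips one bit in one copy
assert tmr_vote(word, word, upset) == word
```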
The digital design flow for differential logic design is successfully demonstrated on a reliable differential bipolar CML application. A balanced routing result for its differential signal pairs is obtained by the proposed 2-phase routing approach. Moreover, the elaborated standard cell concepts and design methodology for RHBD circuits are applied to the digital part of a 7.5-15.5 MSPS 14-bit Analog-to-Digital Converter (ADC) and a complex microcontroller architecture. The ADC is implemented in an unhardened standard semiconductor technology and successfully verified by electrical measurements. The overhead of the proposed hardening approach is additionally evaluated by design exploration of the microcontroller application. Furthermore, the first measurement results of the novel RHBD-ΔTMR flip-flops show a radiation tolerance up to a threshold Linear Energy Transfer (LET) of 46.1, 52.0, and 62.5 MeV cm² mg⁻¹ and savings in silicon area of 25-50 % for selected TMR standard cell candidates.
In conclusion, the presented design concepts at the cell and library levels, as well as the design flow modifications, are adaptable and transferable to other technology nodes. In particular, the standard cell concepts and design methods proposed in this work enable the design of hybrid solutions that integrate reliable differential logic modules with robust radiation-tolerant circuit parts.
Amoeboid cell motility takes place in a variety of biomedical processes such as cancer metastasis, embryonic morphogenesis, and wound healing. In contrast to other forms of cell motility, it is mainly driven by substantial cell shape changes. Based on the interplay of explorative membrane protrusions at the front and a slower-acting membrane retraction at the rear, the cell moves in a crawling kind of way. Underlying these protrusions and retractions are multiple physiological processes resulting in changes of the cytoskeleton, a meshwork of different multi-functional proteins. The complexity and versatility of amoeboid cell motility raise the need for novel computational models based on a profound theoretical framework to analyze and simulate the dynamics of the cell shape.
The objective of this thesis is the development of (i) a mathematical framework to describe contour dynamics in time and space, (ii) a computational model to infer expansion and retraction characteristics of individual cell tracks and to produce realistic contour dynamics, and (iii) a complementing Open Science approach to make the above methods fully accessible and easy to use.
In this work, we mainly used single-cell recordings of the model organism Dictyostelium discoideum. Based on stacks of segmented microscopy images, we apply a Bayesian approach to obtain smooth representations of the cell membrane, so-called cell contours. We introduce a one-parameter family of regularized contour flows to track reference points on the contour (virtual markers) in time and space. This way, we define a coordinate system to visualize local geometric and dynamic quantities of individual contour dynamics in so-called kymograph plots. In particular, we introduce the local marker dispersion as a measure to identify membrane protrusions and retractions in a fully automated way.
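As a toy illustration of the marker-based analysis, the sketch below computes a simple local dispersion of neighbouring virtual markers between two frames of a closed contour; the exact definition of the local marker dispersion used in this work may differ.

```python
import numpy as np

def local_marker_dispersion(markers_t0, markers_t1):
    """Toy 'local dispersion': log-change of the spacing between
    neighbouring virtual markers from one frame to the next.
    Positive values mark locally expanding contour segments
    (protrusions), negative values contracting ones (retractions).
    The precise definition used in this work may differ."""
    def spacing(m):
        # distances between neighbouring markers on a closed contour
        d = np.diff(m, axis=0, append=m[:1])
        return np.linalg.norm(d, axis=1)
    return np.log(spacing(markers_t1) / spacing(markers_t0))

# Toy contour: a circle that expands on one side between two frames
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
c0 = np.column_stack([np.cos(t), np.sin(t)])
c1 = c0 * (1.0 + 0.2 * (np.cos(t) > 0))[:, None]
dispersion = local_marker_dispersion(c0, c1)  # > 0 where the contour expanded
```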
This mathematical framework is the basis of a novel contour dynamics model, which consists of three biophysiologically motivated components: one stochastic term, accounting for membrane protrusions, and two deterministic terms to control the shape and area of the contour, which account for membrane retractions. Our model provides a fully automated approach to infer protrusion and retraction characteristics from experimental cell tracks while being also capable of simulating realistic and qualitatively different contour dynamics. Furthermore, the model is used to classify two different locomotion types: the amoeboid and a so-called fan-shaped type.
With the complementing Open Science approach, we ensure a high standard regarding the usability of our methods and the reproducibility of our research. In this context, we introduce our software publication named AmoePy, an open-source Python package to segment, analyze, and simulate amoeboid cell motility. Furthermore, we describe measures to improve its usability and extensibility, e.g., by detailed run instructions and an automatically generated source code documentation, and to ensure its functionality and stability, e.g., by automatic software tests, data validation, and a hierarchical package structure.
The mathematical approaches of this work provide substantial improvements regarding the modeling and analysis of amoeboid cell motility. We deem the above methods, due to their generalized nature, to be of greater value for other scientific applications, e.g., varying organisms and experimental setups or the transition from unicellular to multicellular movement. Furthermore, we enable other researchers from different fields, i.e., mathematics, biophysics, and medicine, to apply our mathematical methods. By following Open Science standards, this work is of greater value for the cell migration community and a potential role model for other Open Science contributions.
Inflammatory bowel diseases (IBD), characterised by chronic inflammation of the gut wall, develop as a consequence of an overreacting immune response to commensal bacteria, caused by a combination of genetic and environmental conditions. Large inter-individual differences in the outcome of currently available therapies complicate the decision for the best option for an individual patient. Predicting the prospects of therapeutic success for an individual patient is currently possible only to a limited extent; for this, a better understanding of possible differences between responders and non-responders is needed.
In this thesis, we have developed a mathematical model describing the most important processes of the gut mucosal immune system on the cellular level. The model is based on literature data, which were on the one hand used (qualitatively) to choose which cell types and processes to incorporate and to derive the model structure, and on the other hand (quantitatively) to derive the parameter values. Using ordinary differential equations, it describes the concentration-time course of neutrophils, macrophages, dendritic cells, T cells and bacteria, each subdivided into different cell types and activation states, in the lamina propria and mesenteric lymph nodes. We evaluate the model by means of simulations of the healthy immune response to salmonella infection and mucosal injury.
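A minimal caricature of such an ODE model, reduced to two state variables, is sketched below; the actual model resolves many more cell types and activation states, and all rates here are arbitrary illustrative values.

```python
from scipy.integrate import solve_ivp

def gut_ode(t, y, growth=1.0, kill=2.0, recruit=1.5, decay=0.8):
    """Toy two-variable caricature of such a model: bacteria B and
    activated immune cells I in the lamina propria. The thesis model
    resolves many more cell types and activation states; all rates
    here are arbitrary illustrative values."""
    B, I = y
    dB = growth * B * (1.0 - B) - kill * B * I    # logistic growth, killing
    dI = recruit * I * B / (0.5 + B) - decay * I  # activation, turnover
    return [dB, dI]

# Simulate a perturbation (e.g. an infection) relaxing back towards steady state
sol = solve_ivp(gut_ode, t_span=(0.0, 50.0), y0=[0.9, 0.05], max_step=0.1)
B_final, I_final = sol.y[:, -1]
```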
A virtual population includes IBD patients, whom we define by a gut wall that is initially asymptomatic but becomes chronically inflamed after a trigger. We demonstrate the model's usefulness in different analyses: (i) The comparison of virtual IBD patients with virtual healthy individuals shows that the disease is elicited by many small or fewer large changes, and allows us to formulate hypotheses about dispositions relevant for the development of the disease. (ii) We simulate the effects of different therapeutic targets and make predictions about the therapeutic outcome based on the pre-treatment state. (iii) From the analysis of differences between virtual responders and non-responders, we derive hypotheses about reasons for the inter-individual variability in treatment outcome. (iv) For the example of anti-TNF-alpha therapy, we analyse which alternative therapies are most promising in case of therapeutic failure, and which therapies are best suited for combination therapies: for drugs that also directly target cytokine levels or inhibit the recruitment of innate immune cells, we predict a low probability of success when used as an alternative treatment, but a large gain when used in a combination treatment. For drugs with direct effects on T cells, via modulation of the sphingosine-1-phosphate receptor or inhibition of T cell proliferation, we predict a considerably larger probability of success when used as an alternative treatment, but only a small additional gain when used in a combination therapy.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner, being successively presented all information of a target language, conjectures which language it believes to be shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful. Fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
Recently, the literature shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them. The results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
In this thesis, we transfer the map charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data on which they are built. While learners cannot be assumed to be consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Furthermore, it is known that learners following various restrictions may be assumed to be consistent. We contribute to the literature by showing that this is the case for all studied restrictions.
We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they make it possible to apply simulation arguments when, for example, comparing two learning paradigms with each other. The literature provides rich ground on when learners may be assumed to be strongly Bc-locking, which we complete for all studied restrictions.
The first part of the thesis studies the properties of fast modes in magnetohydrodynamic (MHD) turbulence. 1D and 3D numerical simulations are carried out to generate decaying fast mode MHD turbulence. Waves are injected in a collinear and an isotropic fashion to generate the fast mode turbulence. The properties of fast mode turbulence are analyzed by studying the energy spectral density, 2D structure functions, and the energy decay/cascade time. The injection wave vector is varied to study the dependence of the above properties on the injection wave vectors. The 1D energy spectrum obtained for the velocity and magnetic fields follows E(k) ∝ k^(-2). The 2D energy spectrum and the 2D structure functions in the parallel and perpendicular directions show that the generated fast mode turbulence is isotropic in nature. The cascade/decay rate of fast mode MHD turbulence is proportional to k^(-0.5) for the different kinds of wave vector injection. Simulations are also carried out in 1D and 3D to compare balanced and imbalanced turbulence. The results show that while 1D imbalanced turbulence decays faster than 1D balanced turbulence, there is no difference in the decay of 3D balanced and imbalanced turbulence at the current resolution of 512 grid points.
"The second part of the thesis studies cosmic ray (CR) transport in driven MHD turbulence and is strongly dependent on it’s properties. Test particle simulations are carried out to study CR interaction with both total MHD turbulence and decomposed MHD modes. The spatial diffusion coefficients and the pitch angle scattering diffusion coefficients are calculated from the test particle trajectories in turbulence. The results confirms that the fast modes dominate the CR propagation, whereas Alfvén, slow modes are much less efficient with similar pitch angle scattering rates. The cross field transport on large and small scales are investigated next. On large/global scales, normal diffusion is observed and the diffusion coefficient is suppressed by 𝑀𝜁𝐴 compared to the parallel diffusion coefficients, with 𝜁 closer to 4 in Alfvén modes than that in total turbulence as theoretically expected. For the CR transport on scales smaller than the turbulence injection scale 𝐿, both the local and global magnetic reference frames are adopted. Super diffusion is observed on such small scales in all the cases. Particularly, CR transport in Alfvén modes show clear Richardson diffusion in the local reference frame. The diffusion transition smoothly from the Richardson’s one with index 1.5 to normal diffusion as particle’s mean free path decreases from 𝜆∥ ≫ 𝐿 to 𝜆∥ ≪ 𝐿. These results have broad applications to CRs in various astrophysical environments".
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion through porphyry mineralization at intermediate depths to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax-type (and sub-Climax-type) Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru, and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. These include (i) two individual geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (a potential endmember of peripheral Climax-type mineralization); (ii) a numerical modeling study set up in a generic porphyry Cu environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% equiv. NaCl), CO₂-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine were related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO₂, H₂S/SO₂, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ²Hw–δ¹⁸Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home Mine were triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the age of the latest deformation along the Contact Structure to between 62.77±0.50 Ma and 26.26±0.38 Ma, which was employed and/or crosscut by Late Cretaceous and Oligocene fluids. Along the Contact Structure Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which has been variably attributed to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation, and mixing with external fluids. The numerical modeling study set up in a generic porphyry Cu environment presents new advances of a numerical process model by considering published constraints on the temperature- and salinity-dependent solubility of Cu, Pb, and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms for ore precipitation: higher rates result in halite saturation without significant metal zoning, whereas lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments with no geological evidence of volcanic activity within about 10 km of the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems, with a magmatic intrusion located either beneath the nearest active volcano, about 10 km away, or hidden beneath the deposit. The results show that migration of the ore fluid over a 10 km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
The Security Operations Center (SOC) is a specialized unit responsible for managing security within enterprises. To aid in its responsibilities, the SOC relies heavily on a Security Information and Event Management (SIEM) system that functions as a centralized repository for all security-related data, providing a comprehensive view of the organization's security posture. Owing to their ability to offer such insights, SIEMs are considered indispensable tools facilitating SOC functions such as monitoring, threat detection, and incident response.
Despite advancements in big data architectures and analytics, most SIEMs fall short of keeping pace. Architecturally, they function merely as log search engines, lacking the support for distributed large-scale analytics. Analytically, they rely on rule-based correlation, neglecting the adoption of more advanced data science and machine learning techniques.
This thesis first proposes a blueprint for next-generation SIEM systems that emphasize distributed processing and multi-layered storage to enable data mining at a big data scale. Next, with the architectural support, it introduces two data mining approaches for advanced threat detection as part of SOC operations.
The first is a novel graph mining technique that formulates threat detection within the SIEM system as a large-scale graph mining and inference problem, built on the principles of guilt-by-association and exempt-by-reputation. The approach entails the construction of a Heterogeneous Information Network (HIN) that models shared characteristics and associations among entities extracted from SIEM-related events/logs. A novel graph-based inference algorithm is then used to infer a node's maliciousness score based on its associations with other entities in the HIN. The second is an outlier detection technique that imitates a SOC analyst's reasoning process to find anomalies/outliers. The approach emphasizes explainability and simplicity, achieved by combining the output of simple context-aware univariate submodels that calculate an outlier score for each entry.
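As a rough illustration of the second approach, the sketch below combines simple context-aware univariate outlier scores (here robust z-scores) into a single explainable score per entry; the feature names, scoring rule, and aggregation are assumptions for illustration, not the thesis's actual submodels.

```python
import numpy as np

def robust_z(history, value):
    """Univariate outlier score: robust z-score based on median/MAD."""
    med = np.median(history)
    mad = np.median(np.abs(history - med)) or 1.0  # guard against MAD = 0
    return abs(value - med) / (1.4826 * mad)

def entry_outlier_score(entry, history_by_feature):
    """Combine per-feature scores into one explainable score per entry.
    Each feature is scored against its own (context-specific) history,
    so an analyst can see which feature drove the alert. Feature names,
    scoring rule, and mean aggregation are illustrative assumptions."""
    scores = {f: robust_z(hist, entry[f])
              for f, hist in history_by_feature.items()}
    return float(np.mean(list(scores.values()))), scores

history = {"bytes_out": np.array([1e3, 2e3, 1.5e3, 1.8e3]),
           "logins_per_hour": np.array([2.0, 3.0, 2.0, 4.0])}
total, per_feature = entry_outlier_score(
    {"bytes_out": 9e4, "logins_per_hour": 3.0}, history)
```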
Both approaches were tested in academic and real-world settings, demonstrating high performance when compared to other algorithms as well as practicality alongside a large enterprise's SIEM system.
This thesis establishes the foundation for next-generation SIEM systems that can enhance today's SOCs and facilitate the transition from human-centric to data-driven security operations.
This work aims to show that the narrative œuvre of the writer Tomás Carrasquilla Naranjo (1858-1940) contains a Wahrheitsgehalt (Benjamin, 2012), the temporal concretion of an idea, which materializes through what I have here termed the image of popular religiosity. This means that the Antioquian's work is constructed in the manner of a great mosaic in which, despite the varied and uneven elements that compose it, the union of all of them produces an image (Bild). This image represents the historical experience of the modern in the popular sectors, arising from the fleeting union between the remnants of ancient traditions and the newest forms of life. Far from the conventions of his time, in which the question of the experience of the modern centred on metropolitan settings and the role of the artist, Carrasquilla asks what happens in the vast rural areas, in the liminal spaces between the urban and the rural, and in their respective intersections. Lacking conceptual tools to define this new "living experience", this new structure of feeling, as Raymond Williams (2019) calls it, the subjects who inhabit these spaces appeal to the only thing they know, the ancient knowledge transmitted orally, in order to explain their present.
In this sense, it can be argued that Carrasquilla, drawing on this image of popular religiosity, sought to establish a dialogue in the literary field from which he put forward a differential idea of the modern. On several occasions, the Antioquian stated that literature should incorporate local experiences into the dialogue of the universal. An example of this is his simile of literature as a planetary system: according to him, relations of hierarchy are established when the countries that produce literary fashions, the planets (Europe), relegate the others to being mere satellites, that is, to imitation (Carrasquilla, 1991). Today, that critique directed at his compatriots, the Antioquian modernists, can be read as a vindication of alterity. It is therefore argued here that, although these lived experiences are not the same as those arising in the nascent metropolitan settings, where commodities represent the new substitutes for faith, in those vast spaces, seemingly provincial and removed from contact with other cultures and knowledges, the image of popular religiosity comes to play the same role. In other words, "indem an Dingen ihr Gebrauchswert abstirbt" (as the use value in things dies away; utility or worship), the character's subjectivity charges them with "Intentionen von Wunsch und Angst" (intentions of desire and fear; Benjamin, 2013a), turning them into objects of contemplation, whether carried or collected. In a similar way, Carrasquilla would have drawn on the accumulated residual knowledge (Wissen) of his hypothetical readership, inherited from diverse cultural areas during the process of colonization, with their respective heterogeneous times and particular languages (Ette, 2019), in order to join it with present-day profane experiences. Thus, the work (short story or novel) would artistically represent popular "forms of life", through which one "aesthetically experiences" how modernity is survived (überleben) in the marginalized sectors (Ette, 2015). That is, only from the ancient and ruinous remains of popular religiosity, once sacred, is it possible to explain the experience of the modern, its here and now.
Digital and societal developments demand continuous training for sales staff. In this line of work, however, several myths about the training of salespeople persist. Partly for this reason, the training needs of sales forces were strongly neglected in the past. This thesis therefore first addresses the question of how sales staff in Germany are currently trained (taking the coronavirus pandemic into account) and whether the observed training habits might offer initial indications for attaining a strategic competitive advantage.
In doing so, the thesis takes up the idea that investments in the training of sales staff could be an investment in the competitiveness of the company. Automated training, for example based on virtual reality (VR) and artificial intelligence (AI), could make an efficient contribution to securing a strategic competitive advantage in sales education and training. Through further research questions, the thesis then examines how automated sales training with AI and VR content must be designed, with user involvement, in order to train sales staff in a selected negotiation context. To this end, an application using virtual reality and artificial intelligence in a negotiation dialogue is developed, tested, and evaluated.
This thesis provides a basis for the automation of sales training and, in a broader sense, for training in general.
Kraft und Kognition (2023)
Empirical findings from cross-sectional studies in recent years point to an association between muscular strength and cognitive performance [10]. This observation is supported by longitudinal studies in which improvements in cognitive performance were documented following targeted resistance training interventions, which typically increase muscular strength [11]. However, the mechanisms underlying the association between muscular strength and cognitive performance are not yet fully understood and require further research [10,12]. Against this background, the studies conducted within this dissertation pursued the overarching goal of investigating the mechanisms that may explain the association between muscular strength and cognitive performance. To this end, different populations (young adults and older adults with and without mild cognitive impairment) were studied using different methodological approaches (systematic literature review, dual-task paradigm, and functional near-infrared spectroscopy). The consecutive studies conducted within this dissertation yielded the following main findings:
• To obtain a comprehensive overview of the current evidence on muscular strength and cognitive performance and the underlying neural correlates, a systematic literature review on this research topic was conducted. Its results document that targeted resistance training can, in addition to improving cognitive performance, lead to functional and structural changes in the brain, particularly in frontal brain regions [13]. Furthermore, the limited number of available studies identified in this review (n = 18) indicates the need for further research in this field [13].
• To test the hypothesis that performing resistance exercises requires higher cognitive processes, an experimental study in younger healthy adults applied the dual-task paradigm to the squat exercise. The dual-task costs observed during the squat (compared with a standing control condition) point to the involvement of higher cognitive processes in solving this movement task and confirm the hypothesis [14].
• To examine the hypothesis that specific neural correlates (functional brain activity) mediate the association between muscular strength and cognitive performance, the relationship between maximal handgrip strength (normalized to body mass index) and the cortical hemodynamic response was investigated in young healthy adults; the latter was measured in prefrontal brain areas by functional near-infrared spectroscopy during a standardized cognitive test. In this cross-sectional study, the initial hypothesis could not be fully confirmed: although associations of both maximal handgrip strength and cognitive performance with parameters of the hemodynamic response were observed, maximal handgrip strength was not related to short-term memory performance [16].
• To examine the assumption that a neurological condition (specifically, mild cognitive impairment), which is typically accompanied by changes in specific neural correlates (e.g., the hippocampus [17-19] and the prefrontal cortex [20,21]), influences the association between muscular strength and cognitive performance, a cross-sectional study investigated the relationship between maximal handgrip strength (normalized to body mass index) and executive functions in older adults with the amnestic and non-amnestic subtypes of mild cognitive impairment as well as in healthy older adults. In this study, an association between maximal handgrip strength and executive functions was observed only in older adults with the amnestic subtype of mild cognitive impairment; no such correlation was found in older adults with the non-amnestic subtype or in healthy older adults [24].
• A perspective article outlined how the theory-guided use of the physiological effects that occur in a specific resistance training method, in which peripheral blood flow is moderated by means of cuffs or bands, could allow populations with low mechanical load capacity in particular to benefit from the positive effects of resistance training on brain health [25].
Overall, the results of the consecutive studies compiled in this dissertation point to the existence of shared neural correlates (e.g., the frontal cortex) that play an important role both in muscular strength and in higher cognitive processes [26]. Considered together with the empirical evidence already available in the literature, the findings of this dissertation support the view that relatively high muscular strength, and its maintenance through targeted resistance training interventions across the lifespan, can have positive effects on (brain) health [27].
Research programmes bring together numerous actors with different backgrounds and domain expertise in individual or joint projects, which are, however, largely carried out independently of one another. Given that societal challenges such as global warming increasingly require cross-disciplinary solutions, networking and transfer processes within research programmes should receive more attention. Implementing accompanying research ("Begleitforschung") can address this demand. Accompanying research differs in its approach and objectives from the "usual" projects and can take different theoretical pure forms. In brief, it acts either (1) as a complement to the content of the individual research projects, (2) on a meta-level with a focus on the processes within the research programme, or (3) as an integrating, synthesizing instance for which the networking of the projects within the research programme and knowledge transfer are central. Although these forms can be separated analytically into theoretical pure forms, in practice a mix of all three usually emerges.
In this context, the present dissertation complements previous approaches to the methodological toolkit of accompanying research and focuses on the following questions: On what basis can the actors in a research programme be networked so as to bring them together effectively? Which further methodological elements should build on this in order to generate added value that exceeds the sum of the individual results of the research programme? What form can such added value take, and what role does accompanying research play in it?
The first methodological element is the collection and preparation of a baseline database. Keyword indexing of project-related texts based on semantic analysis generates a comprehensive database from the contents of the research projects. The keywords are structured in a keyword catalogue using a controlled vocabulary. In parallel, they are assigned to the respective projects, which thereby acquire thematic attributes. To make thematic overlaps between research projects visible and interpretable, the second element comprises visualization approaches. The information is transferred into a network graph that can depict all projects involved in the research programme as well as the identified keywords in relation to one another. This makes it possible, for example, to show which research projects are "closer" to one another in terms of content. Exactly this information is used in the third methodological element as a planning basis for different event formats, such as working conferences or transfer workshops. The fourth methodological element comprises synthesis building. This is a process spanning the entire period of collaboration between the accompanying research and the other research projects, since interim, partial, and final results of the projects, as well as content from the various events, feed into the synthesis. Ultimately, this fourth element is also the means of deriving recommendations for future initiatives from the integrated and synthesized information.
The methodological elements were developed in the ongoing process of the accompanying research project KlimAgrar, which serves as the case study of this dissertation and whose background in climate mitigation and climate adaptation in agriculture is explained in detail in the text.
Justice structures societies and social relations of any kind; its psychological integration provides a fundamental cornerstone for social, moral, and personality development. The trait justice sensitivity (JS; Schmitt et al., 2005, 2010) captures individual differences in responses toward perceived injustice. JS has shown substantial relations to social and moral behavior in adult and adolescent samples; however, it had not yet been investigated in middle childhood, despite this being a sensitive phase for personality development. JS differentiates into underlying perspectives that are either more self- or more other-oriented regarding injustice, with diverging outcome relations. The present research project investigated JS and its perspectives in children aged 6 to 12 years, with a special focus on variables of social and moral development as potential correlates and outcomes, in four cross-sectional studies. Study 1 started with a closer investigation of JS trait manifestation, measurement, and relations to important variables from the nomological network, such as temperamental dimensions, social-cognitive skills, and global pro- and antisocial behavior, in a pilot sample of children from southern Germany. Study 2 investigated relations between JS and distributive behavior following distributive principles in a large-scale data set of children from Berlin and Brandenburg. Study 3 explored the relations of JS with moral reasoning, moral emotions, and moral identity as important precursors of moral development in the same large-scale data set. Study 4 investigated punishment motivation to even out, prevent, or compensate norm transgressions in a subsample, with JS considered as a potential predictor of different punishment motives. All studies indicated that a large-scale, economical measurement of JS is possible at least from middle childhood onward. JS showed relations to temperamental dimensions, social skills, and global social behavior; to distributive decisions and preferences for distributive principles; to moral reasoning, emotions, and identity; and to punishment motivation, indicating that trait JS is highly relevant for social and moral development. The underlying self- and other-oriented perspectives showed diverging correlate and outcome relations, mostly in line with theory and previous findings from adolescent and adult samples, but also provided new theoretical ideas on the construct and its differentiation. The findings point to an early internal justice motive underlying trait JS, but to additional motivations underlying the JS perspectives. Caregivers, educators, and clinical psychologists should pay attention to children's JS and to promoting an adaptive justice-related personality development, in order to foster children's prosocial and moral development as well as their mental health.
In X-ray computed tomography (XCT), an X-ray beam of intensity I0 is transmitted through an object and its attenuated intensity I is measured when it exits the object. The attenuation of the beam depends on the attenuation coefficients along its path. The attenuation coefficients provide information about the structure and composition of the object and can be determined through mathematical operations that are referred to as reconstruction.
The standard reconstruction algorithms are based on the filtered backprojection (FBP) of the measured data. While these algorithms are fast and relatively simple, they do not always succeed in computing a precise reconstruction, especially from under-sampled data.
Alternatively, an image or volume can be reconstructed by solving a system of linear equations. Typically, the system of equations is too large to be solved but its solution can be approximated by iterative methods, such as the Simultaneous Iterative Reconstruction Technique (SIRT) and the Conjugate Gradient Least Squares (CGLS).
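As context for the iterative methods just mentioned, the following Python sketch implements a common textbook form of the SIRT update on a dense toy system; real CT reconstructions use sparse projection operators, and details may differ from the implementations referenced in this work.

```python
import numpy as np

def sirt(A, b, n_iter=500):
    """SIRT in its common normalized form:
        x <- x + C * (A.T @ (R * (b - A @ x)))
    where R and C hold the inverse row and column sums of A.
    Dense toy version; real CT uses sparse projection operators."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
        np.clip(x, 0.0, None, out=x)             # attenuation is non-negative
    return x

# Toy system: four "rays", each summing the pixels it crosses
A = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
x_true = np.array([0.2, 0.5, 0.3])
x_rec = sirt(A, A @ x_true)   # converges towards x_true
```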
This dissertation focuses on the development of a novel iterative algorithm, the Direct Iterative Reconstruction of Computed Tomography Trajectories (DIRECTT). After its reconstruction principle is explained, its performance is assessed for real parallel- and cone-beam CT (including under-sampled) data and compared to that of other established algorithms. Finally, it is demonstrated how the shape of the measured object can be modelled into DIRECTT to achieve even better reconstruction results.
Background: Physical fitness is a key aspect of children’s ability to perform activities of daily living and engage in leisure activities, and it is associated with important health characteristics. As such, it shows multi-directional associations with weight status as well as executive functions, and varies according to a variety of moderating factors, such as the child’s gender, age, geographical location, and socioeconomic conditions and context. The assessment and monitoring of children’s physical fitness has gained attention in recent decades, as has the question of how to promote physical fitness through a variety of programs and interventions. However, these programs and interventions rarely focus on children with deficits in their physical fitness. Due to their deficits, these children are at the highest risk of suffering health impairments compared to their peers with average fitness. In efforts to promote physical fitness, schools offer promising and viable settings for interventions, as they provide access to large youth populations along with useful infrastructure. Evidence suggests that school-based physical fitness interventions, particularly those that include supplementary physical education, are useful for promoting and improving physical fitness in children with normal fitness. However, there is little evidence on whether these interventions have similar or even greater effects on children with deficits in their physical fitness. Furthermore, the question arises whether such measures help to sustainably improve the developmental trajectories of physical fitness in these children.
The present thesis addresses the following four objectives: (1) to evaluate the effects of a 14-week intervention of 2 x 45 minutes per week of additional remedial physical education on physical fitness and executive functions in children with deficits in their physical fitness; (2) to assess moderating effects of body height and body mass on physical fitness components in children with physical fitness deficits; (3) to assess moderating effects of age and skeletal growth on physical fitness in children with physical fitness deficits; and (4) to analyse moderating effects of different physical fitness components on executive functions in children with physical fitness deficits.
Methods: Using physical fitness data from the EMOTIKON study, 76 third graders with physical fitness deficits were identified in 11 schools in Brandenburg state that met the requirements for implementing a remedial physical education intervention (i.e., employing specially trained physical education teachers). The fitness intervention was implemented in a cross-over design, and schools were randomly assigned to either an intervention-control or a control-intervention group. The remedial physical education intervention consisted of a 14-week remedial physical education curriculum of 2 x 45 minutes per week, supplemented by a physical exercise homework program. Assessments were conducted at the beginning and end of each intervention and control period, and further assessments were conducted at the beginning and end of each school year until the end of sixth grade. Physical fitness as the primary outcome was assessed using fitness tests implemented in the EMOTIKON study (i.e., lower body muscular strength (standing long jump), speed (20 m sprint), cardiorespiratory fitness (6 min run), agility (star run), upper body muscular strength (ball push test), and balance (one leg balance)). Executive functions as a secondary outcome were assessed using attention and psychomotor processing speed (digit symbol substitution test), mental flexibility and fine motor skills (trail making test), and inhibitory control (Simon task). Anthropometric measures such as body height, body mass, maturity offset, and body composition parameters, as well as socioeconomic information, were recorded as potential moderators.
Results: (1) The evaluation of possible effects of the remedial physical education intervention on the physical fitness and executive functions of children with physical fitness deficits did not reveal any detectable intervention-related improvements. The analyses did, however, show moderating effects of body mass index (BMI) on performance in the 6 min run, star run, and standing long jump, with children with a lower BMI performing better; moderating effects of proximity to Berlin on performance in the 6 min run and standing long jump, with better performances found in children living closer to Berlin; and overall gender differences in executive function test performance, with boys performing better than girls. (2) When moderating effects of body height and body mass on physical fitness performance were analysed, better overall physical fitness was found for taller children. For body mass, a negative effect was found on performance in the 6 min run (linear), standing long jump (linear), and 20 m sprint (quadratic), with lighter children performing better, and a positive effect was found on performance in the ball push test, with heavier children performing better. In addition, the analysis revealed significant interactions between body height and body mass on performance in the 6 min run and 20 m sprint: higher body mass was associated with performance improvements in taller children but with performance declines in smaller children. The analysis also revealed overall age-related improvements in physical fitness and showed that children with better overall physical fitness exhibit greater age-related improvements. (3) For the analysis of moderating effects of age and maturity offset on physical fitness performance, two unrotated principal components of z-transformed age and maturity offset values were calculated (i.e., relative growth = (age + maturity offset)/2; growth delay = (age - maturity offset)) to avoid collinearity. Analysing these constructs revealed positive effects of relative growth on performance in the star run, 20 m sprint, and standing long jump, with children of higher relative growth performing better. For growth delay, positive effects were found on performance in the 6 min run and 20 m sprint, with children with larger growth delays showing better performances. Further, the model revealed gender differences in 6 min run and 20 m sprint performances, with girls performing better than boys. (4) Analysing the effects of physical fitness tests on executive function revealed positive effects of star run and one leg balance performance and a negative effect of 6 min run performance on reaction speed in the Simon task. These effects were no longer detectable once individual differences were accounted for; instead, the effects were overall positive, with better performances associated with faster reaction speeds. In addition, the analysis revealed a positive correlation between overall reaction speed and the effect of the 6 min run, suggesting that children with larger 6 min run effects had faster overall reaction speeds.
Negative correlations were found between star run effects and age effects on Simon task reaction speed, meaning that children with larger star run effects had smaller age effects, and between 6 min run effects and star run effects on Simon task reaction speed, meaning that children with larger 6 min run effects tended to have smaller star run effects on Simon task reaction speed and vice versa.
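To make the decorrelation step in result (3) concrete, here is a minimal numerical sketch (with simulated, made-up data, not data from the study) showing that the two unrotated principal components of two z-transformed variables are proportional to their sum and difference, and are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: age and maturity offset are strongly correlated.
age = rng.normal(9.5, 0.5, 500)
maturity_offset = -2.5 + 0.8 * age + rng.normal(0, 0.2, 500)

# z-transform both variables.
z = lambda x: (x - x.mean()) / x.std()
z_age, z_mo = z(age), z(maturity_offset)

# For two standardized variables, the unrotated principal components
# are (up to scaling) their sum and their difference.
relative_growth = (z_age + z_mo) / 2
growth_delay = z_age - z_mo

# The raw variables are highly correlated; the components are not.
print(np.corrcoef(z_age, z_mo)[0, 1])                    # high, e.g. ~0.9
print(np.corrcoef(relative_growth, growth_delay)[0, 1])  # ~0
```

This works because the covariance of the sum and the difference equals the difference of the variances, which is zero once both variables are standardized, so the two constructs can enter the model without collinearity.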
Conclusions: (1) The lack of detectable intervention-related effects could have been caused by an insufficient intervention period, by the implementation of comprehensive and thus non-specific exercises, or by both. Accordingly, longer intervention periods and/or more specific exercises may have been more beneficial and could have led to detectable improvements in physical fitness and/or executive function. However, it remains unclear whether these interventions can benefit children with deficits in physical fitness, as it is possible that their deficits are not caused by a mere lack of exercise, but rather depend on the socioeconomic conditions of the children and their families and areas. Therefore, further research is needed to assess the moderation of physical fitness in children with physical fitness deficits and, in particular, the links between children’s environment and their physical fitness trajectories. (2) Findings from this work suggest that using BMI as a composite of body height and body mass may not be able to capture the variation associated with these parameters and their interactions. In particular, because of their multidirectional associations, further research would help elucidate how BMI and its subcomponents influence physical fitness and how they vary between children with and without physical fitness deficits. (3) The assessment of growth-related changes indicated negative effects associated with the growth spurt approaching the age of peak height velocity, and furthermore showed significant differences in these effects between children. Thus, these effects and possible interindividual differences should be considered in the assessment of the development of physical fitness in children. (4) Furthermore, this work has shown that the associations between physical fitness and executive functions vary between children and may be moderated by children’s socioeconomic conditions and the structure of their daily activities. Further research is needed to explore these associations using approaches that account for individual variance.
We live in an era driven by fossil fuels. The prevailing climate change suggests that we have to significantly reduce greenhouse gas emissions, and the only way forward is to use renewable energy sources. Among those, solar energy is a clean, affordable, and sustainable source of energy with the potential to satisfy the world’s energy demand in the future. However, there is a need to develop new materials that can make solar energy usable. Photovoltaics (PV) are devices that convert photon energy into electrical energy. The most commonly used solar cells are based on crystalline silicon; however, the fabrication process for silicon solar cells is technologically difficult and costly. Solar cells based on lead halide perovskites (PSCs) have emerged as a new candidate for PV applications since 2009. To date, PSCs have achieved a power conversion efficiency (PCE) of 26% for single-junction devices and 33.7% for tandem-junction devices. However, there is still room for improvement in overall performance, and the main challenge for the commercialization of this technology is the stability of the solar cells under operational conditions. The inorganic perovskite CsPbI3 has attracted researchers’ interest due to its stability at elevated temperatures; however, inorganic perovskites also have associated challenges, e.g. phase stability, larger voltage losses compared to their organic-inorganic hybrid counterparts, and interface energy misalignment. The most efficient inorganic perovskite solar cells are stable for up to a few hundred hours, while the most stable device reported so far in the field of inorganic PSCs reaches 17% PCE. This suggests the need for improvement of the interfaces for an enhanced open circuit voltage (VOC) and for optimization of the energy alignment at the interfaces. This dissertation presents a study of the interfaces between the perovskite layer and the hole transport layer (HTL) for stable CsPbI3 solar cells.
The first part of the thesis presents an investigation of the CsPbI3 film annealing environment and its subsequent effects on the perovskite/HTL interface dynamics. Thin films annealed in dry air were compared with thin films annealed in ambient air. Synchrotron-based hard X-ray photoelectron spectroscopy (HAXPES) measurements reveal that annealing in ambient air does not have an adverse effect; instead, those samples undergo surface band bending. This surface band modification induces changes in interface charge dynamics and, consequently, an improvement in charge extraction at the interfaces. Further, transient surface photovoltage (tr-SPV) simulations show that air-annealed samples exhibit fewer trap states compared to samples annealed in dry air. Finally, by annealing the CsPbI3 films in ambient air, a PCE of 19.8% and a VOC of 1.23 V were achieved for an n-i-p structured device.
Interface engineering has emerged as a strategy to extract charge and optimize the energy alignment in perovskite solar cells (PSCs). An interface with fewer trap states and energy band levels closer to those of the selective contact helps to attain improved efficiencies in PSCs. The second part of the thesis presents a design for the CsPbI3/HTL interface. In this work, a dipole molecule, trioctylphosphine oxide (TOPO), is introduced at the interface between the CsPbI3 perovskite and its hole-selective contact N2,N2,N2′,N2′,N7,N7,N7′,N7′-octakis(4-methoxyphenyl)-9,9′-spirobi[9H-fluorene]-2,2′,7,7′-tetramine (Spiro-OMeTAD). On top of a perovskite film well passivated by n-octylammonium iodide (OAI), TOPO creates an upward surface band bending at the interface that optimizes the energy level alignment and enhances the extraction of holes from the perovskite layer into the hole transport material. Consequently, a VOC of 1.2 V and a high PCE of over 19% were achieved for inorganic CsPbI3 perovskite solar cells. In addition, the work sheds light on the interfacial charge selectivity and the long-term stability of CsPbI3 perovskite solar cells.
The third part of the thesis extends the previous studies to the polymer poly(3-hexylthiophene-2,5-diyl) (P3HT) as the HTL. The CsPbI3/P3HT interface is critical due to high non-radiative recombination. This work presents a CsPbI3/P3HT interface modified with a long-chain alkyl halide molecule, n-hexyl trimethyl ammonium bromide (HTAB). This molecule largely passivates the CsPbI3 perovskite surface and improves charge extraction across the interface. Consequently, a VOC of over 1.00 V and a PCE of 14.2% were achieved for CsPbI3 with P3HT as the HTL.
Overall, the results presented in this dissertation introduce and discuss methods to design and study the interfaces in CsPbI3-based solar cells. This work can pave the way for novel interface designs between CsPbI3 and the HTL for improved charge extraction, efficiency, and stability.
The purpose of this thesis was to investigate the developmental dynamics between interest, motivation, and learning strategy use during physics learning. The target population was lower secondary school students from a developing country, given that there is hardly any research studying the above domain-specific concepts in the context of developing countries. The aim was addressed in four parts.
The first part of the study was guided by three objectives: (a) to adapt and validate the Science Motivation Questionnaire (SMQ-II) for the Ugandan context; (b) to examine whether there are significant differences in motivation for learning Physics with respect to students’ gender; and (c) to establish the extent to which students’ interest predicts their motivation to learn Physics. For this pilot study, the sample comprised 374 randomly selected students from five schools in central Uganda who responded to anonymous questionnaires that included scales from the SMQ-II and the Individual Interest Questionnaire. Data were analysed using confirmatory factor analyses, t-tests, and structural equation modelling in SPSS-25 and Mplus-8. The five-factor model solution of the SMQ-II fitted the study data adequately after deletion of one item. The modified SMQ-II exhibited invariant factor loadings and intercepts (i.e., strong measurement invariance) when administered to boys and girls. Furthermore, no significant gender differences in motivation for learning Physics were noted. On assessing the predictive effects of individual interest on students’ motivation, individual interest significantly predicted all motivational constructs, with stronger predictive strength for students’ self-efficacy and self-determination in learning Physics.
The second part used data from 934 Grade 9 students from eight secondary schools in Uganda. Latent profile analysis (LPA), a person-centred approach, was used to investigate the motivation patterns that exist in lower secondary school students during physics learning. A three-step approach to LPA was used to answer three research questions: RQ1, which profiles of secondary school students exist with regard to their motivation for Physics learning; RQ2, are there differences in students’ cognitive learning strategies across the identified profiles; and RQ3, do students’ gender, attitudes, and individual interest predict membership in these profiles? Six motivational profiles were identified: (i) a low-quantity motivation profile (101 students; 10.8%); (ii) a moderate-quantity motivation profile (246 students; 26.3%); (iii) a high-quantity motivation profile (365 students; 39.1%); (iv) a primarily intrinsically motivated profile (60 students; 6.4%); (v) a mostly extrinsically motivated profile (88 students; 9.4%); and (vi) a grade-introjected profile (74 students; 7.9%). Students in the low-quantity and grade-introjected profiles mostly used surface learning strategies, whilst those in the high-quantity and primarily intrinsically motivated profiles used deep learning strategies. On examining the predictive effects of gender, individual interest, and students’ attitudes on profile membership, individual interest and students’ attitudes towards Physics learning strongly predicted profile membership, unlike gender.
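For readers unfamiliar with LPA: it is essentially a finite Gaussian mixture model over the observed motivation scales, with the number of profiles chosen by fit indices such as the BIC. The following is a minimal sketch with simulated data (not the Ugandan data set; scikit-learn's GaussianMixture stands in for dedicated LPA software such as Mplus):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated scores on five motivation scales for two latent profiles.
low = rng.normal(2.0, 0.4, size=(150, 5))   # "low-quantity" profile
high = rng.normal(4.0, 0.4, size=(350, 5))  # "high-quantity" profile
X = np.vstack([low, high])

# Fit mixtures with 1-6 profiles; pick the number minimizing the BIC.
models = [GaussianMixture(k, covariance_type="diag", random_state=0).fit(X)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))
print("profiles:", best.n_components)

# Posterior profile membership for each student (step 1 of the 3-step
# approach; steps 2-3 relate membership to predictors and outcomes while
# accounting for classification error).
membership = best.predict(X)
print(np.bincount(membership))
```

The diagonal covariance structure mirrors the classical LPA assumption of conditional independence of the indicators within a profile.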
In the third part of the study, the occurrence of different secondary school learner profiles, depending on their various combinations of cognitive and metacognitive learning strategy use, as well as differences in perceived autonomy support, intrinsic motivation, and gender, was examined. Data were collected from 576 9th-grade students. Four learner profiles were identified: competent strategy user, struggling user, surface-level learner, and deep-level learner. Gender differences were noted in students’ use of elaboration and organization strategies to learn Physics, in favour of girls. In terms of profile membership, significant differences in gender, intrinsic motivation, and perceived autonomy support were also noted. Girls were 2.4-2.7 times more likely than boys to be members of the competent strategy user and surface-level learner profiles. Additionally, higher levels of intrinsic motivation predicted an increased likelihood of membership in the deep-level learner profile, whilst higher levels of perceived teacher autonomy support predicted an increased likelihood of membership in the competent strategy user profile compared to the other profiles.
Lastly, in the fourth part, changes in secondary school students’ physics motivation and cognitive learning strategy use across time were examined. Two waves of data were collected from initially 954 9th-grade students through to their 10th grade. A three-step approach to latent transition analysis was used. Generally, students’ motivation decreased from 9th to 10th grade. The qualitative motivation profiles indicated strong within-person stability, whilst the quantitative profiles were relatively less stable. Mostly, students moved from the high-quantity motivation profile to the extrinsically motivated profiles. The cognitive learning strategy use profiles, on the other hand, were moderately stable, with higher within-person stability in the deep-level learner profile. None of the struggling users and surface-level learners transitioned into the deep-level learner profile. Additionally, students who perceived increased support for autonomy from their teachers had a higher likelihood of membership in the competent strategy user profile, whilst those with an increase in individual interest scores had a higher likelihood of membership in the deep-level learner profile.
During the Cenozoic, global cooling and uplift of the Tian Shan, Pamir, and Tibetan plateau modified atmospheric circulation and reduced moisture supply to Central Asia. These changes led to aridification in the region during the Neogene. Afterwards, Quaternary glaciations led to modification of the landscape and runoff.
In the Issyk-Kul basin of the Kyrgyz Tian Shan, the sedimentary sequences reflect the development of the adjacent ranges and local climatic conditions. In this work, I reconstruct the late Miocene – early Pleistocene depositional environment, climate, and lake development in the Issyk-Kul basin using facies analyses and stable δ18O and δ13C isotopic records from sedimentary sections dated by magnetostratigraphy and 26Al/10Be isochron burial dating. Also, I present 10Be-derived millennial-scale modern and paleo-denudation rates from across the Kyrgyz Tian Shan and long-term exhumation rates calculated from published thermochronology data. This allows me to examine spatial and temporal changes in surface processes in the Kyrgyz Tian Shan.
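As background on the isochron burial dating mentioned above (a standard relation from the cosmogenic-nuclide literature, not reproduced from the thesis): once a sample is buried and shielded from cosmic rays, both nuclides decay, and because 26Al decays faster than 10Be, the 26Al/10Be ratio $R$ falls from its pre-burial value $R_0$ with burial time $t$:

$$ R(t) = R_0 \, e^{-(\lambda_{26} - \lambda_{10})\, t} \quad\Longrightarrow\quad t = \frac{\ln(R_0 / R)}{\lambda_{26} - \lambda_{10}}, $$

where $\lambda_{26}$ and $\lambda_{10}$ are the decay constants of 26Al (half-life roughly 0.7 Myr) and 10Be (roughly 1.4 Myr). The isochron variant applies this relation to several clasts from one stratigraphic horizon, which makes it possible to separate the inherited signal from post-burial production.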
In the Issyk-Kul basin, the style of fluvial deposition changed at ca. 7 Ma, and aridification in the basin commenced concurrently, as shown by magnetostratigraphy and the δ18O and δ13C data. Lake formation commenced on the southern side of the basin at ca. 5 Ma, followed by a ca. 2 Ma local depositional hiatus. 26Al/10Be isochron burial dating and paleocurrent analysis show that the Kungey range to the north of the basin grew eastward, leading to a change from fluvial-alluvial deposits to proximal alluvial fan conglomerates at 5-4 Ma in the easternmost part of the basin. This transition occurred at 2.6-2.8 Ma on the southern side of the basin, synchronously with the intensification of the Northern Hemisphere glaciation. The paleo-denudation rates from 2.7-2.0 Ma are as low as long-term exhumation rates, and only the millennial-scale denudation rates record an acceleration of denudation.
This work concludes that the growth of the ranges to the north of the basin created a topographic barrier at ca. 7 Ma, leading to subsequent aridification in the Issyk-Kul basin. Increased subsidence and local tectonically-induced river system reorganization on the southern side of the basin enabled lake formation at ca. 5 Ma, while growth of the Kungey range blocked westward-draining rivers and led to sediment starvation and lake expansion. The denudational response of the Kyrgyz Tian Shan landscape was delayed due to aridity, and only substantial cooling during the late Quaternary glacial cycles led to a notable acceleration of denudation. Currently, increased glacier reduction and runoff control a more rapid denudation of the northern slope of the Terskey range compared to the other ranges of the Kyrgyz Tian Shan.
Gene expression data is analyzed to identify biomarkers, e.g. relevant genes, which serve diagnostic, predictive, or prognostic purposes. Traditional approaches for biomarker detection select distinctive features from the data based exclusively on the signals therein, facing multiple shortcomings with regard to overfitting, biomarker robustness, and actual biological relevance. Prior knowledge approaches are expected to address these issues by incorporating prior biological knowledge, e.g. on gene-disease associations, into the actual analysis. However, prior knowledge approaches are currently not widely applied in practice because they are often use-case specific and seldom applicable in a different scope. This leads to a lack of comparability of prior knowledge approaches, which in turn makes it currently impossible to assess their effectiveness in a broader context.
Our work addresses the aforementioned issues with three contributions. Our first contribution provides formal definitions for both prior knowledge and the flexible integration thereof into the feature selection process. Central to these concepts is the automatic retrieval of prior knowledge from online knowledge bases, which allows for streamlining the retrieval process and agreeing on a uniform definition for prior knowledge. We subsequently describe novel and generalized prior knowledge approaches that are flexible regarding the used prior knowledge and applicable to varying use case domains. Our second contribution is the benchmarking platform Comprior. Comprior applies the aforementioned concepts in practice and allows for flexibly setting up comprehensive benchmarking studies for examining the performance of existing and novel prior knowledge approaches. It streamlines the retrieval of prior knowledge and allows for combining it with prior knowledge approaches. Comprior demonstrates the practical applicability of our concepts and further fosters the overall development and comparability of prior knowledge approaches. Our third contribution is a comprehensive case study on the effectiveness of prior knowledge approaches. For that, we used Comprior and tested a broad range of both traditional and prior knowledge approaches in combination with multiple knowledge bases on data sets from multiple disease domains. Ultimately, our case study constitutes a thorough assessment of a) the suitability of selected knowledge bases for integration, b) the impact of prior knowledge being applied at different integration levels, and c) the improvements in terms of classification performance, biological relevance, and overall robustness.
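To illustrate the general idea of integrating prior knowledge into feature selection (a hypothetical sketch, not Comprior's actual API; the gene names and gene-disease weights below are invented for demonstration):

```python
import numpy as np
from sklearn.feature_selection import f_classif

def prior_weighted_selection(X, y, genes, prior_weights, k=10, alpha=0.5):
    """Rank features by blending a data-driven score with prior knowledge.

    prior_weights: dict mapping gene name -> relevance weight in [0, 1],
    e.g. derived from gene-disease associations in a knowledge base.
    alpha: 0 = data signal only, 1 = prior knowledge only.
    """
    f_scores, _ = f_classif(X, y)              # univariate data signal
    f_scores = f_scores / np.nanmax(f_scores)  # normalize to [0, 1]
    prior = np.array([prior_weights.get(g, 0.0) for g in genes])
    combined = (1 - alpha) * f_scores + alpha * prior
    return [genes[i] for i in np.argsort(combined)[::-1][:k]]

# Toy usage with made-up expression data and invented prior weights.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
y = rng.integers(0, 2, size=40)
genes = ["TP53", "BRCA1", "GAPDH", "EGFR", "ACTB"]
weights = {"TP53": 0.9, "BRCA1": 0.8, "EGFR": 0.6}  # hypothetical
print(prior_weighted_selection(X, y, genes, weights, k=3))
```

This corresponds to integration at the scoring level; as described above, prior knowledge can also be applied at other integration levels, which is one of the dimensions the case study examines.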
In summary, our contributions demonstrate that generalized concepts for prior knowledge and a streamlined retrieval process improve the applicability of prior knowledge approaches. Results from our case study show that the integration of prior knowledge positively affects biomarker results, particularly regarding their robustness. Our findings provide the first in-depth insights on the effectiveness of prior knowledge approaches and build a valuable foundation for future research.
Insight by de—sign
The calculus of design is a diagrammatic approach towards the relationship between design and insight. The thesis I am evolving is that insights are not discovered, gained, explored, revealed, or mined, but are operatively de—signed. The de in design neglects the contingency of the space towards the sign. The — is the drawing of a distinction within the operation. Space collapses through the negativity of the sign; the command draws a distinction that neglects the space for the form's sake. The operation to de—sign is counterintuitively not the creation of signs, but their removal, the exclusion of possible sign propositions of space. De—sign is thus an act of exclusion; the possibilities of space are crossed into form.
Lithium-ion capacitors (LICs) are promising energy storage devices that asymmetrically combine an anode with a high energy density, close to that of lithium-ion batteries, and a cathode with a high power density and long-term stability, close to those of supercapacitors. For the further improvement of LICs, the development of electrode materials with hierarchical porosity, nitrogen-rich lithiophilic sites, and good electrical conductivity is essential. Nitrogen-rich all-carbon composite hybrids satisfy these conditions while offering high stability and tunability, promising a breakthrough towards high-performance LICs. In this thesis, two different all-carbon composites are proposed to unveil how the pore structure of lithiophilic composites influences the properties of LICs. Firstly, a composite of 0-dimensional zinc-templated carbon (ZTC) and hexaazatriphenylene-hexacarbonitrile (HAT) is examined to determine how the pore structure is connected to the Li-ion storage properties of an LIC electrode. As the pore structure of the HAT/ZTC composite is easily tunable via the synthesis conditions and the ratio of the two components, the results allow deeper insights into Li-ion dynamics at different porosities, as well as low-cost synthesis through optimization of the HAT:ZTC ratio. Secondly, a composite of 1-dimensional nanoporous carbon fiber (ACF) and cost-effective melamine is proposed as a promising all-carbon hybrid for large-scale application. Since ACF has ultra-micropores, numerical structure-property relationships are calculated not only from the total pore volume but, more specifically, from the ultra-micropore volume. From these results, it becomes possible to understand how hybrid all-carbon composites interact with lithium ions at the nanoscale and how structural properties affect the energy storage performance. This understanding, derived from simple materials modeling, provides a clue for designing practical hybrid materials for efficient LIC electrodes.
Strings of words can correspond to more than one interpretation or underlying structure, which makes them ambiguous. Prosody can be used to resolve this structural ambiguity. This dissertation investigates the use of prosodic cues in the domains of fundamental frequency (f0) and duration to disambiguate between two interpretations of ambiguous structures when speakers addressed different interlocutors. The dissertation comprises three production studies and one comprehension study.
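To make the two cue domains concrete, here is a minimal sketch of how f0 and durations might be extracted from a recording (librosa's pYIN tracker is one possible tool; the file name and interval times are placeholders, and interval boundaries would normally come from manual annotation):

```python
import librosa
import numpy as np

# Placeholder file: one production of "Name1 and Name2 and Name3".
y, sr = librosa.load("coordinate_production.wav", sr=None)

# f0 contour via the pYIN algorithm (unvoiced frames are NaN).
f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
times = librosa.times_like(f0, sr=sr)

# Hypothetical, manually annotated interval for Name1 (in seconds).
name1_start, name1_end = 0.20, 0.65
in_name1 = (times >= name1_start) & (times < name1_end)

# Duration cue: final lengthening would show up as a longer interval;
# f0 cue: e.g., the mean pitch or the rise/fall across the name.
print("Name1 duration (s):", name1_end - name1_start)
print("Name1 mean f0 (Hz):", np.nanmean(f0[in_name1]))
```

Comparing such measures across grouping conditions (and across interlocutor contexts) is the kind of analysis the production studies report.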
Prosodic disambiguation was studied with a focus on German name sequences of three names (coordinates) in two conditions: without (Name1 and Name2 and Name3) and with internal grouping of the first two names ([Name1 and Name2] and Name3). The study of coordinates was complemented with production data of locally ambiguous sentences with a case-ambiguous first noun phrase.
Variability was studied in a controlled setting: Productions were elicited with a within-subject manipulation of context in a referential communication task in order to evoke prosodic adaptations to different conversational contexts. Context had five levels: interlocutors from three age groups (child, young adult, elderly adult) with German as their L1 and without background white noise, the young adult with background white noise, and a young adult without German as their L1. Variability was explored at different levels: within a group of young individuals (intra-group level), within and between young individuals (intra-individual and inter-individual level, respectively), and between the group of young speakers and a group of older speakers (inter-group level).
Our data replicate the use of the three prosodic cues (f0-movement, final lengthening, and pause) in the productions of young adult speakers and extend their use to the productions of older adult speakers. Both age groups distinguished consistently between the two coordinate conditions. Prosodic grouping in production was evident not only on the group-final Name2 but also at earlier stages in the utterance, on the group-internal Name1 (early cues). For some speakers, listeners were able to decode these early cues effectively, reliably predicting the upcoming structure after listening to Name1 only. Thus, prosodic grouping appears to be a globally marked phenomenon building up along the utterance. The internal structure of coordinates was disambiguated irrespective of the conversational context. In our data, speakers only slightly modified the disambiguating prosodic cues across the different contexts, and listeners were unable to identify to which interlocutor a sequence had been produced. We interpret this intra-individual consistency in the production of disambiguating prosodic cues as support for a strong link between prosody and syntax. The findings support models in favour of the situational independence of disambiguating prosody. All speakers reliably marked the distinction between the grouping conditions with at least one of the three prosodic cues investigated, and most speakers used at least two of these cues. Further, individual differences in prosodic grouping did not lead to difficulties in recovering the grouping in comprehension. Taken together, these findings support the existence of a phonological category of prosodic grouping.