The development of speaking competence is widely regarded as a central aspect of second language (L2) learning. It may be questioned, however, whether the currently predominant ways of conceptualising the term fully capture the complexity of the construct: Although there is growing recognition that language primarily constitutes a tool for communication and participation in social life, as yet it is rare for conceptualisations of speaking competence to incorporate the ability to interact and co-construct meaning with co-participants. Accordingly, skills allowing for the successful accomplishment of interactional tasks (such as orderly speaker change, and resolving hearing and understanding trouble) also remain largely unrepresented in language teaching and assessment. As fostering the ability to successfully use the L2 within social interaction should arguably be a main objective of language teaching, it appears pertinent to broaden the construct of speaking competence by incorporating interactional competence (IC). Despite there being a growing research interest in the conceptualisation and development of (L2) IC, much of the materials and instruments required for its teaching and assessment, and thus for fostering a broader understanding of speaking competence in the L2 classroom, still await development. This book introduces an approach to the identification of candidate criterial features for the assessment of EFL learners' L2 repair skills. Based on a corpus of video-recorded interaction between EFL learners, and following conversation-analytic and interactional-linguistic methodology as well as drawing on basic premises of research in the framework of Conversation Analysis for Second Language Acquisition, differences between (groups of) learners in terms of their L2 repair conduct are investigated through qualitative and inductive analyses. Candidate criterial features are derived from the analysis results. This book not only contributes to the operationalisation of L2 IC (and of L2 repair skills in particular), but also lays groundwork for the construction of assessment scales and rubrics geared towards the evaluation of EFL learners' L2 interactional skills.
Knowledge graphs are structured repositories of knowledge that store facts about the general world or a particular domain in terms of entities and their relationships. Owing to the heterogeneity of use cases that are served by them, there arises a need for the automated construction of domain-specific knowledge graphs from texts. While there have been many research efforts towards open information extraction for automated knowledge graph construction, these techniques do not perform well in domain-specific settings. Furthermore, regardless of whether they are constructed automatically from specific texts or based on real-world facts that are constantly evolving, all knowledge graphs inherently suffer from incompleteness as well as errors in the information they hold.
This thesis investigates the challenges encountered during knowledge graph construction and proposes techniques for their curation (a.k.a. refinement), including the correction of semantic ambiguities and the completion of missing facts. Firstly, we leverage existing approaches for the automatic construction of a knowledge graph in the art domain with open information extraction techniques and analyse their limitations. In particular, we focus on the challenging task of named entity recognition for artwork titles and show empirical evidence of performance improvement with our proposed solution for the generation of annotated training data.
Towards the curation of existing knowledge graphs, we identify the issue of polysemous relations that represent different semantics based on the context. Having concrete semantics for relations is important for downstream applications (e.g. question answering) that are supported by knowledge graphs. Therefore, we define the novel task of finding fine-grained relation semantics in knowledge graphs and propose FineGReS, a data-driven technique that discovers potential sub-relations with fine-grained meaning from existing polysemous relations. We leverage knowledge representation learning methods that generate low-dimensional vectors (or embeddings) for knowledge graphs to capture their semantics and structure. The efficacy and utility of the proposed technique are demonstrated by comparing it with several baselines on the entity classification use case.
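The abstract does not name a specific embedding model; as a hedged illustration of the kind of knowledge representation learning referred to here, the sketch below scores triples with the classic TransE translation assumption (entities, relations, and vectors are illustrative, not taken from the thesis):

```python
import numpy as np

# Toy embeddings (hypothetical values); in practice they are learned by
# minimising a margin-based ranking loss over the knowledge graph triples.
dim = 4
rng = np.random.default_rng(0)
entities = {e: rng.standard_normal(dim) for e in ["Picasso", "Guernica", "Spain"]}
relations = {r: rng.standard_normal(dim) for r in ["created", "born_in"]}

def transe_score(head, relation, tail):
    """TransE plausibility score: -||h + r - t||; higher means more plausible."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -np.linalg.norm(h + r - t)

# Rank candidate tails for the link-prediction query (Picasso, created, ?).
candidates = sorted(entities, key=lambda t: transe_score("Picasso", "created", t), reverse=True)
print(candidates)
```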
Further, we explore the semantic representations in knowledge graph embedding models. In the past decade, these models have shown state-of-the-art results for the task of link prediction in the context of knowledge graph completion. In view of the popularity and widespread application of the embedding techniques not only for link prediction but also for different semantic tasks, this thesis presents a critical analysis of the embeddings by quantitatively measuring their semantic capabilities. We investigate and discuss the reasons for the shortcomings of embeddings in terms of the characteristics of the underlying knowledge graph datasets and the training techniques used by popular models.
Following up on this, we propose ReasonKGE, a novel method for generating semantically enriched knowledge graph embeddings by taking into account the semantics of the facts that are encapsulated by an ontology accompanying the knowledge graph. With a targeted, reasoning-based method for generating negative samples during the training of the models, ReasonKGE is able to not only enhance the link prediction performance, but also reduce the number of semantically inconsistent predictions made by the resultant embeddings, thus improving the quality of knowledge graphs.
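The summary does not spell out how ReasonKGE selects its negative samples; the following sketch merely illustrates the general idea of reasoning-guided negative sampling under assumed domain and range constraints (entities, relations, and the simplified consistency check are hypothetical; the actual method relies on reasoning over the accompanying ontology):

```python
# Hypothetical domain/range constraints, as an ontology reasoner might expose them.
DOMAIN = {"painted": "Artist", "located_in": "Artwork"}
RANGE = {"painted": "Artwork", "located_in": "Museum"}
TYPES = {"Picasso": "Artist", "Guernica": "Artwork", "Reina_Sofia": "Museum"}

def inconsistent(head, relation, tail):
    """Flag a triple whose head/tail types violate the relation's domain/range."""
    return TYPES.get(head) != DOMAIN[relation] or TYPES.get(tail) != RANGE[relation]

def targeted_negatives(positive, candidate_entities):
    """Corrupt the tail of a positive triple, keeping only corruptions that the
    (simplified) consistency check rejects -- these serve as hard negatives."""
    head, relation, tail = positive
    return [(head, relation, e) for e in candidate_entities
            if e != tail and inconsistent(head, relation, e)]

print(targeted_negatives(("Picasso", "painted", "Guernica"),
                         ["Reina_Sofia", "Picasso", "Guernica"]))
```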
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to our employer, school or university influences where we live, and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
The Antarctic ice sheet is the largest freshwater reservoir worldwide. If it were to melt completely, global sea levels would rise by about 58 m. Calculation of projections of the Antarctic contribution to sea level rise under global warming conditions is an ongoing effort which yields large ranges in predictions. Among the reasons for this are uncertainties related to the physics of ice sheet modeling. These uncertainties include two processes that could lead to runaway ice retreat: the Marine Ice Sheet Instability (MISI), which causes rapid grounding line retreat on retrograde bedrock, and the Marine Ice Cliff Instability (MICI), in which tall ice cliffs become unstable and calve off, exposing even taller ice cliffs.
In my thesis, I investigated both marine instabilities (MISI and MICI) using the Parallel Ice Sheet Model (PISM), with a focus on MICI.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, however, appearance is a parameter that cannot be addressed, which calls for new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods have been developed and presented in recent years that utilise 2D imaging data to identify target cells within a sample. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
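The abstract does not state the governing equation; for orientation, the time-averaged dielectrophoretic force on a spherical particle is commonly written as (textbook expression, not quoted from the thesis):

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3 \,
\operatorname{Re}\!\left[ K(\omega) \right] \, \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^2 ,
\qquad
K(\omega) = \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\,\varepsilon_m^{*}}
```

Here r is the particle radius, ε_m the permittivity of the medium, E_rms the root-mean-square field, and K(ω) the Clausius-Mossotti factor formed from the complex permittivities ε* = ε − iσ/ω of particle and medium; the sign of Re[K(ω)] determines whether cells are attracted to or repelled from field maxima.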
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
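As a minimal sketch of what restricting the target set means operationally, assuming a classifier that already outputs calibrated class probabilities (names and values are illustrative, not the thesis' implementation):

```python
def dynamic_predict(class_probs, allowed):
    """Restrict a calibrated posterior to the currently admissible class subset M
    and renormalise before predicting, without retraining the underlying model."""
    restricted = {c: p for c, p in class_probs.items() if c in allowed}
    if not restricted:
        raise ValueError("M must be a non-empty subset of the trained classes")
    total = sum(restricted.values())
    restricted = {c: p / total for c, p in restricted.items()}
    return max(restricted, key=restricted.get)

# Example: the full model knows four classes, but process context rules out two of them.
probs = {"A": 0.40, "B": 0.35, "C": 0.15, "D": 0.10}
print(dynamic_predict(probs, allowed={"B", "C"}))  # -> "B"
```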
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and in particular corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets.
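For orientation, a bare-bones Platt-scaling fit maps raw classifier scores to probabilities with a sigmoid; the sketch below uses a plain (regularised) logistic regression on held-out scores and omits Platt's original target-value smoothing (all data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic held-out scores (e.g. SVM decision values) and true binary labels.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(-1.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
labels = np.concatenate([np.zeros(200), np.ones(200)])

# Platt scaling: fit p(y=1 | s) = 1 / (1 + exp(A*s + B)) on a calibration set.
calibrator = LogisticRegression()
calibrator.fit(scores.reshape(-1, 1), labels)

# Calibrated posterior estimates for new scores.
new_scores = np.array([[-2.0], [0.0], [2.0]])
print(calibrator.predict_proba(new_scores)[:, 1])
```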
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations to be proven for the overall combinations. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined in a weighted manner. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
The present dissertation conducts empirical research on the relationship between urban life and its economic costs, especially for the environment. On the one hand, existing gaps in research on the influence of population density on air quality are closed and, on the other hand, innovative policy measures in the transport sector are examined that are intended to make metropolitan areas more sustainable. The focus is on air pollution, congestion and traffic accidents, which are important for general welfare issues and represent significant cost factors for urban life. They affect a significant proportion of the world's population. While 55% of the world's people already lived in cities in 2018, this share is expected to reach approximately 68% by 2050.
The four self-contained chapters of this thesis can be divided into two sections: Chapters 2 and 3 provide new causal insights into the complex interplay between urban structures and air pollution. Chapters 4 and 5 then examine policy measures to promote non-motorised transport and their influence on air quality as well as congestion and traffic accidents.
Neural conversation models aim to predict appropriate contributions to a (given) conversation by using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011): they investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and therefore research in this area has been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained Self-Attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a transformation from trying to somehow carry on a conversation to generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate on the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing Grice's maxims as demands on conversation models: being fluent, informative, consistent towards given context, coherent, and following a social norm. We then identify different targets (or intervention points) for achieving conversational appropriateness by reviewing recent research in the field.
In this thesis, we investigate consistency towards context, one aspect of our interpretation of appropriateness, in greater detail.
During the research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use the data in this thesis to gather new insights into context-augmented dialogue generation.
We further introduced a new way of encoding context within Self-Attention-based neural networks. For that, we elaborate on the issue of space complexity arising from knowledge graphs and propose a concise encoding strategy for structured context inspired by graph neural networks (Gilmer et al., 2017) to reduce the space complexity of the additional context. We discuss limitations of context augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
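To make the space-complexity issue concrete, the following sketch shows the naive alternative of linearising context triples into a flat token sequence, whose length grows with the number of triples; the graph-neural-network-inspired encoding proposed in the thesis is designed precisely to avoid this growth (triples and separator tokens are illustrative):

```python
def linearise_triples(triples, sep="<triple>", rel_sep="<rel>"):
    """Naively flatten (subject, relation, object) triples into one token string
    that could be prepended to the dialogue history of a Transformer model."""
    parts = []
    for subj, rel, obj in triples:
        parts.append(f"{sep} {subj} {rel_sep} {rel} {rel_sep} {obj}")
    return " ".join(parts)

context = [
    ("The Matrix", "genre", "science fiction"),
    ("The Matrix", "rating", "8.7"),
    ("Keanu Reeves", "acts_in", "The Matrix"),
]
# Sequence length (and hence attention cost) grows linearly with the triple count.
print(linearise_triples(context))
```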
Lastly, we analyzed the potential of reinforcement and transfer learning to improve context-consistency for neural conversation models. We find that current reward functions need to be more precise to enable the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all miRNA pre-library-preparation quality controls were successful and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that they reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal range and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. The differentially expressed and filtered miRNAs were subjected to miRDB for target prediction. Four miRNAs were identified: three of them were downregulated (hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136), while one was upregulated (hsa-miR-550a-3p). miRNA target prediction showed that chronic pain in polyneuropathy might be the result of a combination of miRNA-mediated high blood flow/pressure and neural activity dysregulations. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
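The summary does not name the differential-expression tool used; purely to illustrate the filtering step (log2 fold change plus multiple-testing-corrected p-values), a simplified sketch on a hypothetical count table could look like this:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical normalised count matrix: rows = miRNAs, columns = samples.
rng = np.random.default_rng(1)
mirnas = ["hsa-miR-3135b", "hsa-miR-584-5p", "hsa-miR-12136", "hsa-miR-550a-3p"]
patients = rng.lognormal(mean=4.0, sigma=0.5, size=(4, 10))
controls = rng.lognormal(mean=4.5, sigma=0.5, size=(4, 10))

log2_fc = np.log2(patients.mean(axis=1) / controls.mean(axis=1))
pvals = np.array([stats.mannwhitneyu(patients[i], controls[i]).pvalue
                  for i in range(len(mirnas))])
padj = multipletests(pvals, method="fdr_bh")[1]  # Benjamini-Hochberg correction

for name, fc, p in zip(mirnas, log2_fc, padj):
    direction = "down" if fc < 0 else "up"
    print(f"{name}: log2FC={fc:+.2f} ({direction}regulated), adj. p={p:.3f}")
```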
Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 during PKA phosphorylation of ARMS, was characterized. To this end, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the interaction rate of TRPV1/ARMS. While phosphorylation of ARMS at T903 does not increase the interaction rate with TRPV1, ARMS S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the interaction rate of TRPV1/ARMS, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, the western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical therapeutic analgesic alternative to stop ARMS-mediated TRPV1 sensitization.
In this thesis, I present my contributions to the field of ultrafast molecular spectroscopy. Using the molecule 2-thiouracil as an example, I use ultrashort x-ray pulses from free-electron lasers to study the relaxation dynamics of gas-phase molecular samples. Taking advantage of the element and site selectivity typical of x-rays, I investigate the charge flow and geometrical changes in the excited states of 2-thiouracil.
In order to understand the photoinduced dynamics of molecules, knowledge about the ground-state structure and the relaxation after photoexcitation is crucial. Therefore, a part of this thesis covers the electronic ground-state spectroscopy of mainly 2-thiouracil to provide the basis for the time-resolved experiments. Many of the previously published studies that focused on the gas-phase time-resolved dynamics of thionated uracils after UV excitation relied on information from solution-phase spectroscopy to determine the excitation energies. This is not an optimal strategy, as solvents alter the absorption spectrum and, hence, there is no guarantee that liquid-phase spectra resemble the gas-phase spectra. Therefore, I measured the UV-absorption spectra of all three thionated uracils to provide a gas-phase reference and, in combination with calculations, we determined the excited states involved in the transitions.
In contrast to the UV absorption, the literature on the x-ray spectroscopy of thionated uracils is sparse. Thus, we measured static photoelectron, Auger-Meitner and x-ray absorption spectra at the sulfur L edge before or in parallel with the time-resolved experiments we performed at FLASH (DESY, Hamburg). In addition, (so far unpublished) measurements performed at the synchrotron SOLEIL (France) have been included in this thesis; they show the spin-orbit splitting of the S 2p photoline and its satellite, which was not observed at the free-electron laser.
The relaxation of 2-thiouracil has been studied extensively in recent years with ultrafast visible and ultraviolet methods showing the ultrafast nature of the molecular process after photoexcitation. Ultrafast spectroscopy probing the core-level electrons provides a complementary approach to common optical ultrafast techniques. The method inherits its local sensitivity from the strongly localised core electrons. The core energies and core-valence transitions are strongly affected by local valence charge and geometry changes, and past studies have utilised this sensitivity to investigate the molecular process reflected by the ultrafast dynamics. We have built an apparatus that provides the requirements to perform time-resolved x-ray spectroscopy on molecules in the gas phase. With the apparatus, we performed UV-pump x-ray-probe electron spectroscopy on the S 2p edge of 2-thiouracil using the free-electron laser FLASH2. While the UV triggers the relaxation dynamics, the x-ray probes the single sulfur atom inside the molecule. I implemented photoline self-referencing for the photoelectron spectral analysis. This minimises the spectral jitter of the FEL, which is due to the underlying self-amplified spontaneous emission (SASE) process. With this approach, we were not only able to study dynamical changes in the binding energy of the electrons but also to detect an oscillatory behaviour in the shift of the observed photoline, which we associate with non-adiabatic dynamics involving several electronic states. Moreover, we were able to link the UV-induced shift in binding energy to the local charge flow at the sulfur which is directly connected to the electronic state. Furthermore, the analysis of the Auger-Meitner electrons shows that energy shifts observed at early stages of the photoinduced relaxation are related to the geometry change in the molecule. More specifically, the observed increase in kinetic energy of the Auger-Meitner electrons correlates with a previously predicted C=S bond stretch.
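The summary only names the technique; as a hedged numerical sketch of what photoline self-referencing can look like, each single-shot spectrum is shifted so that the centroid of a reference photoline sits at its nominal energy, thereby removing shot-to-shot SASE jitter (all values and the energy window are synthetic assumptions):

```python
import numpy as np

def self_reference(energy, spectra, ref_window, nominal_energy):
    """Shift each single-shot spectrum so that the centroid of the reference
    photoline (within ref_window, in eV) lands on its nominal energy."""
    lo, hi = ref_window
    mask = (energy >= lo) & (energy <= hi)
    corrected = np.empty_like(spectra)
    for i, shot in enumerate(spectra):
        centroid = np.sum(energy[mask] * shot[mask]) / np.sum(shot[mask])
        shift = nominal_energy - centroid            # per-shot jitter estimate
        corrected[i] = np.interp(energy, energy + shift, shot)
    return corrected

# Synthetic example: a Gaussian photoline whose centre jitters from shot to shot.
energy = np.linspace(160.0, 175.0, 600)              # eV, hypothetical S 2p region
jitter = np.random.default_rng(2).normal(0.0, 0.3, size=20)
spectra = np.array([np.exp(-((energy - (168.0 + dj)) ** 2) / 0.5) for dj in jitter])
aligned = self_reference(energy, spectra, ref_window=(166.0, 170.0), nominal_energy=168.0)
```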
Why do exercises in collaborative governance often witness more impasse than advantage? This cumulative dissertation undertakes a micro-level analysis of collaborative governance to tackle this research puzzle. It situates micropolitics at the very center of analysis: a wide range of activities, interventions, and tactics used by actors – be they conveners, facilitators, or participants – to shape the collaborative exercise. It is by focusing on these daily minutiae, and on the consequences that they bring along, the study argues, that we can better understand why and how collaboration can become stuck or unproductive. To do so, the foundational part of this dissertation (Article 1) uses power as a sensitizing concept to investigate the micro-dynamics that shape collaboration. It develops an analytical approach to advance the study of collaborative governance at the empirical level under a power-sensitive and process-oriented perspective. The subsequent articles follow the dissertation's red thread of investigating the micropolitics of collaborative governance by showing facilitation artefacts' interrelatedness and contribution to the potential success or failure of collaborative arrangements (Article 2); and by examining the specialized knowledge, skills and practices mobilized when designing a collaborative process (Article 3). The work is based on an abductive research approach, tacking back and forth between empirical data and theory, and offers a repertoire of concepts – from analytical terms (designed and emerging interaction orders, flows of power, arenas for power), to facilitation practices (scripting, situating, and supervising) and types of knowledge (process expertise) – to illustrate and study the detailed and constant work (and rework) that surrounds collaborative arrangements. These concepts sharpen the way researchers can look at, observe, and understand collaborative processes at a micro level. The thesis thereby elucidates the subtleties of power, which may be overlooked if we focus only on outcomes rather than the processes that engender them, and supports efforts to identify potential sources of impasse.
The Arctic nearshore zone plays a key role in the carbon cycle. Organic-rich sediments get eroded off permafrost-affected coastlines and can be directly transferred to the nearshore zone. Permafrost in the Arctic stores a large amount of organic matter and is vulnerable to thermo-erosion, which is expected to increase due to climate change. This will likely result in higher sediment loads in nearshore waters and has the potential to alter local ecosystems by limiting light transmission into the water column, thus restricting primary production to its topmost part, and by increasing nutrient export from coastal erosion. Greater organic matter input could result in the release of greenhouse gases to the atmosphere. Climate change also acts upon the fluvial system, leading to greater discharge to the nearshore zone. It also leads to decreasing sea-ice cover, which will both increase wave energy and lengthen the open-water season. Yet, knowledge of these processes and the resulting impact on the nearshore zone is scarce, because access to and instrument deployment in the nearshore zone are challenging.
Remote sensing can alleviate these issues by providing rapid data delivery in otherwise inaccessible areas. However, the waters in the Arctic nearshore zone are optically complex, with multiple influencing factors, such as organic-rich suspended sediments, colored dissolved organic matter (cDOM), and phytoplankton. The goal of this dissertation was to use remotely sensed imagery to monitor processes related to turbidity caused by suspended sediments in the Arctic nearshore zone. In-situ measurements of water-leaving reflectance and surface water turbidity were used to calibrate a semi-empirical algorithm that retrieves turbidity from satellite imagery. Based on this algorithm and ancillary ocean and climate variables, the mechanisms underpinning nearshore turbidity in the Arctic were identified at a resolution not achieved before.
The calibration of the Arctic Nearshore Turbidity Algorithm (ANTA) was based on in-situ measurements from the coastal and inner-shelf waters around Herschel Island Qikiqtaruk (HIQ) in the western Canadian Arctic from the summer seasons 2018 and 2019. It performed better than existing algorithms, developed for global applications, in retrieving turbidity from remotely sensed imagery. These existing algorithms lacked validation data from permafrost-affected waters and were thus not able to reflect the complexity of Arctic nearshore waters. The ANTA has a higher sensitivity towards the lowest turbidity values, which is an asset for identifying sediment pathways in the nearshore zone. Its transferability to areas beyond HIQ was successfully demonstrated using turbidity measurements matching satellite image recordings from Adventfjorden, Svalbard. The ANTA is a powerful tool that provides robust turbidity estimations in a variety of Arctic nearshore environments.
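The exact functional form and coefficients of the ANTA are not given in this summary; a minimal sketch of calibrating a semi-empirical turbidity algorithm of the widely used saturating form T(ρw) = A·ρw/(1 − ρw/C) against in-situ matchups (synthetic here) could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def turbidity_model(rho_w, A, C):
    """Semi-empirical turbidity (e.g. FNU) as a function of water-leaving reflectance."""
    return A * rho_w / (1.0 - rho_w / C)

# Hypothetical matchups: in-situ turbidity vs. water-leaving reflectance in a red band.
rho_w = np.array([0.005, 0.01, 0.02, 0.04, 0.06, 0.09, 0.12])
turb_insitu = np.array([1.8, 3.9, 8.5, 19.0, 33.0, 58.0, 95.0])

(A_fit, C_fit), _ = curve_fit(turbidity_model, rho_w, turb_insitu, p0=(300.0, 0.2))
print(f"A = {A_fit:.1f}, C = {C_fit:.3f}")

# Apply the calibrated model to a reflectance value retrieved from satellite imagery.
print(turbidity_model(0.03, A_fit, C_fit))
```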
Drivers of nearshore turbidity in the Arctic were analyzed by combining ANTA results from the summer season 2019 at HIQ with ocean and climate variables obtained from the weather station at HIQ, the ERA5 reanalysis database, and the Mackenzie River discharge. ERA5 reanalysis data were obtained as domain averages over the Canadian Beaufort Shelf. Nearshore turbidity was linearly correlated with wind speed, significant wave height and wave period. Interestingly, nearshore turbidity was only correlated with wind speed over the shelf, but not with the in-situ measurements from the weather station at HIQ. This shows that nearshore turbidity, albeit of limited spatial extent, is influenced by the weather conditions multiple kilometers away rather than in its direct vicinity. The large influence of wave energy on nearshore turbidity indicates that freshly eroded material off the coast is a major contributor to the nearshore sediment load. This contrasts with results from the temperate and tropical oceans, where tides and currents are the major drivers of nearshore turbidity. The Mackenzie River discharge was not identified as a driver of nearshore turbidity in 2019; however, the analysis of 30 years of Landsat archive imagery from 1986 to 2016 suggests a direct link between the prevailing wind direction, which heavily influences the Mackenzie River plume extent, and nearshore turbidity around HIQ. This discrepancy could be caused by the abnormal discharge behavior of the Mackenzie River in 2019.
This dissertation has substantially advanced the understanding of suspended sediment processes in the Arctic nearshore zone and provided new monitoring tools for future studies. The presented results will help to understand the role of the Arctic nearshore zone in the carbon cycle under a changing climate.
The Pamir Frontal Thrust (PFT), located in the Trans Alai range in Central Asia, is the principal active fault of the intracontinental India-Eurasia convergence zone and constitutes the northernmost boundary of the Pamir orogen at the NW edge of this collision zone. Frequent seismic activity and ongoing crustal shortening reflect the northward propagation of the Pamir into the intermontane Alai Valley. Quaternary deposits are being deformed and uplifted by the advancing thrust front of the Trans Alai range. The Alai Valley separates the Pamir range front from the Tien Shan mountains in the north; it is the vestige of a formerly contiguous basin that linked the Tadjik Depression in the west with the Tarim Basin in the east. GNSS measurements across the Central Pamir document a shortening rate of ~25 mm/yr, with a dramatic decrease of ~10-15 mm/yr over a short distance across the northernmost Trans Alai range. This suggests that almost half of the shortening in the greater Pamir-Tien Shan collision zone is absorbed along the PFT. The short-term (geodetic) and long-term (geologic) shortening rates across the northern Pamir appear to be at odds, resulting in an apparent slip-rate discrepancy along the frontal fault system of the Pamir. Moreover, present-day seismicity and historical records have not revealed great Mw > 7 earthquakes that might be expected with such significant slip accommodation. In contrast, recent and historic earthquakes exhibit complex rupture patterns within and across the seismotectonic segments bounding the Pamir mountain front, challenging our understanding of fault interaction and the seismogenic potential of this area, and leaving the relationships between seismicity and the geometry of the thrust front poorly understood.
In this dissertation I employ different approaches to assess the seismogenic behavior along the PFT. Firstly, I provide paleoseismic data from five trenches across the central PFT segment (cPFT) and compute a segment-wide earthquake chronology over the past 16 kyr. This novel dataset provides important insights into the recurrence, magnitude, and rupture extent of past earthquakes along the cPFT. I interpret five, possibly six paleoearthquakes that have ruptured the Pamir mountain front since ∼7 ka and 16 ka, respectively. My results indicate that at least three major earthquakes ruptured the full-segment length and possibly crossed segment boundaries with a recurrence interval of ∼1.9 kyr and potential magnitudes of up to Mw 7.4. Importantly, I did not find evidence for great (i.e., Mw ≥8) earthquakes.
Secondly, I combine my paleoseismic results with morphometric analyses to establish a segment-wide distribution of the cumulative vertical separation along offset fluvial terraces, and I model a long-term slip rate for the cPFT. My investigations reveal discrepancies between the extents of slip and rupture during apparent partial segment ruptures in the western half of the cPFT. Combined with significantly higher fault scarp offsets in this sector of the cPFT, these observations indicate a more mature fault section with a potential for future fault linkage. I estimate an average rate of horizontal motion for the cPFT of 4.1 ± 1.5 mm/yr during the past ∼5 kyr, which does not fully match the GNSS-derived present-day shortening rate of ∼10 mm/yr. This suggests a complex distribution of strain accumulation and potential slip partitioning between the cPFT and additional faults and folds within the Pamir that may be associated with a partially locked regional décollement.
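For orientation, converting dated, vertically offset terraces into a horizontal shortening rate follows the simple geometric relation below; the dip and offset values are illustrative assumptions, not quoted from the thesis:

```latex
\mathrm{rate}_{\mathrm{horizontal}} = \frac{\mathrm{VS}}{\tan(\delta)\; t},
\qquad \text{e.g.}\quad
\frac{12\ \mathrm{m}}{\tan(30^{\circ}) \cdot 5\ \mathrm{kyr}} \approx 4.2\ \mathrm{mm/yr}
```

where VS is the cumulative vertical separation of a terrace, δ the fault dip, and t the terrace age.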
The third part of the thesis provides new insights regarding the surface rupture of the 2008 Mw 6.6 Nura earthquake, which ruptured along the eastern PFT sector. I explore this rupture in the context of its structural complexity by combining extensive field observations with high-resolution digital surface models. I provide a map of the rupture extent, net slip measurements, and updated regional geological observations. Based on these data, I propose a tectonic model in which the rupture is associated with secondary flexural-slip faulting along steeply dipping bedding of folded Paleogene sedimentary strata, related to deformation along a deeper blind thrust. Here, the strain release seems to be transferred from the PFT towards older inherited basement structures within the area of the advanced Pamir-Tien Shan collision.
The extensive research of my dissertation results in a paleoseismic database spanning the past ~16 kyr, which contributes to the understanding of the seismogenic behavior of the PFT, and also to that of segmented thrust-fault systems in active collisional settings. My observations underscore the importance of combining different methodological approaches in the geosciences, especially in structurally complex tectonic settings like the northern Pamir. The discrepancy between GNSS-derived present-day deformation rates and those from different geological archives in the central part, as well as the widespread distribution of deformation due to earthquake-triggered strain transfer in the eastern part, reveals the complexity of this collision zone and calls for future studies involving multi-temporal and interdisciplinary approaches.
The importance of carbohydrate structures is enormous due to their ubiquitousness in our lives. The development of so-called glycomaterials is the result of this tremendous significance. These are not exclusively used for research into fundamental biological processes, but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials involving the synthesis of glycoderivatives, -monomers and -polymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation to significantly shorten the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhorshetov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation was developed in this work, which also does not require protective groups. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis. Ribose-containing cytidine was transesterified using the lipase Novozym 435 and microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to isolate the monomer from its counterpart guanosine. After obtaining the nucleoside-based monomers, they were block copolymerized using the RAFT method. Pre-synthesized pHPMA served as macroCTA to yield cytidine- or guanosine-containing block copolymer. These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Diabetes is hallmarked by high blood glucose levels, which cause progressive generalised vascular damage, leading to microvascular and macrovascular complications. Diabetes-related complications cause severe and prolonged morbidity and are a major cause of mortality among people with diabetes. Despite increasing attention to risk factors of type 2 diabetes, existing evidence is scarce or inconclusive regarding vascular complications and research investigating both micro- and macrovascular complications is lacking. This thesis aims to contribute to current knowledge by identifying risk factors – mainly related to lifestyle – of vascular complications, addressing methodological limitations of previous literature and providing comparative data between micro- and macrovascular complications.
To address this overall aim, three specific objectives were set. The first was to investigate the effects of diabetes complication burden and lifestyle-related risk factors on the incidence of (further) complications. Studies suggest that diabetes complications are interrelated. However, they have been studied mainly independently of individuals’ complication burden. A five-state time-to-event model was constructed to examine the longitudinal patterns of micro- (kidney disease, neuropathy and retinopathy) and macrovascular complications (myocardial infarction and stroke) and their association with the occurrence of subsequent complications. Applying the same model, the effect of modifiable lifestyle factors, assessed alone and in combination with complication load, on the incidence of diabetes complications was studied. The selected lifestyle factors were body mass index (BMI), waist circumference, smoking status, physical activity, and intake of coffee, red meat, whole grains, and alcohol. Analyses were conducted in a cohort of 1199 participants with incident type 2 diabetes from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam, who were free of vascular complications at diabetes diagnosis. During a median follow-up time of 11.6 years, 96 cases of macrovascular complications (myocardial infarction and stroke) and 383 microvascular complications (kidney disease, neuropathy and retinopathy) were identified. In multivariable-adjusted models, the occurrence of a microvascular complication was associated with a higher incidence of further micro- (Hazard ratio [HR] 1.90; 95% Confidence interval [CI] 0.90, 3.98) and macrovascular complications (HR 4.72; 95% CI 1.25, 17.68), compared with persons without a complication burden. In addition, participants who developed a macrovascular event had a twofold higher risk of future microvascular complications (HR 2.26; 95% CI 1.05, 4.86). The models were adjusted for age, sex, state duration, education, lifestyle, glucose-lowering medication, and pre-existing conditions of hypertension and dyslipidaemia. Smoking was positively associated with macrovascular disease, while an inverse association was observed with higher coffee intake. Whole grain and alcohol intake were inversely associated with microvascular complications, and a U-shaped association was observed for red meat intake. BMI and waist circumference were positively associated with microvascular events. The associations between lifestyle factors and incidence of complications were not modified by concurrent complication burden, except for red meat intake and smoking status, where the associations were attenuated among individuals with a previous complication.
The second objective was to perform an in-depth investigation of the association between BMI and BMI change and the risk of micro- and macrovascular complications. There is an ongoing debate on the association between obesity and the risk of macrovascular and microvascular outcomes in type 2 diabetes, with studies suggesting a protective effect among people with overweight or obesity. These findings, however, might be limited due to suboptimal control for smoking, pre-existing chronic disease, or short follow-up. After additional exclusion of persons with a cancer history at diabetes onset, the associations between pre-diagnosis BMI and relative annual change between pre- and post-diagnosis BMI and the incidence of complications were evaluated in multivariable-adjusted Cox models. The analyses were adjusted for age, sex, education, smoking status and duration, physical activity, alcohol consumption, adherence to the Mediterranean diet, and family history of diabetes and cardiovascular disease (CVD). Among 1083 EPIC-Potsdam participants, 85 macrovascular and 347 microvascular complications were identified during a median follow-up period of 10.8 years. Higher pre-diagnosis BMI was associated with an increased risk of total microvascular complications (HR per 5 kg/m2 1.21; 95% CI 1.07, 1.36), kidney disease (HR 1.39; 95% CI 1.21, 1.60) and neuropathy (HR 1.12; 95% CI 0.96, 1.31), but no association was observed for macrovascular complications (HR 1.05; 95% CI 0.81, 1.36). Effect modification was not evident by sex, smoking status, or age groups. In analyses according to BMI change categories, BMI loss of more than 1% indicated a decreased risk of total microvascular complications (HR 0.62; 95% CI 0.47, 0.80), kidney disease (HR 0.57; 95% CI 0.40, 0.81) and neuropathy (HR 0.73; 95% CI 0.52, 1.03), compared with participants with a stable BMI. No clear association was observed for macrovascular complications (HR 1.04; 95% CI 0.62, 1.74). The impact of BMI gain on diabetes-related vascular disease was less evident. Associations were consistent across strata of age, sex, pre-diagnosis BMI, or medication but appeared stronger among never-smokers than current or former smokers.
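As a minimal, hypothetical sketch of the kind of multivariable-adjusted Cox model referred to here (variable names and data are simulated for illustration; the actual analysis uses the EPIC-Potsdam cohort within a five-state framework):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort: time to first microvascular complication (years), event flag,
# pre-diagnosis BMI and age as two of the adjustment covariates.
rng = np.random.default_rng(3)
n = 200
bmi = rng.normal(29, 4, n)
age = rng.normal(60, 7, n)
# Simulate event times with a higher hazard for higher BMI (illustrative only).
time = rng.exponential(scale=12 * np.exp(-0.05 * (bmi - 29)), size=n)
event = (time < 12).astype(int)          # administrative censoring at 12 years
time = np.minimum(time, 12)

df = pd.DataFrame({"time": time, "event": event, "bmi": bmi, "age": age})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios per unit covariate increase
```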
The last objective was to evaluate whether individuals with a high-risk profile for diabetes and cardiovascular disease (CVD) also have a greater risk of complications. Within the EPIC-Potsdam study, two accurate prognostic tools were developed, the German Diabetes Risk Score (GDRS) and the CVD Risk Score (CVDRS), which predict the 5-year type 2 diabetes risk and 10-year CVD risk, respectively. Both scores are available in a non-clinical and a clinical version. Components of the risk scores include age, sex, waist circumference, prevalence of hypertension, family history of diabetes or CVD, lifestyle factors, and clinical factors (only in the clinical versions). The association of the risk scores with diabetes complications and their discriminatory performance for complications were assessed. In crude Cox models, both versions of the GDRS and the CVDRS were positively associated with macrovascular complications, total microvascular complications, kidney disease and neuropathy. A higher GDRS was also associated with an elevated risk of retinopathy. The discrimination of the scores (clinical and non-clinical) was poor for all complications, with the C-index ranging from 0.58 to 0.66 for macrovascular complications and from 0.60 to 0.62 for microvascular complications.
In conclusion, this work illustrates that the risk of complication development among individuals with type 2 diabetes is related to the existing complication load, and that attention should be given to regular monitoring for further complications. It underlines the importance of weight management and adherence to healthy lifestyle behaviours, including a high intake of whole grains, moderation in red meat and alcohol consumption, and avoidance of smoking, to prevent major diabetes-associated complications, regardless of complication burden. Risk scores predictive of type 2 diabetes and CVD were related to elevated risks of complications. By optimising several lifestyle and clinical factors, an individual's risk score profile can be improved, which may assist in lowering complication risk.
Among the multitude of geomorphological processes, aeolian shaping processes are of special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the lowermost atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other findings presented in this thesis, it could be shown in particular that, even though the small-scale topography does have a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. This measure shows significantly increased particle concentrations during points in time defined as gust events.
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
The increasing demand for energy in the current technological era and the recent political decisions to phase out nuclear energy have turned attention to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. Halide perovskites offer these properties, which made their single-junction efficiencies comparable with silicon solar cells within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
On the other hand, chalcogenide perovskites are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique of this study comprises two stages: In the first stage, the binary compounds are deposited via a solution-processing method. In the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of these precursors resulted in either binaries without perovskite formation or distorted perovskite structures; for some of these materials this is consistent with literature reports that they are more favorable in the needle-like non-perovskite configuration.
Lastly, there are two categories for the evaluation of the produced materials: The first category is about the determination of the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation, impurities, etc. For the second category, optoelectronic properties are measured and compared to an ideal absorber layer, e.g., band gap, conductivity, surface photovoltage, etc.
The current COVID-19 pandemic clearly shows how infectious diseases can spread worldwide. In addition to viral diseases, multi-resistant bacterial pathogens are also spreading globally. Accordingly, there is a great need to identify infected individuals through early detection and to interrupt chains of infection.
Conventional culture-based methods require minimally invasive or invasive samples and take too long for screening purposes. Fast, non-invasive methods are therefore needed.
In classical Greece, physicians relied, among other things, on their sense of smell to differentiate infections and other diseases. These characteristic odours are volatile organic compounds (VOCs) produced by an organism's metabolism. Animals with a better sense of smell can be trained to distinguish particular pathogens by their odour. However, the use of animals in everyday clinical practice is not feasible. It therefore makes sense to analyse these VOCs by technical means.
One technical method for differentiating these VOCs is ion mobility spectrometry coupled with a multi-capillary gas chromatography column (MCC-IMS). This has proven to be a fast, sensitive and reliable method.
It is known that different bacteria produce different VOCs, and thus their own specific odours, as a result of their metabolism. In the first step of this work, it was shown that different bacteria can be differentiated in vitro on the basis of their VOCs after a short incubation time of 90 minutes. Analogous to diagnosis by biochemical test series, a hierarchical classification of the bacteria was possible.
In contrast to bacteria, viruses have no metabolism of their own. Whether virus-infected cells release different VOCs than non-infected cells was examined in cell cultures. It was shown that the VOC fingerprints of cell cultures infected with respiratory syncytial virus (RSV) differ from those of non-infected cells.
Viral infections in an intact organism differ from cell cultures in that, in addition to changes in cellular metabolism, VOCs can also be released by defence mechanisms.
To examine whether infections in the intact organism can likewise be distinguished on the basis of VOCs, the breath of patients with and without confirmed influenza A infection, as well as of patients with suspected SARS-CoV-2 (severe acute respiratory syndrome coronavirus type 2) infection, was analysed. Both influenza-infected and SARS-CoV-2-infected patients could be distinguished from each other and from non-infected patients by MCC-IMS analysis of breath.
In summary, MCC-IMS yields encouraging results for the rapid, non-invasive detection of infections both in vitro and in vivo.
Abzug unter Beobachtung [Withdrawal under Observation] (2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for foreign military intelligence, using intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and, above all, intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and expanded its analytical capacities in the 1980s. The BND's monopoly on foreign military intelligence was thereby increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. Under the Two Plus Four Treaty, the GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994. The treaty also prohibited the three Western powers from engaging in military activities in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease operations. But what happened to this "allied legacy"? Who on the German side took over intelligence on the Soviet troops, and who monitored the withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994 and asks about cooperation and competition between the armed forces and the intelligence services. Which military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions were dissolved? How did the requirements for the BND's foreign military intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the withdrawal? What role did the former Western powers play? The study is intended as a contribution not only to military history but also to the history of the German intelligence services.
Current business organizations want to be more efficient and constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader affect this relationship. I draw on job demands-resources theory to explain the overarching model used in this study and on broaden-and-build theory to motivate the use of the mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data was collected in a three-wave design, with the surveys administered one month apart. Data on employee perceptions of visionary leadership was collected at T1, data for both mediators at T2, and employee perceptions of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employee perceptions of job satisfaction.
This research contributes to literature and theory, first, by broadening the existing knowledge on the effects of visionary leadership on employees. Second, it contributes to the literature on the constructs of meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism linking the study variables in line with the proposed model. Fourth, it integrates two theories, job demands-resources theory and broaden-and-build theory, and provides further evidence for both. Additionally, the study offers practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and mediation mechanisms involving meaningfulness at work and commitment to the leader.
The careful use of resources and the environment is an essential part of modern mining and of the future supply of our society with essential raw materials. This thesis deals with the development of analytical strategies that meet the technical and practical requirements of the mining process through accurate and fast on-site analysis and thus contribute to a targeted and sustainable use of raw material deposits. The analyses are based on spectroscopic data obtained by laser-induced breakdown spectroscopy (LIBS) and evaluated by multivariate data analysis. LIBS is a promising technique for this task. Its attractiveness lies in particular in the possibility of measuring field samples on site without sampling or sample preparation, as well as in the detectability of all elements of the periodic table and its independence from the state of aggregation. In combination with multivariate data analysis, rapid data processing is possible, allowing statements about the qualitative elemental composition of the investigated samples. With the aim of determining the distribution of element contents in a deposit, calibration and quantification strategies are evaluated in this work. Exploratory data analysis methods are applied to characterise matrix effects and to classify minerals. The spectroscopic investigations were carried out on soils and rocks as well as on minerals containing copper or rare earth elements, originating from different deposits and from different agricultural areas.
For the development of a calibration strategy, both synthetic samples and field samples from two different agricultural areas were analysed by LIBS. Using calcium, iron and magnesium as example analytes, different calibration methods based on univariate and multivariate approaches were evaluated. The quantification strategies are based on the multivariate methods of partial least squares regression (PLSR) and interval PLSR (iPLSR), which take the entire detected spectrum or sub-spectra into account in the analysis. The investigation is based on synthetic and field samples of copper minerals as well as samples containing rare earth elements. The samples originate from different deposits and exhibit different accompanying matrices. These accompanying matrices were characterised by exploratory data analysis. The principal component analysis applied for this purpose groups data according to differences and regularities, allowing statements about similarities and differences of the investigated samples with respect to their origin, chemical composition, or locally determined characteristics. Finally, copper-bearing minerals were classified on the basis of non-negative tensor factorisation, a method used with the aim of assigning unknown samples to classes based on their properties.
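To make the multivariate calibration idea concrete, the following is a minimal, illustrative sketch of a full-spectrum PLSR calibration using scikit-learn; the randomly generated "spectra" and element contents are placeholders, not the LIBS data analysed in this thesis.

```python
# Illustrative PLSR calibration on synthetic spectra (not the thesis' LIBS data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "spectra": 200 samples x 500 spectral channels; the target is an element content.
n_samples, n_channels = 200, 500
content = rng.uniform(0.1, 10.0, size=n_samples)        # e.g. analyte content in wt.%
signature = rng.normal(size=n_channels)                  # spectral signature of the analyte
spectra = np.outer(content, signature) + rng.normal(scale=0.5, size=(n_samples, n_channels))

X_train, X_test, y_train, y_test = train_test_split(spectra, content, random_state=0)

# Full-spectrum PLSR with a handful of latent variables (interval PLSR would instead
# restrict the model to selected sub-ranges of the spectrum).
pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)
print("R2 on held-out samples:", r2_score(y_test, pls.predict(X_test).ravel()))
```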
The combination of LIBS and multivariate data analysis offers the possibility of largely dispensing with sampling and the corresponding laboratory analysis through on-site analysis and can thus contribute to environmental protection and the conservation of natural resources during the prospection and exploration of new ore veins and deposits. The distribution of element contents in the investigated areas also enables targeted extraction and thus an efficient use of mineral raw materials.
Microplastics (MPs) in the environment are expected to increase in the near future due to the growing consumption of plastic products and further fragmentation into smaller pieces. The fate and effects of MPs once released into freshwater environments are still scarcely studied compared to the marine environment. To understand possible effects and interactions of MPs in freshwater environments, planktonic zooplankton organisms are particularly useful because of their crucial trophic role. Freshwater rotifers in particular are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and effects of MPs in rotifers under a more natural scenario and to identify processes such as the aggregation of MPs, the food dilution effect, and increasing MP concentrations that could influence the final outcome of MPs in the environment. In a near-natural scenario, the interaction of MPs with bacteria and algae, their aggregation, and their size and concentration are considered drivers of ingestion and effect. Aggregation makes smaller MPs more available to rotifers, while larger MPs are ingested less. The negative effect caused by the ingestion of MPs was modulated by their size, but also by the quantity and quality of food, which caused variable responses. Rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and reduce population growth and reproduction. Finally, in a scenario incorporating an entire zooplankton community, MPs were ingested by most individuals depending on their feeding mode but also on the MP concentration, which was found to be essential for MP availability. This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and more realistic view of the effects of MPs in ecosystems.
Writing travel, writing life
(2022)
The book compares the texts of three Swiss authors: Ella Maillart, Annemarie Schwarzenbach and Nicolas Bouvier. The focus is on the journey from Genève to Kabul that Ella Maillart and Annemarie Schwarzenbach made together in 1939/1940 and that Nicolas Bouvier made in 1953/1954 with the artist Thierry Vernet. The comparison shows the strong connection between the journey and life and between ars vivendi and travel literature.
This book also gives an overview of and organises the numerous terms, genres, and categories that already exist to describe various travel texts and proposes the new term travelling narration. The travelling narration looks at the text from a narratological perspective that distinguishes the author, narrator, and protagonist within the narration.
In the examination, ten motifs were found to characterise the travelling narration: Culture, Crossing Borders, Freedom, Time and Space, the Aesthetics of Landscapes, Writing and Reading, the Self and/as the Other, Home, Religion and Spirituality, as well as the Journey. The importance of each individual motif does not only apply to the 1930s or 1950s but also conveys important findings for living together today and in the future.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. In combination with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created with a simple sketch and the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design tools and 3D printing, with the help of biomimicry, could lead to new analysis methods in science and new medical devices in medicine.
The Electrical Discharge Machining (EDM) process is commonly applied to deform or mold hard metals that are difficult to work with conventional machinery. A workpiece submerged in an electrolyte is deformed while in close vicinity to an electrode. When a high voltage is applied between the workpiece and the electrode, sparks create cavitations on the substrate, which remove material that is then flushed away by the electrolyte. Usually, such surfaces are analysed based on roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that generates craters and ridges on an originally flat substrate. These simulated substrates were then analysed with the curvature analysis method at different processing times of the model. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for use in the design of new cell culture substrates for stem cells.
The Venus flytrap can shut its jaws at an amazing speed. Its shutting mechanism may be interesting for scientific applications and is an example of a so-called mechanically bi-stable system, i.e. a system with two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model called the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is bi-stable. Developing this idea further by using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again; this could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers located on its side. The parasite is commonly found in the lower intestine of mammals and attaches to the walls using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and attaches permanently to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how one could create a device that mimics the tapeworm. In the end, a prototype was created that was able to attach to a pork loin at an underpressure of 20 kPa and to eject its hooks at an underpressure of 50 kPa or above.
These three projects demonstrate how digital tools and biomimicry can be used together to arrive at applicable solutions in science and in medicine.
In plant cells, subcellular transport of cargo proteins relies to a large extent on post-Golgi transport pathways, many of which are mediated by clathrin-coated vesicles (CCVs). Vesicle formation is facilitated by different factors like accessory proteins and adaptor protein complexes (APs), the latter serving as a bridge between cargo proteins and the coat protein clathrin. One type of accessory protein is defined by a conserved EPSIN N-TERMINAL HOMOLOGY (ENTH) domain and interacts with APs and clathrin via motifs in the C-terminal part. In Arabidopsis thaliana, there are three closely related ENTH domain proteins (EPSIN1, 2 and 3) and one highly conserved but phylogenetically distant outlier, termed MODIFIED TRANSPORT TO THE VACUOLE1 (MTV1). In the case of the trans-Golgi network (TGN)-located MTV1, clathrin association and a role in vacuolar transport have been shown previously (Sauer et al. 2013). In contrast, only limited functional and localization data were available for EPSIN1 and EPSIN2, and EPSIN3 remained completely uncharacterized prior to this study (Song et al. 2006; Lee et al. 2007). The molecular details of ENTH domain proteins in plants are still unknown. In order to systematically characterize all four ENTH proteins in planta, we first investigated expression and subcellular localization by analysis of stable reporter lines under their endogenous promoters. Although all four genes are ubiquitously expressed, their subcellular distribution differs markedly. EPSIN1 and MTV1 are located at the TGN, whereas EPSIN2 and EPSIN3 are associated with the plasma membrane (PM) and the cell plate. To examine potential functional redundancy, we isolated T-DNA knockout mutant lines and created all higher-order mutant combinations. The clearest evidence for functional redundancy was observed in the epsin1 mtv1 double mutant, which is a dwarf displaying overall growth reduction. These findings are in line with the TGN localization of both MTV1 and EPSIN1. In contrast, loss of EPSIN2 and EPSIN3 does not result in a growth phenotype compared to wild type; however, a triple knockout of EPSIN1, EPSIN2 and EPSIN3 results in partially sterile plants. We focused mainly on the epsin1 mtv1 double mutant and addressed the functional role of these two genes in clathrin-mediated vesicle transport by comprehensive molecular, biochemical, and genetic analyses. Our results demonstrate that EPSIN1 and MTV1 promote vacuolar transport and secretion of a subset of cargo. However, they do not seem to be involved in endocytosis and recycling. Importantly, employing high-resolution imaging and genetic and biochemical experiments probing the relationship of the AP complexes, we found that EPSIN1/AP1 and MTV1/AP4 define two spatially and molecularly distinct subdomains of the TGN. The AP4 complex is essential for MTV1 recruitment to the TGN, whereas EPSIN1 is independent of AP4 but presumably acts in an AP1-dependent framework. Our findings suggest that this ENTH/AP pairing preference is conserved between animals and plants.
Plants can be primed to survive the exposure to a severe heat stress (HS) by prior exposure to a mild HS. The information about the priming stimulus is maintained by the plant for several days. This maintenance of acquired thermotolerance, or HS memory, is genetically separable from the acquisition of thermotolerance itself and several specific regulatory factors have been identified in recent years.
On the molecular level, HS memory correlates with two types of transcriptional memory, type I and type II, that characterize a partially overlapping subset of HS-inducible genes. Type I transcriptional memory or sustained induction refers to the sustained transcriptional induction above non-stressed expression levels of a gene for a prolonged time period after the end of the stress exposure. Type II transcriptional memory refers to an altered transcriptional response of a gene after repeated exposure to a stress of similar duration and intensity. In particular, enhanced re-induction refers to a transcriptional pattern in which a gene is induced to a significantly higher degree after the second stress exposure than after the first.
This thesis describes the functional characterization of a novel positive transcriptional regulator of type I transcriptional memory, the heat shock transcription factor HSFA3, and compares it to HSFA2, a known positive regulator of type I and type II transcriptional memory. It investigates type I transcriptional memory and its dependence on HSFA2 and HSFA3 for the first time on a genome-wide level, and gives insight into the formation of heteromeric HSF complexes in response to HS. This thesis confirms the tight correlation between transcriptional memory and H3K4 hyper-methylation, reported here in a case study that aimed to reduce H3K4 hyper-methylation of the type II transcriptional memory gene APX2 by CRISPR/dCas9-mediated epigenome editing. Finally, this thesis gives insight into the requirements for a heat shock transcription factor to function as a positive regulator of transcriptional memory, both in terms of its expression profile and protein abundance after HS and in terms of the contribution of individual functional domains.
In summary, this thesis contributes to a more detailed understanding of the molecular processes underlying transcriptional memory and therefore HS memory, in Arabidopsis thaliana.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ must also be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
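As an illustration of the underlying idea (a toy sketch only, not the exact DCS/DCS++ rules defined in the thesis), the following pass over sorted records extends the comparison window whenever a duplicate is detected within it; `sort_key` and `is_duplicate` are placeholder callables supplied by the caller.

```python
# Toy Sorted-Neighborhood pass with a duplicate-count-based window extension.
# Illustrative sketch of the general idea only, not the DCS/DCS++ algorithms themselves.
def sorted_neighborhood(records, sort_key, is_duplicate, base_window=5, extension=3):
    candidates = []
    ordered = sorted(records, key=sort_key)
    for i in range(len(ordered)):
        window_end = min(i + base_window, len(ordered))
        j = i + 1
        while j < window_end:
            pair = (ordered[i], ordered[j])
            candidates.append(pair)
            if is_duplicate(*pair):
                # Heuristic: a detected duplicate hints at a larger cluster,
                # so look a bit further ahead (bounded by the number of records).
                window_end = min(window_end + extension, len(ordered))
            j += 1
    return candidates
```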
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure is applied to the pairwise classifications to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially regarding the precision of results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best-performing clustering approach for duplicate detection, although its runtime is longer than that of Markov Clustering due to its subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
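For comparison, the commonly used baseline discussed here (forming the transitive closure of the positive pairwise classifications while ignoring the negative ones) can be sketched with a small union-find, assuming records are represented by hashable ids.

```python
# Transitive closure of pairwise duplicate decisions via union-find.
# Note how a single (possibly false) positive pair merges two whole clusters,
# regardless of any negative pairwise classifications.
def transitive_closure(duplicate_pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in duplicate_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for x in parent:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

# Example: <r1,r2> and <r2,r3> classified as duplicates -> {r1, r2, r3} end up in one
# cluster, even if the pair <r1,r3> itself was classified as a non-duplicate.
print(transitive_closure([("r1", "r2"), ("r2", "r3"), ("r4", "r5")]))
```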
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. These considered isometric muscle actions in healthy humans from a basic physiological view (oxygen and blood supply) as well as possibilities of their distinction. It includes a novel approach to measure a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces with particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood by adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined if oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). This function is mainly of diagnostic interest by measuring the maximal voluntary isometric contraction (MVIC) as a gold standard. For that, a person pulls on or pushes against an insurmountable resistance. However, the underlying pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined by using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was developed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system. The device was designed to measure the Adaptive Force (AF) of elbow extensor muscles. The AF determines the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. At first, it was questioned if these parameters can be reliably assessed by use of the new device (publication 4). Subsequently, the main research question was investigated: Is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both research parts contained a sub-question of how results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always adjusted into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In contrast, their progression over time was partly inverse in type II. The inverse behavior probably depends on the level of deoxygenation since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the found homeostatic steady states seem to be in conflict with the concept of mechanically compressed capillaries and consequently with a restricted blood flow. Anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow might be maintained. HIMA and PIMA did not differ regarding oxygenation and allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). A changed neuromuscular control might serve as a speculative explanation for this result. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation, irrespective of the performed task (HIMA, PIMA or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among the AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, the maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by a high test-retest reliability. Despite substantial correlations between the force variables, AFisomax differed significantly from MVIC and AFmax, the latter being identical to AFeccmax in almost all cases. Moreover, AFisomax showed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another possibility to quantify AFisomax by use of a handheld device applied in combination with a manual muscle test. This assessment offers a more practical approach for clinical purposes.
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking of macromolecules or by polymerizing suitable precursors. The crosslinks are not necessarily covalent bonds, but can also be formed by physical interactions such as π-π interactions, hydrophobic interactions, or H-bonding. On-demand, in situ forming hydrogels have garnered increased interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and of filling cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation owing to its fast reaction kinetics and its ability to proceed under physiological conditions. The incorporation of a trigger function into a crosslinking system becomes even more interesting since gelling can then be controlled with a stimulus of choice. The use of small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, owing to the broad range of structure-function relations available through their different constituent amino acids, can be exploited for the realisation of stimuli-promoted in situ covalent crosslinking and gelation applications. The advantages of this system over conventional polymer-polymer hydrogel systems are the ability to tune and to predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically-friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach aims at using a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification to the thiodepsipeptide sequence. The realization of this aim required the completion of three milestones.
In the first place, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for a pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the would-be activated thiol moiety SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with the pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to be incorporated in the thioester-bearing peptide-based crosslinker precursor. Using 'pseudo' 2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and higher allowed the direct elimination of SH-Phe as a potential thiol for the synthesis of the peptide mimetic.
Based on the initial studies, for the pH-promoted gelation system, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, from theoretical considerations, be enough to effect a 'pseudo' intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a 'pseudo' intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate ion generation by the cysteamine thiol provided the needed stimulus (pH) for the overall success of the TTE (activation step) – thiol-Michael addition (crosslinking) strategy.
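As a reminder of the textbook relation behind this pH dependence (a general acid-base equilibrium, not a result of this work), the fraction of a thiol present as the reactive thiolate is $f_{S^-} = 1/(1 + 10^{\,pK_a - pH})$, so that, assuming for illustration a cysteamine-type thiol pKa near 8.3, only about 7% of the thiol is deprotonated at pH 7.2, whereas roughly half is deprotonated at pH 8.3; this is the lever that makes the activation step pH-switchable.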
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE reactions was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide and protein thiols. L-cysteine, a biologically relevant thiol, and the small-molecular-weight thiol methylthioglycolate, both with relatively similar thiol pKa values, led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in the dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results additionally make TDP more attractive and potentially the first crosslinker precursor of its kind for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalized 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. When a 1:1 thiol:maleimide molar ratio was used, TDP-PEG4MAL hydrogels formed within 3, 12 and 24 hours at pH values of 8.5, 8.0 and 7.5, respectively. However, gelation times of 3, 5 and 30 min were observed for the same pH trend when the thiol:maleimide molar ratio was increased to 2:1.
A direct correlation of thiol content with the G′ of the gels at each pH could also be drawn by comparing gels with a 1:1 thiol:maleimide ratio to those with a 2:1 ratio. This is supported by the fact that the storage modulus (G′) depends linearly on the crosslinking density of the polymer network. The initial G′ of all gels ranged between 200 and 5000 Pa, which falls within the range of elasticities of certain tissue microenvironments, for example brain tissue (200 – 1000 Pa) and adipose tissue (2500 – 3500 Pa).
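The linear dependence invoked here is the standard result of rubber elasticity theory (general background, not a derivation from this work): for an ideal network, the shear storage modulus scales with the molar concentration of elastically active network strands, $G' \approx \nu R T$, so a higher crosslink conversion at fixed temperature translates proportionally into a stiffer gel, consistent with the comparison of the 1:1 and 2:1 thiol:maleimide formulations.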
The knowledge gained so far from the study on the ability to design and tune the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics as presented in this work has revealed interesting new insights considering the state of the art. Using the results obtained as a reference, the strategy provides a possibility to extend the concept to the controlled delivery of active molecules needed for other robust and high-yielding crosslinking reactions for biomedical applications. Applications for this sequentially coupled functional system could be seen, for example, in the treatment of inflamed tissues associated with the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By the inclusion of cell adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended for the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. The purported cleavage of TDPo at the Gly-SLeu bond yields active thiol units for the subsequent reaction with orthogonal Michael acceptor moieties. One of the advantages of stimuli-promoted in situ crosslinking systems using short peptides should be the ease of design of the required peptide molecules due to the predictability of peptide function from sequence structure. Consequently, the functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a thiol fluorometric assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo towards these enzymes. The resulting time-dependent increase in fluorescence intensity in the presence of the thiol assay signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed that the cleavage studies with the thiol fluorometric assay produce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the kinetic parameters kcat and KM.
Gelation studies with PEG4MAL at 10 wt.% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Possible contributions by hydrolytic cleavage of PEG4TDPo resulted in the gelation of PEG4MAL blank samples, but only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic cleavage more negatively than the hydrolytic one. The exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited in situ crosslinking and gelation kinetics at levels similar to those reported for already active dithiols. The advantageous on-demand functionality associated with its pH sensitivity and physiological compatibility makes it a strong candidate for further research as far as biomedical applications in general and on-demand material synthesis are concerned.
Results from the MMP-promoted gelation system unveil a simple but so far unexplored approach for the in situ synthesis of covalently crosslinked soft materials, which could lead to an alternative pathway for addressing cancer metastasis by making use of MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors despite extensive work in this regard.
Selbstwirksamkeitserwartungen von Lehramtsstudierenden im Kontext von schulpraktischen Erfahrungen
(2022)
Self-efficacy beliefs play an important role in teachers' professional classroom behaviour (Tschannen-Moran et al., 1998) as well as in students' achievement and behaviour (Mojavezi & Tamiz, 2012). Teacher self-efficacy is defined as teachers' conviction that they are able to achieve certain goals in a specific situation (Dellinger et al., 2008; Tschannen-Moran & Hoy, 2001). Given the important role of teachers in the education system and in society, it is important to promote teachers' well-being, productivity and effectiveness (Kasalak & Dagyar, 2020). Empirical findings underline the positive effects of teachers' self-efficacy beliefs on their well-being (Perera & John, 2020) and on students' learning and achievement (Zee & Koomen, 2016). However, there is a lack of empirical research examining the importance of self-efficacy beliefs of pre-service teachers in teacher education (Yurekli et al., 2020), especially during practical school placements. Starting from the importance of own teaching experiences, which have been described as mastery experience, i.e. the strongest source of self-efficacy for pre-service teachers (Pfitzner-Eden, 2016b), this dissertation examines practical experiences as a source of pre-service teachers' self-efficacy and the change in pre-service teachers' self-efficacy during teacher education. Study 1 therefore focuses on the change in pre-service teachers' self-efficacy during short practical teaching experiences compared to online teaching without teaching experience. Because of inconsistent findings on the reciprocal relations between teachers' self-efficacy beliefs and their teaching behaviour (Holzberger et al., 2013; Lazarides et al., 2022), Study 2 examined the relation between pre-service teachers' self-efficacy and their teaching behaviour during teacher education. Since feedback can serve as verbal persuasion and is thus an important source of self-efficacy beliefs that strengthens the sense of competence (Pfitzner-Eden, 2016b), Study 2 also focuses on the relation between the change in pre-service teachers' self-efficacy and the perceived quality of peer feedback in the context of short practical school experiences during teacher education. Furthermore, for the investigation of changes in pre-service teachers' self-efficacy it is important to examine individual personality aspects and specific conditions of the learning environment in teacher education (Bach, 2022). Based on the assumption that supporting reflection processes in teacher education (Menon & Azam, 2021) and the use of innovative learning settings such as VR videos (Nissim & Weissblueth, 2017) promote the development of pre-service teachers' self-efficacy beliefs, Studies 3 and 4 examine pre-service teachers' reflection processes with regard to their own teaching experiences and to the vicarious teaching experiences of others, respectively.
Against the background of inconsistent findings and a lack of empirical research on the relations between pre-service teachers' self-efficacy and various factors concerning the learning environment or personal characteristics, further empirical studies are needed that examine different sources and correlates of pre-service teachers' self-efficacy beliefs during teacher education. In this context, the present dissertation addresses the question of which individual characteristics and learning environments can promote pre-service teachers' self-efficacy, especially during short practical phases in teacher education. The dissertation concludes by discussing the results of the four studies, taking into account the strengths and weaknesses of each study as a whole. Finally, limitations and implications for further research and practice are discussed.
Polyglot programming allows developers to use multiple programming languages within the same software project. While it is common to use more than one language in certain programming domains, developers also apply polyglot programming for other purposes such as to re-use software written in other languages. Although established approaches to polyglot programming come with significant limitations, for example, in terms of performance and tool support, developers still use them to be able to combine languages.
Polyglot virtual machines (VMs) such as GraalVM provide a new level of polyglot programming, allowing languages to directly interact with each other. This reduces the amount of glue code needed to combine languages, results in better performance, and enables tools such as debuggers to work across languages. However, only little research has focused on novel tools that are designed to support developers in building software with polyglot VMs. One reason is that tool-building is often an expensive activity; another is that polyglot VMs are still a moving target, as their use cases and requirements are not yet well understood.
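To give a flavour of this direct interaction, the following minimal sketch uses the `polyglot` module that GraalVM's Python implementation (GraalPy) exposes to guest programs; the exact API surface shown here is an assumption based on its documentation, and the snippet only runs on GraalPy, not on CPython.

```python
# Runs only on GraalPy (GraalVM's Python implementation); `polyglot` is not a CPython module.
import polyglot

# Evaluate a JavaScript expression in the same VM and use the resulting array-like
# value directly from Python, without serialisation or inter-process glue code.
js_array = polyglot.eval(language="js", string="[1, 2, 42, 4]")
print(js_array[2])  # -> 42, accessed through the shared polyglot object model
```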
In this thesis, we present an approach that builds on existing self-sustaining programming systems such as Squeak/Smalltalk to enable exploratory programming, a practice for exploring and gathering software requirements, and re-use their extensive tool-building capabilities in the context of polyglot VMs. Based on TruffleSqueak, our implementation for the GraalVM, we further present five case studies that demonstrate how our approach helps tool developers to design and build tools for polyglot programming. We further show that TruffleSqueak can also be used by application developers to build and evolve polyglot applications at run-time and by language and runtime developers to understand the dynamic behavior of GraalVM languages and internals. Since our platform allows all these developers to apply polyglot programming, it can further help to better understand the advantages, use cases, requirements, and challenges of polyglot VMs. Moreover, we demonstrate that our approach can also be applied to other polyglot VMs and that insights gained through it are transferable to other programming systems.
We conclude that our research on tools for polyglot programming is an important step toward making polyglot VMs more approachable for developers in practice. With good tool support, we believe polyglot VMs can make it much more common for developers to take advantage of multiple languages and their ecosystems when building software.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this population will rapidly increase in the next decades, accompanied by a continued urbanisation of cities located in mountain valleys. One of the manifestations of this ongoing socio-economic change of mountain societies is a rise in settlement areas and transportation infrastructure, while an increased power demand fuels the construction of hydropower plants along rivers in the high-mountain regions of the world. However, physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One of the potential implications of this change is an increase in frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were induced by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events is still scarce. Projected cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase of the water volume stored in meltwater lakes as well as in the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal's second largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used this data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant for estimating past GLOF susceptibility. To this end, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to release GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet, monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake's rapid growth in surface area as an indicator of a pending outburst; a metric that has been applied to regional GLOF assessments worldwide.
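To illustrate the general model structure described here (illustrative only; the data, predictor, priors, and grouping below are placeholders, not the thesis' actual inventory or model), a two-level Bayesian logistic regression with region-specific intercepts could be sketched in PyMC as follows.

```python
# Illustrative two-level (hierarchical) Bayesian logistic regression in PyMC.
# Placeholder data: x could be e.g. a standardised lake-area predictor, region_idx
# assigns each lake to a region, and y is 1 if a sudden outburst was reported.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_lakes, n_regions = 500, 6
x = rng.normal(size=n_lakes)                      # standardised predictor (placeholder)
region_idx = rng.integers(0, n_regions, n_lakes)  # region membership of each lake
y = rng.binomial(1, 0.03, size=n_lakes)           # fake outcomes, a few percent positives

with pm.Model() as glof_model:
    # Hyperpriors shared across regions (the second level of the hierarchy).
    mu_a = pm.Normal("mu_a", 0.0, 2.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)

    # Region-specific intercepts allow susceptibility to vary spatially.
    a_region = pm.Normal("a_region", mu_a, sigma_a, shape=n_regions)
    b = pm.Normal("b", 0.0, 1.0)  # pooled slope for the predictor

    p = pm.math.sigmoid(a_region[region_idx] + b * x)
    pm.Bernoulli("glof", p=p, observed=y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```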
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological (1,000 m³ s-1) to cataclysmic outburst floods (600,000 m³ s-1), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s-1 in the upper Seti Khola and attenuated to 500 m³ s-1 when arriving in Pokhara's suburbs some 15 km downstream.
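For orientation, steady-state open-channel models of this kind ultimately rest on uniform-flow relations such as Manning's equation; the following purely illustrative helper (cross-section geometry and roughness are made-up placeholders, not values from the Seti Khola) shows how discharge relates to flow area, hydraulic radius, slope, and roughness.

```python
# Manning's equation for steady, uniform open-channel flow: Q = (1/n) * A * R^(2/3) * S^(1/2)
# Illustrative only; the cross-section and roughness below are arbitrary placeholders.
def manning_discharge(area_m2, wetted_perimeter_m, slope, n_manning):
    hydraulic_radius = area_m2 / wetted_perimeter_m
    return (1.0 / n_manning) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# A wide rectangular reach, 100 m across and 5 m deep, on a 1% slope with n = 0.05:
print(f"{manning_discharge(500.0, 110.0, 0.01, 0.05):.0f} m^3/s")
```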
Simulations of flow in two dimensions with peak discharges that are orders of magnitude higher, performed in ANUGA, show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides first quantitative proof for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola – producing floods with peak discharges of >50,000 m³ s-1.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s-1, show that the relative inundation hazard is highest in Pokhara's north-western suburbs. There, the potential effects of hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Yet, along this reach, informal settlements and gravel mining activities are close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally by a factor of three to twenty in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outburst at a regional scale and into the flow dynamics of propagating flood waves released by past events at a local scale, which can aid future hazard assessments on transient scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements might provide valuable inputs to anticipatory assessments of multiple risks in the Pokhara valley.
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the increase in air temperature with decreasing altitude: As the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decrease in ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming – from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of those feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
Furthermore, the increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering their albedo, which in turn enhances melt rates – a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected, and adds up to 58% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback.
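The following toy loop (not the diurnal Energy Balance Model or PISM; all numbers are arbitrary placeholders) is only meant to show the sign of the melt-albedo feedback, in which melting darkens the surface, increases the absorbed shortwave radiation, and hence produces further melt:

```python
# Toy illustration of the melt-albedo feedback (arbitrary placeholder values, not dEBM/PISM).
SW_DOWN = 250.0          # incoming shortwave radiation, W m^-2
ALBEDO_SNOW, ALBEDO_ICE = 0.85, 0.45
MELT_PER_WM2 = 0.002     # melt produced per W m^-2 of absorbed energy (arbitrary units)

def step(albedo):
    absorbed = (1.0 - albedo) * SW_DOWN
    melt = MELT_PER_WM2 * absorbed
    # More melt -> darker surface, bounded below by the bare-ice albedo.
    new_albedo = max(ALBEDO_ICE, albedo - 0.1 * melt)
    return melt, new_albedo

albedo, total_melt = ALBEDO_SNOW, 0.0
for _ in range(10):
    melt, albedo = step(albedo)
    total_melt += melt
print(f"total melt: {total_melt:.2f} (arbitrary units), final albedo: {albedo:.2f}")
```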
Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen's flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in the flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
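For reference, Glen's flow law (standard glaciology background, not specific to this thesis) relates the strain rate to the deviatoric stress via $\dot{\varepsilon} = A\,\tau^{n}$, where $A$ is the temperature-dependent rate factor (ice softness) and $n$ is the flow-law exponent, commonly taken as about 3; the uncertainty discussed here concerns exactly these parameters.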
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios – while still staying within the limits of the Paris Agreement – include a temporary overshoot of global warming. I show that an overshoot by 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters of sea level equivalent. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these processes influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and over the long term.
This thesis provides insight into the practices of reaching mutual understanding on city tours guided by (formerly) homeless people – tours which, by their own account, aim to create understanding, tolerance and recognition for people affected by homelessness. It first introduces the discourse on slum tourism and, given the diversity of its manifestations, defines slumming as an organised encounter with social inequality. The central lines of this discourse and the moral positions woven into them are traced and, from the sociology-of-knowledge perspective adopted here, reinterpreted as the expression of a practice that is polycontextural per se. Slumming then appears as an organised encounter between forms of life that are foreign to one another to such a degree that immediate understanding seems unlikely and that, precisely for this reason, has to be negotiated on the basis of common-sense interpretations. Against this background, the present study examines how participants and guides reach a practical understanding of the experience of homelessness, and what kind of understanding is thereby produced of homeless people, who are burdened with manifold stigmatising attributions in public discourse. Of particular interest is with respect to which aspects of the experience of homelessness a shared understanding becomes possible and at which points it reaches its limits. To this end, the conversations on nine city tours with (formerly) homeless guides offered by different providers in German-speaking countries were transcribed and analysed using the documentary method. The comparative examination of these practices of reaching understanding also opens up a differentiated perspective on the practices of recognition that are always already interwoven with processes of understanding. With regard to the moral debate about organised encounters with social inequality, this suggests an ethical perspective centred on questions of mediation work.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite the impoverished genetic diversity of the starting populations. In this context, DNA methylation is considered a promising mechanism to explain successful adaptation in the new habitat. DNA methylation is a heritable modification that alters gene expression without changing the underlying genetic information. It is therefore considered a so-called epigenetic mechanism, but has so far been studied mainly in clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, based on this, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species with raising the offspring under common climatic conditions, in order to examine DNA methylation in an ecological-evolutionary context. The contrast between chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. With this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and also between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climate gradient of more than 1000 km in length in Central Europe. I found population differences in flowering timing, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots in S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic marker and one epigenetic marker putatively subject to selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range of S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and additionally compared non-native with native plant species. Seeds were transplanted between regions more than 600 kilometers apart, with either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in either the non-native or the native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but could be a possible consequence of multiple introductions, dispersal corridors and meta-population dynamics. Similarly, my results illustrate that the use of plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
In Germany, in-service teacher training constitutes a central learning opportunity for teachers' competence development within the third phase of teacher education (Avalos, 2011; Guskey & Yoon, 2009). In this phase, teachers can choose from a range of in-service learning opportunities aimed at adapting and further developing their professional competences. Within these professionalisation measures, teachers have the opportunity to reflect on and further develop their teaching practice. In-service teacher training is therefore also significant for the development of teaching quality and for student learning (Lipowsky, 2014).
However, research on uptake shows that the available training programme is not used to its full extent by all teachers and that teachers differ in how extensively they make use of these professional learning opportunities (Hoffmann & Richter, 2016). As a consequence, the potential impact of the training programme cannot be fully realised. To promote the uptake of in-service teacher training, various governance instruments are employed by actors at different levels. The question of how uptake can be steered within the third phase of teacher education has, however, remained largely unaddressed.
The present thesis builds on existing research on in-service teacher training and uses the theoretical perspective of educational governance to investigate, in four sub-studies, which instruments and potentials for steering exist at the different levels of the in-service teacher training system and how they are implemented by the various political and school actors. It also addresses the question of how effective the governance instruments used are with regard to the uptake of in-service teacher training. This overarching question is examined against the background of a theoretical framework derived for the in-service teacher training system in the form of a multi-level model, which serves as the basis for the theoretical positioning of the subsequent empirical studies on training uptake and the effectiveness of different governance instruments.
Against this background, Study I focuses on the level of political actors and examines how important the statutory training obligation is for teachers' participation in professional development. It investigates the extent to which teachers' training participation is related to belonging to federal states with or without a concrete training obligation, and to federal states with or without an obligation to provide evidence of completed training. To this end, data from the IQB-Ländervergleich 2011 and 2012 as well as the IQB-Bildungstrend 2015 were analysed using logistic and linear regression models.
Studies II and III address the framework conditions for school-internal professional development. Study II first examines differences between school types in the choice of training topics. Study III investigates the school-internal training offer with regard to the extent of its use and the relationship between school characteristics and the use of different training topics. In addition, the two formats of provision are compared with regard to their respective shares of thematic training events. For this purpose, data from the professional development database of the federal state of Brandenburg were analysed.
In addition to examining training participation in relation to administrative requirements and the use of the school-internal training offer at the school level, Study IV addresses the overarching research question of this thesis by investigating the use of professionalisation measures in the context of school personnel development. The qualitative Study IV provided deeper insight into school practice to complement the findings of the quantitative Studies I to III. In a qualitative interview study, it was examined how principals of award-winning schools understand personnel development, which sources of information they draw on, which measures they use, and in what sense they employ personnel development as an instrument of organisational development.
The final chapter of this thesis summarises and discusses the central findings of the studies. Overall, the results indicate that actors at the respective levels employ direct and indirect governance instruments with the aim of increasing the use of the available training offer, but that these instruments do not achieve the desired steering effect. Since they are linked neither to professional sanctions nor to incentives, the existing governance instruments lack enforcement power. Moreover, the repertoire of possible governance instruments is not fully exploited by the actors involved. The results of this thesis thus provide a basis for follow-up research and offer impulses for possible implications for the practice of the professional development system and for education policy.
Variation in traits permeates and affects all levels of biological organisation, from within individuals to between species. Yet, intraspecific trait variation (ITV) is not sufficiently represented in many ecological theories. Instead, species averages are often assumed. Especially ITV in behaviour has only recently attracted more attention as its pervasiveness and magnitude became evident. The surge in interest in ITV in behaviour was accompanied by a methodological and technological leap in the field of movement ecology. Many aspects of behaviour become visible via movement, allowing us to observe inter-individual differences in fundamental processes such as foraging, mate searching, predation or migration. ITV in movement behaviour may result from within-individual variability and consistent, repeatable among-individual differences. Yet, questions on why such among-individual differences occur in the first place and how they are integrated with life-history have remained open. Furthermore, consequences of ITV, especially of among-individual differences in movement behaviour, on populations and species communities are not sufficiently understood. In my thesis, I approach timely questions on the sources and consequences of ITV, particularly, in movement behaviour. After outlining fundamental concepts and the current state of knowledge, I approach these questions by using agent-based models to integrate concepts from behavioural and movement ecology and to develop novel perspectives.
Modern coexistence theory is a central pillar of community ecology, yet it insufficiently considers ITV in behaviour. In chapter 2, I model a competitive two-species system of ground-dwelling, central-place foragers to investigate the consequences of among-individual differences in movement behaviour for species coexistence. I show that the simulated among-individual differences, which matched empirical data, reduce fitness differences between species, i.e. provide an equalising coexistence mechanism. Furthermore, I explain this result mechanistically and thus resolve an apparent ambiguity in the consequences of ITV for species coexistence described in previous studies.
In chapter 3, I turn the focus to sources of among-individual differences in movement behaviour and their potential integration with life-history. The pace-of-life syndrome (POLS) theory predicts that the covariation between among-individual differences in behaviour and life-history is mediated by a trade-off between early and late reproduction. This theory has generated attention but is also currently being scrutinised. In chapter 3, I present a model which supports a recent conceptual development that suggests fluctuating density-dependent selection as a cause of the POLS. Yet, I also identify processes that may alter the association between movement behaviour and life-history across levels of biological organisation.
ITV can buffer populations, i.e. reduce their extinction risk. For instance, among-individual differences can mediate portfolio effects or increase evolvability and, thereby, facilitate rapid evolution which can alleviate extinction risk. In chapter 4, I review ITV, environmental heterogeneity, and density-dependent processes which constitute local buffer mechanisms. In the light of habitat isolation, which reduces connectivity between populations, local buffer mechanisms may become more relevant compared to dispersal-related regional buffer mechanisms. In this chapter, I argue that capacities, latencies, and interactions of local buffer mechanisms should motivate more process-based and holistic integration of local buffer mechanisms in theoretical and empirical studies.
Recent perspectives propose to apply principles from movement and community ecology to study filamentous fungi. It is an open question whether and how the arrangement and geometry of microstructures select for certain movement traits, and, thus, facilitate coexistence-stabilising niche partitioning. As a coauthor of chapter 5, I developed an agent-based model of hyphal tips navigating in soil-like microstructures along a gradient of soil porosity. By measuring network properties, we identified changes in the optimal movement behaviours along the gradient. Our findings suggest that the soil architecture facilitates niche partitioning.
The core chapters are framed by a general introduction and discussion. In the general introduction, I outline fundamental concepts of movement ecology and describe theory and open questions on sources and consequences of ITV in movement behaviour. In the general discussion, I consolidate the findings of the core chapters and critically discuss their respective value and, if applicable, their impact. Furthermore, I emphasise promising avenues for further research.
Heimat
(2022)
This study proposes a transareal reading of the autofictional series of the Austrian writer Thomas Bernhard and the Colombian writer Fernando Vallejo, two authors whose work is characterised by harsh criticism of their countries of origin, their Heimaten, but also by a complex rootedness in them. The interpretative analyses show that in Die Autobiographie and El río del tiempo the Heimat is presented as a construct that encompasses not only felicitous elements but also negative, dissolving and destructive ones, whereby both authors distance themselves from a traditional conception of Heimat as a necessarily harmonious territory to which the subject feels positively bound. Instead, it is conceived as a heterogeneous ensemble to which the subject necessarily relates in an ambivalent and problematic way. For both authors, literary narration is configured as an act that does not merely represent this ambivalence but, above all, contests the forms of hostility that give the Heimat its inhospitable character. To this end, both authors resort to two fundamental devices: mimesis and movement. The study shows how, in the works examined, the Heimat appears as a space of continuous movements, exchanges and interactions, in which mechanisms of oppression operate, but also dispositifs of opposition, practices of intersubjective opening and aspirations towards communal integration.
Identity management is at the forefront of applications’ security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and identity federations. Within this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, we initially investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These patterns comprise the major entities linked by a decentralised identity provider. By comparison with the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling by means of a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies for trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method builds on the principal characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem. To practically apply the trust function and solve several challenges for the service provider that arise from adopting self-sovereign identity solutions, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from any single solution. Standard identity and access management protocols form the interface for applications. We conclude that using the broker on the service provider's side does not undermine self-sovereign principles, but fosters the advancement of the ecosystem. In a case study, the identity broker is applied to sample web applications with distinct attribute requirements to showcase its usefulness for authentication and attribute-based access control.
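As a hedged illustration of how a probabilistic trust function over several attribute providers might look (the concrete combination rule developed in the thesis is not reproduced here), one can combine per-provider correctness and validity estimates and then aggregate across issuers:

```python
# Hypothetical sketch of a probabilistic attribute-assurance trust function.
# "correctness" and "validity" per provider are assumed to be independent
# probabilities; the aggregation across providers assumes that one honest,
# valid attestation suffices. This is an illustration, not the thesis model.
from typing import Iterable

def provider_trust(correctness: float, validity: float) -> float:
    """Trust contributed by a single attribute provider."""
    return correctness * validity

def aggregated_trust(providers: Iterable[tuple[float, float]]) -> float:
    """Combine several providers: probability that at least one
    attestation is both correct and valid."""
    distrust = 1.0
    for correctness, validity in providers:
        distrust *= 1.0 - provider_trust(correctness, validity)
    return 1.0 - distrust

# Example: three issuers with differing assurance levels
print(aggregated_trust([(0.9, 0.95), (0.8, 0.9), (0.6, 0.99)]))
```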
Deep geological repositories represent a promising solution for the final disposal of nuclear waste. Due to its low permeability, high sorption capacity and self-sealing potential, Opalinus Clay (OPA) is considered a suitable host rock formation for the long-term storage of nuclear waste in Switzerland and Germany. However, the clay formation is characterized by compositional and structural variabilities including the occurrence of carbonate- and quartz-rich layers, pronounced bedding planes as well as tectonic elements such as pre-existing fault zones and fractures, suggesting heterogeneous rock mass properties.
Characterizing the heterogeneity of host rock properties is therefore essential for safety predictions of future repositories. This includes a detailed understanding of the mechanical and hydraulic properties, deformation behavior and the underlying deformation processes for an improved assessment of the sealing integrity and long-term safety of a deep repository in OPA. Against this background, this thesis presents the results of deformation experiments performed on intact and artificially fractured specimens of the quartz-rich, sandy and clay-rich, shaly facies of OPA. The experiments focus on the influence of mineralogical composition on the deformation behavior as well as the reactivation and sealing properties of pre-existing faults and fractures at different boundary conditions (e.g., pressure, temperature, strain rate).
The first section presents the anisotropic mechanical properties of the sandy facies of OPA, determined from triaxial deformation experiments using dried and resaturated samples loaded at 0°, 45° and 90° to the bedding plane orientation. A Paterson-type deformation apparatus was used, which allowed investigating how the deformation behavior is influenced by variations of confining pressure (50–100 MPa), temperature (25–200 °C), and strain rate (1 × 10⁻³ – 5 × 10⁻⁶ s⁻¹). Constant strain rate experiments revealed brittle to semi-brittle deformation behavior of the sandy facies at the applied conditions. The deformation behavior showed a strong dependence on confining pressure, degree of water saturation and bedding orientation, whereas the variation of temperature and strain rate had no significant effect on deformation. Furthermore, the sandy facies displays higher strength and stiffness compared to the clay-rich shaly facies deformed at similar conditions by Nüesch (1991). From the obtained results it can be concluded that cataclastic mechanisms dominate the short-term deformation behavior of dried samples from both facies up to elevated pressure (<200 MPa) and temperature (<200 °C) conditions.
The second part presents triaxial deformation tests that were performed to investigate how structural discontinuities affect the deformation behavior of OPA and how the reactivation of preexisting faults is influenced by mineral composition and confining pressure. To this end, dried cylindrical samples of the sandy and shaly facies of OPA were used, which contained a saw-cut fracture oriented at 30° to the long axis. After hydrostatic pre-compaction at 50 MPa, constant strain rate deformation tests were performed at confining pressures of 5, 20 or 35 MPa. With increasing confinement, a gradual transition from brittle, highly localized fault slip including a stress drop at fault reactivation to semi-brittle deformation behavior, characterized by increasing delocalization and non-linear strain hardening without dynamic fault reactivation, can be observed. Brittle localization was limited by the confining pressure at which the fault strength exceeded the matrix yield strength, above which strain partitioning between localized fault slip and distributed matrix deformation occurred. The sandy facies displayed a slightly higher friction coefficient (≈0.48) compared to the shaly facies (≈0.4). In addition, slide-hold-slide tests were conducted, revealing negative or negligible frictional strengthening, which suggests stable creep and long-term weakness of faults in both facies of OPA. The conducted experiments demonstrate that dilatant brittle fault reactivation in OPA may be favored at high overconsolidation ratios and shallow depths, increasing the risk of seismic hazard and the creation of fluid pathways.
The final section illustrates how the sealing capacity of fractures in OPA is affected by mineral composition. Triaxial flow-through experiments using argon gas were performed with dried samples from the sandy and shaly facies of OPA containing a roughened, artificial fracture. Slate, graywacke, quartzite, natural fault gouge, and granite samples were also tested to highlight the influence of normal stress, mineralogy and diagenesis on the sustainability of fracture transmissivity. With increasing normal stress, a non-linear decrease of fracture transmissivity was observed that resulted in a permanent reduction of transmissivity after stress release. The transmissivity of rocks with a high proportion of strong minerals (e.g., quartz) and high unconfined compressive strength was less sensitive to stress changes. Accordingly, the sandy facies of OPA displayed a higher initial transmissivity that was less sensitive to stress changes than that of the shaly facies. However, the transmissivity of rigid slate was less sensitive to stress changes than that of the sandy facies of OPA, although the slate is characterized by a higher phyllosilicate content. This demonstrates that, in addition to mineral composition, other factors such as the degree of metamorphism, cementation and consolidation have to be considered when evaluating the sealing capacity of phyllosilicate-rich rocks.
The results of this thesis highlight the role of confining pressure in the failure behavior of intact and artificially fractured OPA. Although the quartz-rich sandy facies may be considered more favorable for underground constructions owing to its higher shear strength and stiffness compared to the shaly facies, the results indicate that when fractures develop in the sandy facies, they are more conductive and remain more permeable than fractures in the clay-dominated shaly facies at a given stress. The results may provide the basis for constitutive models to predict the integrity and evolution of a future repository. Clearly, the influence of composition and consolidation, e.g., by geological burial and uplift, on the mechanical sealing behavior of OPA highlights the need for a detailed site-specific material characterization for a future repository.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally-altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
Respiratory diseases increasingly represent a relevant global problem. Expanding or modifying the routes of administration of potential drugs for targeted topical treatment is therefore of major importance. Varying a known route of administration through different technological implementations can increase the range of possible applications as well as patient compliance. Simple and flexible handling, rapid availability and handy technology are important attributes in today's product development. Direct topical treatment of respiratory diseases at the site of action by inhalation offers many advantages over systemic therapy. Medical inhalation of active substances via the lung is, however, a complex challenge. Inhalers are dosage forms that require explanation and must therefore be designed as simply as possible to promote consistent adherence to the prescription. At the same time, approximately 68 million people worldwide own and use the technology of an inhalation applicator, the electronic cigarette, to deliberately harm their health. This well-known application offers the potential of an available, inexpensive and quality-controlled health measure for the control, prevention and cure of respiratory diseases. It generates an aerosol by electrothermally heating a so-called liquid, which is drawn to a heating element by the capillary forces of a carrier material and evaporates there. Its popularity shows that an intended effect in the airways does occur. This effect could, however, also be transferable to potential pharmaceutical fields of application. The advantages of pulmonary delivery are manifold. Compared with peroral administration, the active substance reaches the site of action directly. Where systemic administration leads to drug concentrations in the lung below therapeutic efficacy, inhaled delivery could achieve the desired higher concentrations at the site of action even at low doses. Owing to the large absorptive surface of the lung and the absence of the first-pass effect, higher bioavailability and a faster onset of action are possible, and systemic side effects are minimal. Like medical inhalers, the electronic cigarette generates respirable particles. The breath-actuated technique allows uncomplicated and intuitive use. The basic design consists of an electrically heated coil and a rechargeable battery. The heating coil is surrounded by a so-called liquid in a tank and generates the aerosol. The liquid contains a base mixture of propylene glycol, glycerol and purified water in varying proportions. It is assumed that the base liquid can also be loaded with active pharmaceutical ingredients for pulmonary administration. Because of the thermal stress exerted by the e-cigarette, potential active substances as well as the vehicle must be thermally stable.
The potential medical application of the technology of a commercially available e-cigarette was investigated with respect to three focal points using four active substances. The three essential oils eucalyptus oil, mint oil and clove oil were chosen because of their high volatility and their historical pharmaceutical use in inhalations for cold symptoms and in dentistry. The cannabinoid cannabidiol (CBD) is of current relevance to the German pharmaceutical market in view of the legalisation of cannabis-containing products and of medical research into inhaled consumption. Relevant drug-containing liquid formulations were developed and evaluated with regard to their vaporisability into aerosols. In quantitative and qualitative chromatographic analyses, specific vaporisation profiles of the active substances were recorded and evaluated. The vaporised mass of the lead substances 1,8-cineole (eucalyptus oil), menthol (mint oil) and eugenol (clove oil) ranged between 33.6 µg and 156.2 µg per puff and increased proportionally to the concentration in the liquid (0.5% to 1.5%) at a power of 20 watts. The release rate of cannabidiol, in contrast, appeared to be independent of the concentration in the liquid, averaging 13.3 µg per puff; this was shown for five CBD-containing liquids in the concentration range between 31 µg/g and 5120 µg/g liquid. In addition, the vaporised masses increased with increasing power of the e-cigarette. The interaction of the liquids and aerosols with the components of saliva and of further gastrointestinal fluids was examined using corresponding in vitro models and enzyme activity assays. Changes in enzyme activities were determined for the key oral enzyme α-amylase as well as for proteases, in order to exemplarily assess a possible influence on physiological and metabolic processes in the human organism. Exposing biological suspensions to vapour at low e-cigarette power (20 watts) caused no or only a slight change in enzyme activity, whereas high power (80 watts) tended to reduce the enzyme activities. An increase in enzyme activities could lead to enzymatic degradation of mucous substances such as mucins, which in turn would compromise the effective mechanical defence against bacterial infections. Since an application would be conceivable particularly for bacterial respiratory diseases, the antibacterial properties of the liquids and aerosols were finally examined in vitro. Six clinically relevant bacterial pathogens were selected, which can be grouped according to two characteristics. The three multi-resistant bacteria Pseudomonas aeruginosa, Klebsiella pneumoniae and methicillin-resistant Staphylococcus aureus cannot be killed by standard antibiotic therapies and are primarily of nosocomial relevance. The second group exhibits characteristics primarily associated with respiratory diseases: the bacteria Streptococcus pneumoniae, Moraxella catarrhalis and Haemophilus influenzae are representatively involved in respiratory diseases with diverse symptoms. The bacterial species were treated with, or exposed to the vapour of, the respective liquids, and their basic dose-response relationships were characterised.
An antibacterial activity of the formulations was determined; adding an active substance enhanced the already antibacterial effect of the components glycerol and propylene glycol. The hygroscopic properties of these substances are presumably responsible for an effect in aerosolised form: they withdraw moisture from the air and have a desiccating effect on the bacteria. Exposing the bacterial species Streptococcus pneumoniae, Moraxella catarrhalis and Haemophilus influenzae to the vapour had an antibacterial effect whose time course depended on the power of the e-cigarette.
The results of these investigations lead to the conclusion that each active substance or substance class must be evaluated individually and that inhaler and formulation therefore have to be matched to each other. Using the e-cigarette as a medical device for drug administration always requires testing according to the European Pharmacopoeia. Through modifications, the dosage could be made well controllable, and the particle size distribution could be regulated so that, depending on particle size, the active substances are transported to a suitable site of application such as the mouth, throat or bronchi. A comparison with the properties of other medical inhalers leads to the conclusion that the technology of the e-cigarette could well offer equal or better performance for thermally stable active substances. Such a hypothetical medical device could consist of a manufacturer-unspecific, rechargeable power source with a universal thread for repeated use and a manufacturer- and drug-specific unit comprising vaporiser and medicinal product. The medicinal product, a medical liquid (vehicle and active substance), could be produced in the tank of the vaporiser with constant, non-variable parameters and tailored to the individual patient. Inhalation therapies are likely to play an increasing role in the future, not least because of the current COVID-19 pandemic, and the demand for alternative therapeutic options will continue to grow. This work contributes to the use of the technology of the electronic cigarette, an electronic nicotine delivery system (ENDS), after modification into a potential pulmonary application system, an electronic drug delivery system (EDDS), for inhaled, thermally stable medicinal products in the form of a medical device.
Nation, migration, narration
(2022)
In France and Germany, immigration has become a central issue in recent decades. It is in this context that rap emerged. Rap enjoys enormous popularity among populations with a migration background. Nevertheless, rappers engage no less intensely with their French or German identity.
The aim of this work is to explain this apparent contradiction: how can people with a migration background, who express unease in the face of a racism they consider omnipresent, feel fully French or German?
The work is divided into the following chapters: context of the study, methodology and theories (I); analysis of the different forms of national identity through the prism of the corpus (II); analysis, in three chronological stages, of the relationship to society in the rappers' lyrics (III-V); case studies of Kery James in France and Samy Deluxe in Germany (VI).
Fiber-based microfluidics has undergone many innovative developments in recent years, with exciting examples of portable, cost-effective and easy-to-use detection systems already being used in diagnostic and analytical applications. Legionella in water samples pose a serious risk as human pathogens. Infection occurs through inhalation of aerosols containing Legionella cells and can cause severe pneumonia, which may even be fatal. In the case of Legionella contamination of water-bearing systems or of Legionella infection, it is essential to find the source of the contamination as quickly as possible to prevent further infections. In drinking, industrial and wastewater monitoring, the culture-based method is still the most commonly used technique to detect Legionella contamination. New innovative ideas are needed to improve on the laboratory-dependent determination, the long analysis times of 10-14 days, and the imprecision of measured values reported in colony forming units (CFU). In all areas of application, for example in public, commercial or private facilities, rapid and precise analysis is required, ideally on site.
In this PhD thesis, all necessary single steps for a rapid DNA-based detection of Legionella were developed and characterized on a fiber-based miniaturized platform. In the first step, a fast, simple and device-independent chemical lysis of the bacteria and extraction of genomic DNA was established. Subsequently, different materials were investigated with respect to their non-specific DNA retention. Glass fiber filters proved to be particularly suitable, as they allow recovery of the DNA sample from the fiber material in combination with dedicated buffers and exhibit low autofluorescence, which was important for fluorescence-based readout.
A fiber-based electrophoresis unit was developed to migrate different oligonucleotides within a fiber matrix by application of an electric field. A particular advantage over lateral flow assays is the targeted movement, even after the fiber is saturated with liquid. For this purpose, the entire process of fiber selection, fiber chip patterning, combination with printed electrodes, and testing of retention and migration of different DNA samples (single-stranded, double-stranded and genomic DNA) was performed. DNA could be pulled across the fiber chip in an electric field of 24 V/cm within 5 minutes, remained intact and could be used for subsequent detection assays e.g., polymerase chain reaction (PCR) or fluorescence in situ hybridization (FISH). Fiber electrophoresis could also be used to separate DNA from other components e.g., proteins or cell lysates or to pull DNA through multiple layers of the glass microfiber. In this way, different fragments experienced a moderate, size-dependent separation. Furthermore, this arrangement offers the possibility that different detection reactions could take place in different layers at a later time. Electric current and potential measurements were collected to investigate the local distribution of the sample during migration. While an increase in current signal at high concentrations indicated the presence of DNA samples, initial experiments with methylene blue stained DNA showed a temporal sequence of signals, indicating sample migration along the chip.
For the specific detection of Legionella DNA, a FISH-based detection with a molecular beacon probe was tested on the glass microfiber. A specific region within the 16S rRNA gene of Legionella spp. served as the target. For this detection, suitable reaction conditions and a readout unit first had to be set up. Subsequently, the sensitivity of the probe was tested with the reverse complementary target sequence and its specificity with several DNA fragments that differed from the target sequence. Compared to other DNA sequences of similar length also found in Legionella pneumophila, only the target DNA was specifically detected on the glass microfiber. If a single base exchange is present or if two bases are changed, however, the probe can no longer distinguish between DNA targets and non-targets. An analysis with this specificity can be achieved with other methods such as melting point determination, as was also briefly indicated here. The molecular beacon probe could be dried on the glass microfiber and stored at room temperature for more than three months, after which it was still capable of detecting the target sequence. Finally, the feasibility of fiber-based FISH detection for genomic Legionella DNA was tested. Without further processing, the probe was unable to detect its target sequence in the complex genomic DNA. However, after selection and application of appropriate restriction enzymes, specific detection of Legionella DNA against other aquatic pathogens with similar fragment patterns, such as Acinetobacter haemolyticus, was possible.
Data stream processing systems (DSPSs) are a key enabler to integrate continuously generated data, such as sensor measurements, into enterprise applications. DSPSs allow to steadily analyze information from data streams, e.g., to monitor manufacturing processes and enable fast reactions to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces the data size, and thus data storage costs.
The growing volumes of generated data have increased the demand for high-performance DSPSs, leading to a higher interest in these systems and to the development of new DSPSs. While having more DSPSs is favorable for users as it allows choosing the system that satisfies their requirements the most, it also introduces the challenge of identifying the most suitable DSPS regarding current needs as well as future demands. Having a solution to this challenge is important because replacements of DSPSs require the costly re-writing of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer designed for developing stream processing applications that is applied in academic as well as enterprise contexts. It allows running the defined applications on any of the supported DSPSs. The performance impact of Apache Beam is studied in this dissertation as well. The results show that there is a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed using the Apache Beam SDK. We benchmark the implemented queries executed on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study prove the functioning of ESPBench and its toolkit. ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
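To make the role of the abstraction layer concrete, the following is a minimal, generic pipeline in the Apache Beam Python SDK (not one of the actual ESPBench queries, which are not reproduced here); the same code can be executed on Flink, Spark, or the local DirectRunner by switching the runner option:

```python
# Minimal Apache Beam pipeline (Python SDK), illustrating the abstraction-layer
# idea only. The runner is selected via pipeline options such as
# --runner=FlinkRunner or --runner=SparkRunner, without changing the pipeline.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run(argv=None):
    options = PipelineOptions(argv)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromText("events.csv")            # assumed input file
            | "ParseValues" >> beam.Map(lambda line: float(line.split(",")[1]))
            | "KeepAnomalies" >> beam.Filter(lambda value: value > 100.0)   # assumed threshold
            | "CountAnomalies" >> beam.combiners.Count.Globally()
            | "WriteResult" >> beam.io.WriteToText("anomaly_count")
        )

if __name__ == "__main__":
    run()
```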
The benchmark proposed in this thesis covers all requirements to be applied in enterprise stream processing settings, and thus represents an improvement over the current state-of-the-art.
It is estimated that data scientists spend up to 80% of their time exploring, cleaning, and transforming their data. A major reason for that expenditure is the lack of knowledge about the used data, which often come from different sources and have heterogeneous structures. As a means to describe various properties of data, metadata can help data scientists understand and prepare their data, saving time for innovative and valuable data analytics. However, metadata do not always exist: some data file formats are not capable of storing them; metadata may have been deleted due to privacy concerns; legacy data may have been produced by systems that were not designed to store and handle metadata. As data are being produced at an unprecedentedly fast pace and stored in diverse formats, manually creating metadata is not only impractical but also error-prone, demanding automatic approaches for metadata detection.
In this thesis, we focus on detecting metadata in CSV files – a type of plain-text file that, similar to spreadsheets, may contain different types of content at arbitrary positions. We propose a taxonomy of metadata in CSV files and specifically address the discovery of three types of metadata: line and cell types, aggregations, and primary keys and foreign keys.
Data are organized in an ad-hoc manner in CSV files and do not follow the fixed structure that is assumed by common data processing tools. Detecting the structure of such files is a prerequisite for extracting information from them, which can be addressed by detecting the semantic type, such as header, data, derived, or footnote, of each line or each cell. We propose the supervised-learning approach Strudel to detect the type of lines and cells. CSV files may also include aggregations. An aggregation represents the arithmetic relationship between a numeric cell and a set of other numeric cells. Our proposed AggreCol algorithm is capable of detecting aggregations of five arithmetic functions in CSV files. Note that stylistic features, such as font style and cell background color, do not exist in CSV files. Our proposed algorithms address the respective problems by using only content, contextual, and computational features.
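To illustrate the kind of relationship AggreCol looks for (this sketch only checks row-wise sums and is far simpler than the actual algorithm), one can test whether a numeric cell equals the sum of the other numeric cells in its row:

```python
import csv
from math import isclose

def find_row_sum_aggregations(path, tolerance=1e-6):
    """Very simplified aggregation check: report cells that equal the sum of
    all other numeric cells in the same row. The real AggreCol algorithm
    covers more functions (e.g. difference, average) and cell subsets."""
    hits = []
    with open(path, newline="") as handle:
        for row_idx, row in enumerate(csv.reader(handle)):
            numeric = {}
            for col_idx, cell in enumerate(row):
                try:
                    numeric[col_idx] = float(cell)
                except ValueError:
                    continue  # ignore non-numeric cells such as labels
            for col_idx, value in numeric.items():
                others = sum(v for c, v in numeric.items() if c != col_idx)
                if len(numeric) > 2 and isclose(value, others, abs_tol=tolerance):
                    hits.append((row_idx, col_idx))
    return hits

# Example usage with a hypothetical file containing a "total" column:
# print(find_row_sum_aggregations("report.csv"))
```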
Storing a relational table is also a common usage of CSV files. Primary keys and foreign keys are important metadata for relational databases, which are usually not present for database instances dumped as plain-text files. We propose the HoPF algorithm to holistically detect both constraints in relational databases. Our approach is capable of distinguishing true primary and foreign keys from the large number of spurious unique column combinations and inclusion dependencies that can be detected by state-of-the-art data profiling algorithms.
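The raw candidates that such a detection starts from can be sketched with pandas: unique columns as primary-key candidates and inclusion dependencies as foreign-key candidates. The holistic scoring and pruning of spurious candidates, which is the core of HoPF, is omitted here:

```python
import pandas as pd

def unique_columns(df: pd.DataFrame):
    """Single columns whose values are unique and non-null: PK candidates."""
    return [col for col in df.columns
            if df[col].is_unique and df[col].notna().all()]

def inclusion_dependencies(child: pd.DataFrame, parent: pd.DataFrame):
    """Column pairs (child_col, parent_col) where all child values also
    appear in the parent column: FK candidates."""
    candidates = []
    for c in child.columns:
        child_vals = set(child[c].dropna())
        for p in parent.columns:
            if child_vals and child_vals <= set(parent[p].dropna()):
                candidates.append((c, p))
    return candidates

# Hypothetical tables dumped from CSV files:
# orders = pd.read_csv("orders.csv"); customers = pd.read_csv("customers.csv")
# print(unique_columns(customers))
# print(inclusion_dependencies(orders, customers))
```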
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and merely affect the interactions of public and non-public stakeholders. A fundamental shift toward a joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways while having a minor impact on certain aspects (e.g., decentralized control), which account for this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including the decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which in turn amplify climate warming.
The assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is highly needed. My research revolves around the question “how does permafrost thaw affect its OM storage?” More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of OM of frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that biomarker proxies are useful for assessing the source of the OM and for distinguishing between OM derived from terrestrial higher plants, aquatic plants and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. Greenhouse gas production was highest in the permafrost sediments that had thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had become established in the recently thawed sediments, but not yet in the still-frozen sediments.
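The abstract does not name the individual proxies used; purely as an illustration of how such a degradation proxy is computed, the sketch below derives the carbon preference index (CPI) of long-chain n-alkanes, a widely used indicator of OM preservation, using one standard formulation. The function name and the example concentrations are hypothetical and not taken from the thesis.

```python
# Purely illustrative sketch: the thesis does not specify which biomarker
# proxies were applied. The carbon preference index (CPI) of long-chain
# n-alkanes is a common degradation proxy; values well above 1 indicate
# well-preserved, higher-plant-derived OM, values near 1 indicate degradation.

def carbon_preference_index(alkanes):
    """One standard CPI formulation over the C25-C33 odd-chain range.

    `alkanes` maps carbon chain length (int) to concentration
    (e.g. microgram per gram of sediment).
    """
    odd = sum(alkanes.get(c, 0.0) for c in range(25, 34, 2))        # C25..C33
    even_low = sum(alkanes.get(c, 0.0) for c in range(24, 33, 2))   # C24..C32
    even_high = sum(alkanes.get(c, 0.0) for c in range(26, 35, 2))  # C26..C34
    return 0.5 * (odd / even_low + odd / even_high)

# Hypothetical sample with strong odd-over-even dominance (well-preserved OM)
sample = {24: 0.8, 25: 2.1, 26: 0.9, 27: 3.4, 28: 1.0, 29: 4.2,
          30: 0.9, 31: 3.8, 32: 0.7, 33: 1.9, 34: 0.5}
print(f"CPI = {carbon_preference_index(sample):.1f}")
```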
My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of and the processes in deep frozen and thawed Yedoma sediments. These findings underline the relevance of studying OM in deep permafrost sediments.
Climate change is one of the greatest challenges to humanity in this century, and its most noticeable consequences are expected to be impacts on the water cycle, in particular on the distribution and availability of water, which is fundamental for all life on Earth. In this context, it is essential to better understand where and when water is available and what processes influence variations in water storages. While estimates of the overall terrestrial water storage (TWS) variations are available from the GRACE satellites, these represent the vertically integrated signal over all water stored in ice, snow, soil moisture, groundwater and surface water bodies. Therefore, complementary observational data and hydrological models are still required to determine the partitioning of the measured signal among different water storages and to understand the underlying processes. However, the application of large-scale observational data is limited by their specific uncertainties and by the inability to measure certain water fluxes and storages. Hydrological models, on the other hand, vary widely in their structure and process representation, and rarely incorporate additional observational data to minimize uncertainties that arise from their simplified representation of the complex hydrologic cycle.
In this context, this thesis aims to contribute to improving the understanding of global water storage variability by combining simple hydrological models with a variety of complementary Earth observation-based data. To this end, a model-data integration approach is developed in which the parameters of a parsimonious hydrological model are calibrated against several observational constraints simultaneously, including GRACE TWS, while taking into account each data set's specific strengths and uncertainties. This approach is used to investigate three specific aspects that are relevant for modelling and understanding the composition of large-scale TWS variations.
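The abstract does not spell out the calibration scheme; the following minimal sketch, with a toy one-bucket model and hypothetical forcing, observations and uncertainties, only illustrates the general idea of weighting several observational constraints by their uncertainties within a single cost function. The optimizer choice and all variable names are assumptions for illustration, not the thesis' implementation.

```python
# Minimal sketch (not the thesis' actual model): calibrate the parameters of a
# simple bucket-type water balance model against several observational
# constraints at once, weighting each constraint by its uncertainty.
import numpy as np
from scipy.optimize import minimize

def run_model(params, precip, pet):
    """Toy one-bucket model returning monthly TWS anomalies and runoff."""
    s_max, beta = params
    storage, tws, runoff = 0.5 * s_max, [], []
    for p, e in zip(precip, pet):
        q = p * (storage / s_max) ** beta            # saturation-dependent runoff
        storage = max(storage + p - e - q, 0.0)
        tws.append(storage)
        runoff.append(q)
    tws = np.array(tws)
    return tws - tws.mean(), np.array(runoff)        # anomalies, as in GRACE TWS

def cost(params, precip, pet, obs_tws, sig_tws, obs_q, sig_q):
    """Sum of uncertainty-weighted mean squared errors over all constraints."""
    sim_tws, sim_q = run_model(params, precip, pet)
    return (np.mean(((sim_tws - obs_tws) / sig_tws) ** 2)
            + np.mean(((sim_q - obs_q) / sig_q) ** 2))

# Hypothetical forcing and observations (e.g. GRACE TWS anomalies, gauged runoff)
rng = np.random.default_rng(0)
precip, pet = rng.gamma(2.0, 40.0, 120), np.full(120, 60.0)
obs_tws, obs_q = rng.normal(0.0, 50.0, 120), rng.gamma(2.0, 10.0, 120)
result = minimize(cost, x0=[500.0, 2.0],
                  args=(precip, pet, obs_tws, 20.0, obs_q, 5.0),
                  bounds=[(50.0, 2000.0), (0.5, 5.0)])
print("calibrated parameters:", result.x)
```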
The first study focuses on Northern latitudes, where snow and cold-region processes define the hydrological cycle. While the study confirms previous findings that seasonal dynamics of TWS are dominated by the cyclic accumulation and melt of snow, it reveals that inter-annual TWS variations, by contrast, are determined by variations in liquid water storages. Additionally, it is found to be important to consider compensatory effects of spatially heterogeneous hydrological variables when aggregating the contribution of different storage components over large areas. Hence, the determinants of TWS variations are scale-dependent, and the underlying driving mechanisms cannot simply be transferred between spatial and temporal scales. The second study supports these findings for the global land areas beyond the Northern latitudes as well.
This second study further identifies the considerable impact that the representation of vegetation in hydrological models has on the partitioning of TWS variations. Using spatio-temporally varying fields of Earth observation-based data to parameterize vegetation activity not only significantly improves model performance, but also reduces parameter equifinality and process uncertainties. Moreover, the representation of vegetation drastically changes the contribution of different water storages to overall TWS variability, emphasizing the key role of vegetation in water allocation, especially between sub-surface and delayed water storages. However, the study also identifies parameter equifinality regarding the decay of sub-surface and delayed water storages by either evapotranspiration or runoff, and thus emphasizes the need for further constraints on these processes.
The third study focuses on the role of river water storage, in particular on whether it is necessary to include computationally expensive river routing for model calibration and validation against the integrated GRACE TWS. The results suggest that river routing is not required for model calibration in such a global model-data integration approach, owing to the larger influence of other observational constraints; instead, the determinability of certain model parameters and associated processes is identified as an issue of greater relevance. In contrast to model calibration, considering river water storage derived from routing schemes can already significantly improve modelled TWS compared to GRACE observations, and thus should be considered for model evaluation against GRACE data.
Beyond these specific findings that contribute to an improved understanding and modelling of large-scale TWS variations, this thesis demonstrates the potential of combining simple modelling approaches with diverse Earth observation-based data to improve model simulations, overcome inconsistencies between different observational data sets, and identify areas that require further research. These findings encourage future efforts to take advantage of the increasing number of diverse global observational data sets.
Flares are magnetically driven explosions that occur in the atmospheres of all main sequence stars that possess an outer convection zone. Flaring activity is rooted in the magnetic dynamo that operates deep in the stellar interior; it propagates through all layers of the atmosphere from the corona to the photosphere and emits electromagnetic radiation from radio waves to X-rays. Eventually, this radiation, together with associated eruptions of energetic particles, is ejected into interplanetary space, where it impacts planetary atmospheres and dominates the space weather environments of young star-planet systems.
Thanks to the Kepler and Transiting Exoplanet Survey Satellite (TESS) missions, flare observations have become accessible for millions of stars and star-planet systems. The goal of this thesis is to use these flares as multifaceted messengers to understand stellar magnetism across the main sequence, investigate planetary habitability, and explore how close-in planets can affect the host star.
Using space-based observations obtained by the Kepler/K2 mission, I found that flaring activity declines with stellar age, but that this decline crucially depends on stellar mass and rotation. I calibrated the ages of the stars in my sample using their membership in open clusters spanning ages from the zero-age main sequence to solar age. This allowed me to reveal the rapid transition from an active, saturated flaring state to a more quiescent, inactive flaring behavior in early M dwarfs at about 600-800 Myr. This result is an important observational constraint on stellar activity evolution that I was able to de-bias using open clusters as an activity-independent age indicator.
The TESS mission quickly superseded Kepler and K2 as the main source of flares in low-mass M dwarfs. Using TESS 2-minute cadence light curves, I developed a new technique for flare localization and discovered, contrary to the commonly held belief, that flares do not occur uniformly across the stellar surface: in fast-rotating, fully convective stars, giant flares are preferentially located at high latitudes. This has implications both for our understanding of magnetic field emergence in these stars and for the impact on exoplanet atmospheres: a planet that orbits in the equatorial plane of its host may be spared the destructive effects of these poleward-emitting flares.
AU Mic is an early M dwarf and the most actively flaring planet host detected to date. Its innermost companion, AU Mic b, is one of the most promising targets for a first observation of flaring star-planet interactions. In these interactions, the planet influences the star, as opposed to space weather, where the planet is always on the receiving side. The effect reflects the properties of the magnetosphere shared by planet and star, as well as the so-far inaccessible magnetic properties of planets. In about 50 days of TESS monitoring data of AU Mic, I searched for statistically robust signs of flaring interactions with AU Mic b, in the form of flares that occur in excess of the star's intrinsic activity. The strongest, though still marginal, signal was recurring excess flaring in phase with the orbital period of AU Mic b. If it reflects a true signal, I estimate that extending the observing time by a factor of 2-3 will yield a statistically significant detection. Well within the reach of future TESS observations, this additional data may bring us closer to robustly detecting this effect than we have ever been.
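The statistical treatment in the thesis is more elaborate than what can be shown here; the sketch below only illustrates the basic idea of testing for excess flaring in phase with an orbital period, by phase-folding flare times and comparing the most populated phase bin against a randomized, uniform-in-time null distribution. The period value, flare times and function names are placeholders, not AU Mic data.

```python
# Illustrative sketch: test for excess flaring in phase with a planet's orbit
# by comparing phase-folded flare counts against a randomized null distribution.
import numpy as np

def phase_fold_counts(times, period, t0=0.0, n_bins=10):
    """Histogram of flare occurrence times folded onto the orbital phase."""
    phases = ((times - t0) / period) % 1.0
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts

def excess_p_value(flare_times, period, t_span, n_bins=10, n_draws=10_000, seed=1):
    """Fraction of random (uniform-in-time) realizations whose maximum per-bin
    count reaches the observed maximum: a simple p-value for 'excess flaring
    concentrated in some orbital phase' that accounts for the choice of bin."""
    rng = np.random.default_rng(seed)
    observed_max = phase_fold_counts(flare_times, period, n_bins=n_bins).max()
    n_flares = len(flare_times)
    exceed = 0
    for _ in range(n_draws):
        fake = rng.uniform(0.0, t_span, n_flares)
        if phase_fold_counts(fake, period, n_bins=n_bins).max() >= observed_max:
            exceed += 1
    return exceed / n_draws

# Hypothetical usage with placeholder values (all times in days)
period, t_span = 8.46, 50.0
flare_times = np.sort(np.random.default_rng(2).uniform(0.0, t_span, 75))
print(f"p-value for phased excess: {excess_p_value(flare_times, period, t_span):.3f}")
```

Note that this simple null assumes a constant intrinsic flare rate; a more careful test would draw the null realizations from the star's measured quiescent flare frequency instead.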
This thesis demonstrates the immense scientific value of space-based, long-baseline flare monitoring, and the versatility of flares as carriers of information about the magnetism of star-planet systems. Many discoveries still lie in wait in the vast archives that Kepler and TESS have produced over the years. Flares are intense spotlights on the magnetic structures in star-planet systems that are otherwise far below our resolution limits. The ongoing TESS mission, and soon PLATO, will further open the door to an in-depth understanding of small-scale, dynamic magnetic fields on low-mass stars and the space weather environments they create.