The development of speaking competence is widely regarded as a central aspect of second language (L2) learning. It may be questioned, however, whether the currently predominant ways of conceptualising the term fully capture the complexity of the construct: although there is growing recognition that language primarily constitutes a tool for communication and participation in social life, conceptualisations of speaking competence rarely incorporate the ability to interact and co-construct meaning with co-participants. Accordingly, skills allowing for the successful accomplishment of interactional tasks (such as orderly speaker change, and resolving hearing and understanding trouble) also remain largely unrepresented in language teaching and assessment. As fostering the ability to successfully use the L2 within social interaction should arguably be a main objective of language teaching, it appears pertinent to broaden the construct of speaking competence by incorporating interactional competence (IC). Despite growing research interest in the conceptualisation and development of (L2) IC, many of the materials and instruments required for its teaching and assessment, and thus for fostering a broader understanding of speaking competence in the L2 classroom, still await development. This book introduces an approach to the identification of candidate criterial features for the assessment of EFL learners’ L2 repair skills. Based on a corpus of video-recorded interaction between EFL learners, and following conversation-analytic and interactional-linguistic methodology as well as drawing on basic premises of research in the framework of Conversation Analysis for Second Language Acquisition, differences between (groups of) learners in terms of their L2 repair conduct are investigated through qualitative and inductive analyses. Candidate criterial features are derived from the analysis results.
This book not only contributes to the operationalisation of L2 IC (and of L2 repair skills in particular), but also lays the groundwork for the construction of assessment scales and rubrics geared towards the evaluation of EFL learners’ L2 interactional skills.
Knowledge graphs are structured repositories of knowledge that store facts
about the general world or a particular domain in terms of entities and
their relationships. Owing to the heterogeneity of use cases that are served
by them, there arises a need for the automated construction of domain-
specific knowledge graphs from texts. While there have been many research
efforts towards open information extraction for automated knowledge graph
construction, these techniques do not perform well in domain-specific settings.
Furthermore, regardless of whether they are constructed automatically from
specific texts or based on real-world facts that are constantly evolving, all
knowledge graphs inherently suffer from incompleteness as well as errors in
the information they hold.
This thesis investigates the challenges encountered during knowledge graph
construction and proposes techniques for their curation (a.k.a. refinement)
including the correction of semantic ambiguities and the completion of missing
facts. Firstly, we leverage existing approaches for the automatic construction
of a knowledge graph in the art domain with open information extraction
techniques and analyse their limitations. In particular, we focus on the
challenging task of named entity recognition for artwork titles and show
empirical evidence of performance improvement with our proposed solution
for the generation of annotated training data.
Towards the curation of existing knowledge graphs, we identify the issue of
polysemous relations that represent different semantics based on the context.
Having concrete semantics for relations is important for downstream applications (e.g. question answering) that are supported by knowledge graphs.
Therefore, we define the novel task of finding fine-grained relation semantics
in knowledge graphs and propose FineGReS, a data-driven technique that
discovers potential sub-relations with fine-grained meaning from existing polysemous relations. We leverage knowledge representation learning methods
that generate low-dimensional vectors (or embeddings) for knowledge graphs
to capture their semantics and structure. The efficacy and utility of the
proposed technique are demonstrated by comparing it with several baselines
on the entity classification use case.
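The knowledge representation learning step described above can be illustrated with a minimal translation-based embedding sketch in the style of TransE. The thesis does not specify which embedding model is used, and the entities, relations, and dimensions below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary of entities and relations (illustrative only).
entities = {"Monet": 0, "Impressionism": 1, "WaterLilies": 2}
relations = {"movement": 0, "created": 1}

dim = 8
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

def transe_score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible fact."""
    return np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

# During training, the embeddings are adjusted so that true triples such as
# (Monet, movement, Impressionism) score lower than corrupted triples.
true_score = transe_score("Monet", "movement", "Impressionism")
corrupt_score = transe_score("Monet", "movement", "WaterLilies")
```

Such low-dimensional vectors capture both the structure of the graph (which entities co-occur in triples) and, to an extent, its semantics, which is what FineGReS exploits for discovering sub-relations.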
Further, we explore the semantic representations in knowledge graph embedding models. In the past decade, these models have shown state-of-the-art results for the task of link prediction in the context of knowledge graph completion. In view of the popularity and widespread application of the embedding
techniques not only for link prediction but also for different semantic tasks,
this thesis presents a critical analysis of the embeddings by quantitatively
measuring their semantic capabilities. We investigate and discuss the reasons
for the shortcomings of embeddings in terms of the characteristics of the
underlying knowledge graph datasets and the training techniques used by
popular models.
Following up on this, we propose ReasonKGE, a novel method for generating
semantically enriched knowledge graph embeddings by taking into account the
semantics of the facts that are encapsulated by an ontology accompanying the
knowledge graph. With a targeted, reasoning-based method for generating
negative samples during the training of the models, ReasonKGE is able to
not only enhance the link prediction performance, but also reduce the number
of semantically inconsistent predictions made by the resultant embeddings,
thus improving the quality of knowledge graphs.
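The general idea of reasoning-based negative sampling can be sketched with simple ontological domain/range constraints. This is a simplified illustration of the concept, not ReasonKGE's actual implementation, and all entity names and relation signatures below are invented for the example:

```python
# Hypothetical toy ontology: entity types and relation signatures.
entity_types = {"Mona_Lisa": "Artwork", "Da_Vinci": "Person", "Paris": "Place"}
# Domain/range constraints: painted_by maps an Artwork to a Person.
relation_signature = {"painted_by": ("Artwork", "Person")}

def inconsistent_negatives(triple, all_entities):
    """Generate targeted negative samples by replacing the tail with
    entities whose type violates the relation's range constraint."""
    h, r, t = triple
    dom, rng_type = relation_signature[r]
    negs = []
    for e in all_entities:
        # A tail of the wrong type yields a semantically inconsistent
        # triple -- an informative negative sample for training.
        if e != t and entity_types[e] != rng_type:
            negs.append((h, r, e))
    return negs

negs = inconsistent_negatives(("Mona_Lisa", "painted_by", "Da_Vinci"),
                              ["Da_Vinci", "Paris", "Mona_Lisa"])
# negs contains e.g. ("Mona_Lisa", "painted_by", "Paris")
```

Training against such targeted negatives, rather than uniformly corrupted triples, is what steers the embeddings away from semantically inconsistent predictions.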
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to one's employer, school or university affects the place where we live, and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
The Antarctic ice sheet is the largest freshwater reservoir worldwide. If it were to melt completely, global sea levels would rise by about 58 m. Calculating projections of the Antarctic contribution to sea level rise under global warming conditions is an ongoing effort that still yields large ranges in predictions. Among the reasons for this are uncertainties related to the physics of ice sheet modeling. These
uncertainties include two processes that could lead to runaway ice retreat: the Marine Ice Sheet Instability (MISI), which causes rapid grounding line retreat on retrograde bedrock, and the Marine Ice Cliff Instability (MICI), in which tall ice cliffs become unstable and calve off, exposing even taller ice cliffs.
In my thesis, I investigated both marine instabilities (MISI and MICI) using the Parallel Ice Sheet Model (PISM), with a focus on MICI.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this represents a non-addressable parameter, requiring new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods utilising 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, separating two classes only, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
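A minimal reading of this restriction step can be sketched as follows, assuming posterior estimates over the full class set are already available; the function name and toy probabilities are illustrative, not the thesis' model:

```python
def restrict_prediction(probs, M):
    """Restrict a posterior distribution over all classes to a non-empty
    subset M and renormalize -- a minimal sketch of dynamic classification.
    probs: dict mapping class -> estimated posterior probability.
    M: non-empty subset of the classes, known only at prediction time."""
    total = sum(probs[c] for c in M)
    restricted = {c: probs[c] / total for c in M}
    # Predict the most probable class among the remaining candidates.
    return max(restricted, key=restricted.get), restricted

probs = {"A": 0.5, "B": 0.3, "C": 0.2}
pred, dist = restrict_prediction(probs, {"B", "C"})
# pred == "B": once A is ruled out, B is the most probable remaining class.
```

The point of the thesis is that M can change between any two consecutive predictions, so the restriction must be applied at prediction time rather than baked into training.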
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and in particular corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants based on kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets.
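The core of Platt scaling, fitting a sigmoid to raw classifier scores by minimizing the log-loss, can be sketched as follows. This is a simplified illustration without Platt's label-smoothing refinement, and all scores and labels are toy values:

```python
import numpy as np

def platt_scale(scores, labels, iters=2000, lr=0.1):
    """Fit sigmoid(a*s + b) to binary labels by gradient descent on the
    log-loss -- the essence of Platt scaling (minimal sketch)."""
    a, b = 1.0, 0.0
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))
        # Gradients of the mean log-loss w.r.t. a and b.
        grad_a = np.mean((p - y) * s)
        grad_b = np.mean(p - y)
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Toy example: raw SVM-like decision values and their true labels.
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
a, b = platt_scale(scores, labels)
calibrated = 1.0 / (1.0 + np.exp(-(a * np.array(scores) + b)))
# The calibrated outputs increase monotonically with the raw scores.
```

Because the fitted map is a monotonic sigmoid, Platt scaling preserves the ranking of the raw scores while reshaping them into probability estimates, which is why it falls under the monotonic calibration analyzed in the thesis.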
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information.
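A simple voting-based variant of pairwise coupling with a dynamic class restriction might look as follows. This is an illustrative sketch, not the weighted evidence-theoretic combination developed in the thesis, and the pairwise estimates are toy values:

```python
def pairwise_couple(r, classes, M=None):
    """Fuse one-vs-one probability estimates into class scores by voting
    (a simple coupling variant).
    r[(i, j)] = estimated P(class i | sample belongs to i or j).
    M: optional dynamic restriction to a non-empty subset of the classes."""
    active = list(M) if M is not None else list(classes)
    scores = {}
    for i in active:
        # Only pairs inside the restricted class set contribute.
        scores[i] = sum(r[(i, j)] for j in active if j != i)
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()}

# Toy pairwise estimates for classes A, B, C.
r = {("A", "B"): 0.7, ("B", "A"): 0.3,
     ("A", "C"): 0.6, ("C", "A"): 0.4,
     ("B", "C"): 0.8, ("C", "B"): 0.2}
full = pairwise_couple(r, ["A", "B", "C"])
restricted = pairwise_couple(r, ["A", "B", "C"], M=["B", "C"])
# In the full problem A wins; once A is excluded dynamically, B wins.
```

Restricting the coupling to pairs within M is what lets dynamic class information enter at prediction time without retraining the underlying binary classifiers.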
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
The present dissertation conducts empirical research on the relationship between urban life and its economic costs, especially for the environment. On the one hand, existing gaps in research on the influence of population density on air quality are closed and, on the other hand, innovative policy measures in the transport sector are examined that are intended to make metropolitan areas more sustainable. The focus is on air pollution, congestion and traffic accidents, which are important for general welfare issues and represent significant cost factors for urban life. They affect a significant proportion of the world's population. While 55% of the world's people already lived in cities in 2018, this share is expected to reach approximately 68% by 2050.
The four self-contained chapters of this thesis can be divided into two sections: Chapters 2 and 3 provide new causal insights into the complex interplay between urban structures and air pollution. Chapters 4 and 5 then examine policy measures to promote non-motorised transport and their influence on air quality as well as congestion and traffic accidents.
Neural conversation models aim to predict appropriate contributions to a (given) conversation by using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011), who investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and therefore research in this area has been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained Self-Attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a transformation from trying to somehow carry on a conversation to generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate on the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing Grice's maxims as demands on conversation models: being fluent, informative, consistent with the given context, coherent, and following a social norm. Then, we identify different targets (or intervention points) for achieving conversational appropriateness by investigating recent research in that field.
We then investigate consistency towards context in greater detail, as one aspect of our interpretation of appropriateness.
During the research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use the data in this thesis to gather new insights into context-augmented dialogue generation.
We further introduce a new way of encoding context within Self-Attention based neural networks. For that, we elaborate on the issue of space complexity arising from knowledge graphs, and propose a concise encoding strategy for structured context, inspired by graph neural networks (Gilmer et al., 2017), to reduce the space complexity of the additional context. We discuss limitations of context-augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
Lastly, we analyzed the potential of reinforcement and transfer learning to improve context-consistency for neural conversation models. We find that current reward functions need to be more precise to enable the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all pre-library-preparation miRNA quality controls were successful, and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that they reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal range and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. The differentially expressed and filtered miRNAs were subjected to miRDB for target prediction. Of these four miRNAs, three were downregulated (hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136) and one was upregulated (hsa-miR-550a-3p). miRNA target prediction showed that chronic pain in polyneuropathy might result from a combination of miRNA-mediated high blood flow/pressure and dysregulations/disbalances of neural activity. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 during PKA phosphorylation of ARMS, was characterized. For this purpose, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation seems not to influence the TRPV1/ARMS interaction rate. While phosphorylation of ARMS-T903 does not increase the interaction rate with TRPV1, ARMS-S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the TRPV1/ARMS interaction rate, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical therapeutic analgesic strategy to stop ARMS-mediated TRPV1 sensitization.
In this thesis, I present my contributions to the field of ultrafast molecular spectroscopy. Using the molecule 2-thiouracil as an example, I use ultrashort x-ray pulses from free-electron lasers to study the relaxation dynamics of gas-phase molecular samples. Taking advantage of the element- and site-selectivity typical of x-rays, I investigate the charge flow and geometrical changes in the excited states of 2-thiouracil.
In order to understand the photoinduced dynamics of molecules, knowledge about the ground-state structure and the relaxation after photoexcitation is crucial. Therefore, a part of this thesis covers the electronic ground-state spectroscopy of mainly 2-thiouracil to provide the basis for the time-resolved experiments. Many of the previously published studies that focused on the gas-phase time-resolved dynamics of thionated uracils after UV excitation relied on information from solution phase spectroscopy to determine the excitation energies. This is not an optimal strategy as solvents alter the absorption spectrum and, hence, there is no guarantee that liquid-phase spectra resemble the gas-phase spectra. Therefore, I measured the UV-absorption spectra of all three thionated uracils to provide a gas-phase reference and, in combination with calculations, we determined the excited states involved in the transitions.
In contrast to the UV absorption, the literature on the x-ray spectroscopy of thionated uracils is sparse. Thus, we measured static photoelectron, Auger-Meitner and x-ray absorption spectra at the sulfur L edge before or in parallel with the time-resolved experiments we performed at FLASH (DESY, Hamburg). In addition, (so far unpublished) measurements performed at the synchrotron SOLEIL (France) have been included in this thesis; they show the spin-orbit splitting of the S 2p photoline and its satellite, which was not observed at the free-electron laser.
The relaxation of 2-thiouracil has been studied extensively in recent years with ultrafast visible and ultraviolet methods, showing the ultrafast nature of the molecular processes after photoexcitation. Ultrafast spectroscopy probing the core-level electrons provides a complementary approach to common optical ultrafast techniques. The method inherits its local sensitivity from the strongly localised core electrons. The core energies and core-valence transitions are strongly affected by local valence charge and geometry changes, and past studies have utilised this sensitivity to investigate the molecular processes reflected by the ultrafast dynamics. We have built an apparatus that meets the requirements for performing time-resolved x-ray spectroscopy on molecules in the gas phase. With this apparatus, we performed UV-pump x-ray-probe electron spectroscopy at the S 2p edge of 2-thiouracil using the free-electron laser FLASH2. While the UV triggers the relaxation dynamics, the x-ray probes the single sulfur atom inside the molecule. I implemented photoline self-referencing for the photoelectron spectral analysis. This minimises the spectral jitter of the FEL, which is due to the underlying self-amplified spontaneous emission (SASE) process. With this approach, we were not only able to study dynamical changes in the binding energy of the electrons but also to detect an oscillatory behaviour in the shift of the observed photoline, which we associate with non-adiabatic dynamics involving several electronic states. Moreover, we were able to link the UV-induced shift in binding energy to the local charge flow at the sulfur atom, which is directly connected to the electronic state. Furthermore, the analysis of the Auger-Meitner electrons shows that the energy shifts observed at early stages of the photoinduced relaxation are related to the geometry change in the molecule.
More specifically, the observed increase in kinetic energy of the Auger-Meitner electrons correlates with a previously predicted C=S bond stretch.
Why do exercises in collaborative governance often witness more impasse than advantage? This cumulative dissertation undertakes a micro-level analysis of collaborative governance to tackle this research puzzle. It situates micropolitics at the very center of analysis: a wide range of activities, interventions, and tactics used by actors – be they conveners, facilitators, or participants – to shape the collaborative exercise. It is by focusing on these daily minutiae, and on the consequences that they bring along, the study argues, that we can better understand why and how collaboration can become stuck or unproductive. To do so, the foundational part of this dissertation (Article 1) uses power as a sensitizing concept to investigate the micro-dynamics that shape collaboration. It develops an analytical approach to advance the study of collaborative governance at the empirical level under a power-sensitive and process-oriented perspective. The subsequent articles follow the dissertation's red thread of investigating the micropolitics of collaborative governance by showing facilitation artefacts' interrelatedness and contribution to the potential success or failure of collaborative arrangements (Article 2); and by examining the specialized knowledge, skills and practices mobilized when designing a collaborative process (Article 3). The work is based on an abductive research approach, tacking back and forth between empirical data and theory, and offers a repertoire of concepts – from analytical terms (designed and emerging interaction orders, flows of power, arenas for power), to facilitation practices (scripting, situating, and supervising) and types of knowledge (process expertise) – to illustrate and study the detailed and constant work (and rework) that surrounds collaborative arrangements. These concepts sharpen the way researchers can look at, observe, and understand collaborative processes at a micro level. 
The thesis thereby elucidates the subtleties of power, which may be overlooked if we focus only on outcomes rather than the processes that engender them, and supports efforts to identify potential sources of impasse.
The Arctic nearshore zone plays a key role in the carbon cycle. Organic-rich sediments are eroded off permafrost-affected coastlines and can be transferred directly to the nearshore zone. Permafrost in the Arctic stores a large amount of organic matter and is vulnerable to thermo-erosion, which is expected to increase due to climate change. This will likely result in higher sediment loads in nearshore waters and has the potential to alter local ecosystems by limiting light transmission into the water column, thus restricting primary production to its top-most part, and by increasing nutrient export from coastal erosion. Greater organic matter input could result in the release of greenhouse gases to the atmosphere. Climate change also acts upon the fluvial system, leading to greater discharge to the nearshore zone. It also leads to decreasing sea-ice cover, which will both increase wave energy and lengthen the open-water season. Yet, knowledge of these processes and the resulting impact on the nearshore zone is scarce, because access to, and instrument deployment in, the nearshore zone is challenging.
Remote sensing can alleviate these issues by providing rapid data delivery in otherwise non-accessible areas. However, the waters of the Arctic nearshore zone are optically complex, with multiple influencing factors, such as organic-rich suspended sediments, colored dissolved organic matter (cDOM), and phytoplankton. The goal of this dissertation was to use remotely sensed imagery to monitor processes related to turbidity caused by suspended sediments in the Arctic nearshore zone. In-situ measurements of water-leaving reflectance and surface water turbidity were used to calibrate a semi-empirical algorithm that retrieves turbidity from satellite imagery. Based on this algorithm and ancillary ocean and climate variables, the mechanisms underpinning nearshore turbidity in the Arctic were identified at a resolution not achieved before.
The calibration of the Arctic Nearshore Turbidity Algorithm (ANTA) was based on in-situ measurements from the coastal and inner-shelf waters around Herschel Island Qikiqtaruk (HIQ) in the western Canadian Arctic from the summer seasons of 2018 and 2019. The ANTA performed better than existing algorithms developed for global applications in retrieving turbidity from remotely sensed imagery. These existing algorithms lacked validation data from permafrost-affected waters and were thus not able to reflect the complexity of Arctic nearshore waters. The ANTA has a higher sensitivity towards the lowest turbidity values, which is an asset for identifying sediment pathways in the nearshore zone. Its transferability to areas beyond HIQ was successfully demonstrated using turbidity measurements matching satellite image recordings from Adventfjorden, Svalbard. The ANTA is a powerful tool that provides robust turbidity estimations in a variety of Arctic nearshore environments.
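The thesis does not spell out the ANTA's functional form here, but single-band semi-empirical turbidity algorithms commonly relate turbidity to water-leaving reflectance through a saturating ratio (Nechad-style). The sketch below uses invented coefficients, not the fitted ANTA values:

```python
def turbidity_from_reflectance(rho_w, A=300.0, C=0.2):
    """Semi-empirical single-band turbidity retrieval of the common form
        T = A * rho_w / (1 - rho_w / C)
    rho_w: water-leaving reflectance (dimensionless).
    A [FNU] and C are calibration coefficients fitted against in-situ
    turbidity measurements -- illustrative values here, not ANTA's."""
    return A * rho_w / (1.0 - rho_w / C)

# Turbidity grows nonlinearly as reflectance approaches saturation (C).
low = turbidity_from_reflectance(0.01)   # clear-ish water
high = turbidity_from_reflectance(0.10)  # sediment-laden water
```

Calibrating A and C against matched in-situ reflectance/turbidity pairs, as done with the HIQ field data, is what adapts such a form to the optically complex permafrost-affected waters.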
Drivers of nearshore turbidity in the Arctic were analyzed by combining ANTA results for the summer season 2019 from HIQ with ocean and climate variables obtained from the weather station at HIQ, the ERA5 reanalysis database, and the Mackenzie River discharge. ERA5 reanalysis data were obtained as domain averages over the Canadian Beaufort Shelf. Nearshore turbidity was linearly correlated with wind speed, significant wave height and wave period. Interestingly, nearshore turbidity was correlated only with wind speed over the shelf, not with the in-situ wind measurements from the weather station at HIQ. This shows that nearshore turbidity, albeit of limited spatial extent, is influenced by weather conditions multiple kilometers away rather than in its direct vicinity. The large influence of wave energy on nearshore turbidity indicates that freshly eroded material off the coast is a major contributor to the nearshore sediment load. This contrasts with results from temperate and tropical oceans, where tides and currents are the major drivers of nearshore turbidity. The Mackenzie River discharge was not identified as a driver of nearshore turbidity in 2019; however, the analysis of 30 years of Landsat archive imagery from 1986 to 2016 suggests a direct link between the prevailing wind direction, which heavily influences the extent of the Mackenzie River plume, and nearshore turbidity around HIQ. This discrepancy could be caused by the abnormal discharge behavior of the Mackenzie River in 2019.
This dissertation has substantially advanced the understanding of suspended sediment processes in the Arctic nearshore zone and provided new monitoring tools for future studies. The presented results will help to understand the role of the Arctic nearshore zone in the carbon cycle under a changing climate.
The Pamir Frontal Thrust (PFT), located in the Trans Alai range in Central Asia, is the principal active fault of the intracontinental India-Eurasia convergence zone and constitutes the northernmost boundary of the Pamir orogen at the NW edge of this collision zone. Frequent seismic activity and ongoing crustal shortening reflect the northward propagation of the Pamir into the intermontane Alai Valley. Quaternary deposits are being deformed and uplifted by the advancing thrust front of the Trans Alai range. The Alai Valley separates the Pamir range front from the Tien Shan mountains in the north; it is the vestige of a formerly contiguous basin that linked the Tadjik Depression in the west with the Tarim Basin in the east. GNSS measurements across the Central Pamir document a shortening rate of ~25 mm/yr, with a dramatic decrease of ~10-15 mm/yr over a short distance across the northernmost Trans Alai range. This suggests that almost half of the shortening in the greater Pamir - Tien Shan collision zone is absorbed along the PFT. The short-term (geodetic) and long-term (geologic) shortening rates across the northern Pamir appear to be at odds, implying an apparent slip-rate discrepancy along the frontal fault system of the Pamir. Moreover, present-day seismicity and historical records have not revealed great Mw > 7 earthquakes that might be expected with such significant slip accommodation. In contrast, recent and historic earthquakes exhibit complex rupture patterns within and across seismotectonic segments bounding the Pamir mountain front, challenging our understanding of fault interaction and the seismogenic potential of this area, and leaving the relationships between seismicity and the geometry of the thrust front not well understood.
In this dissertation I employ different approaches to assess the seismogenic behavior along the PFT. Firstly, I provide paleoseismic data from five trenches across the central PFT segment (cPFT) and compute a segment-wide earthquake chronology over the past 16 kyr. This novel dataset provides important insights into the recurrence, magnitude, and rupture extent of past earthquakes along the cPFT. I interpret five, possibly six, paleoearthquakes that have ruptured the Pamir mountain front since ∼7 ka and ∼16 ka, respectively. My results indicate that at least three major earthquakes ruptured the full segment length and possibly crossed segment boundaries, with a recurrence interval of ∼1.9 kyr and potential magnitudes of up to Mw 7.4. Importantly, I did not find evidence for great (i.e., Mw ≥8) earthquakes.
Secondly, I combine my paleoseismic results with morphometric analyses to establish a segment-wide distribution of the cumulative vertical separation along offset fluvial terraces, and I model a long-term slip rate for the cPFT. My investigations reveal discrepancies between the extents of slip and rupture during apparent partial segment ruptures in the western half of the cPFT. Combined with significantly higher fault-scarp offsets in this sector of the cPFT, these observations indicate a more mature fault section with a potential for future fault linkage. I estimate an average rate of horizontal motion for the cPFT of 4.1 ± 1.5 mm/yr during the past ∼5 kyr, which does not fully match the GNSS-derived present-day shortening rate of ∼10 mm/yr. This suggests a complex distribution of strain accumulation and potential slip partitioning between the cPFT and additional faults and folds within the Pamir that may be associated with a partially locked regional décollement.
The third part of the thesis provides new insights regarding the surface rupture of the 2008 Mw 6.6 Nura earthquake, which ruptured along the eastern PFT sector. I explore this rupture in the context of its structural complexity by combining extensive field observations with high-resolution digital surface models. I provide a map of the rupture extent, net slip measurements, and updated regional geological observations. Based on these data I propose a tectonic model for this area in which secondary flexural-slip faulting along the steeply dipping bedding of folded Paleogene sedimentary strata is related to deformation along a deeper blind thrust. Here, the strain release appears to be transferred from the PFT towards older inherited basement structures within the area of the advanced Pamir-Tien Shan collision.
The extensive research of my dissertation results in a paleoseismic database of the past ∼16 kyr, which contributes not only to the understanding of the seismogenic behavior of the PFT, but also to that of segmented thrust-fault systems in active collisional settings. My observations underscore the importance of combining different methodological approaches in the geosciences, especially in structurally complex tectonic settings like the northern Pamir. The discrepancy between GNSS-derived present-day deformation rates and those from different geological archives in the central part, as well as the widespread distribution of deformation due to earthquake-triggered strain transfer in the eastern part, reveals the complexity of this collision zone and calls for future studies involving multi-temporal and interdisciplinary approaches.
The importance of carbohydrate structures is enormous due to their ubiquity in our lives. The development of so-called glycomaterials is a result of this tremendous significance. They are used not only for research into fundamental biological processes but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials, comprising the synthesis of glycoderivatives, glycomonomers and glycopolymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation, significantly shortening the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhorshetov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in the β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation was developed in this work, which likewise requires no protecting groups. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis: the ribose-containing nucleoside cytidine was transesterified using the lipase Novozym 435 under microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also applied to the counterpart guanosine to obtain the corresponding monomer. After the nucleoside-based monomers were obtained, they were block copolymerized using the RAFT method, with pre-synthesized pHPMA serving as macroCTA to yield cytidine- or guanosine-containing block copolymers.
These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis spectroscopy, DLS and SEM, with a view to serving as a potential thermoresponsive drug delivery system.
Diabetes is hallmarked by high blood glucose levels, which cause progressive generalised vascular damage, leading to microvascular and macrovascular complications. Diabetes-related complications cause severe and prolonged morbidity and are a major cause of mortality among people with diabetes. Despite increasing attention to risk factors of type 2 diabetes, existing evidence is scarce or inconclusive regarding vascular complications and research investigating both micro- and macrovascular complications is lacking. This thesis aims to contribute to current knowledge by identifying risk factors – mainly related to lifestyle – of vascular complications, addressing methodological limitations of previous literature and providing comparative data between micro- and macrovascular complications.
To address this overall aim, three specific objectives were set. The first was to investigate the effects of diabetes complication burden and lifestyle-related risk factors on the incidence of (further) complications. Studies suggest that diabetes complications are interrelated. However, they have been studied mainly independently of individuals’ complication burden. A five-state time-to-event model was constructed to examine the longitudinal patterns of micro- (kidney disease, neuropathy and retinopathy) and macrovascular complications (myocardial infarction and stroke) and their association with the occurrence of subsequent complications. Applying the same model, the effect of modifiable lifestyle factors, assessed alone and in combination with complication load, on the incidence of diabetes complications was studied. The selected lifestyle factors were body mass index (BMI), waist circumference, smoking status, physical activity, and intake of coffee, red meat, whole grains, and alcohol. Analyses were conducted in a cohort of 1199 participants with incident type 2 diabetes from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam, who were free of vascular complications at diabetes diagnosis. During a median follow-up time of 11.6 years, 96 cases of macrovascular complications (myocardial infarction and stroke) and 383 microvascular complications (kidney disease, neuropathy and retinopathy) were identified. In multivariable-adjusted models, the occurrence of a microvascular complication was associated with a higher incidence of further micro- (Hazard ratio [HR] 1.90; 95% Confidence interval [CI] 0.90, 3.98) and macrovascular complications (HR 4.72; 95% CI 1.25, 17.68), compared with persons without a complication burden. In addition, participants who developed a macrovascular event had a twofold higher risk of future microvascular complications (HR 2.26; 95% CI 1.05, 4.86). 
The models were adjusted for age, sex, state duration, education, lifestyle, glucose-lowering medication, and pre-existing conditions of hypertension and dyslipidaemia. Smoking was positively associated with macrovascular disease, while an inverse association was observed with higher coffee intake. Whole grain and alcohol intake were inversely associated with microvascular complications, and a U-shaped association was observed for red meat intake. BMI and waist circumference were positively associated with microvascular events. The associations between lifestyle factors and incidence of complications were not modified by concurrent complication burden, except for red meat intake and smoking status, where the associations were attenuated among individuals with a previous complication.
The second objective was to perform an in-depth investigation of the association between BMI and BMI change and the risk of micro- and macrovascular complications. There is an ongoing debate on the association between obesity and the risk of macrovascular and microvascular outcomes in type 2 diabetes, with studies suggesting a protective effect among people with overweight or obesity. These findings, however, might be limited due to suboptimal control for smoking, pre-existing chronic disease, or short follow-up. After additional exclusion of persons with a cancer history at diabetes onset, the associations between pre-diagnosis BMI and relative annual change between pre- and post-diagnosis BMI and the incidence of complications were evaluated in multivariable-adjusted Cox models. The analyses were adjusted for age, sex, education, smoking status and duration, physical activity, alcohol consumption, adherence to the Mediterranean diet, and family history of diabetes and cardiovascular disease (CVD). Among 1083 EPIC-Potsdam participants, 85 macrovascular and 347 microvascular complications were identified during a median follow-up period of 10.8 years. Higher pre-diagnosis BMI was associated with an increased risk of total microvascular complications (HR per 5 kg/m² 1.21; 95% CI 1.07, 1.36), kidney disease (HR 1.39; 95% CI 1.21, 1.60) and neuropathy (HR 1.12; 95% CI 0.96, 1.31); but no association was observed for macrovascular complications (HR 1.05; 95% CI 0.81, 1.36). Effect modification was not evident by sex, smoking status, or age groups. In analyses according to BMI change categories, BMI loss of more than 1% indicated a decreased risk of total microvascular complications (HR 0.62; 95% CI 0.47, 0.80), kidney disease (HR 0.57; 95% CI 0.40, 0.81) and neuropathy (HR 0.73; 95% CI 0.52, 1.03), compared with participants with a stable BMI. No clear association was observed for macrovascular complications (HR 1.04; 95% CI 0.62, 1.74).
The impact of BMI gain on diabetes-related vascular disease was less evident. Associations were consistent across strata of age, sex, pre-diagnosis BMI, or medication but appeared stronger among never-smokers than current or former smokers.
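Hazard ratios like the "per 5 kg/m²" estimates above come from rescaling a Cox-model log-hazard coefficient. The sketch below shows that rescaling with an illustrative coefficient and standard error (assumed values, not the EPIC-Potsdam estimates):

```python
# Illustrative sketch: rescaling a Cox log-hazard coefficient to a
# "per 5 kg/m^2" hazard ratio with a Wald 95% CI. Beta and SE are assumptions.
import math

def hazard_ratio(beta, se, scale=1.0, z=1.96):
    """Return (HR, lower, upper) for a log-hazard coefficient rescaled by `scale`."""
    b, s = beta * scale, se * scale
    return (math.exp(b), math.exp(b - z * s), math.exp(b + z * s))

beta_per_unit = 0.0381   # illustrative log-HR per 1 kg/m^2 of BMI
se_per_unit = 0.0124     # illustrative standard error

hr, lo, hi = hazard_ratio(beta_per_unit, se_per_unit, scale=5.0)
print(f"HR per 5 kg/m^2: {hr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

Because the coefficient lives on the log-hazard scale, scaling the exposure multiplies the coefficient before exponentiation; the CI is computed on the log scale and then exponentiated, which is why published CIs are asymmetric around the HR.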
The last objective was to evaluate whether individuals with a high-risk profile for diabetes and CVD also have a greater risk of complications. Within the EPIC-Potsdam study, two accurate prognostic tools were developed, the German Diabetes Risk Score (GDRS) and the CVD Risk Score (CVDRS), which predict the 5-year type 2 diabetes risk and 10-year CVD risk, respectively. Both scores come in a non-clinical and a clinical version. Components of the risk scores include age, sex, waist circumference, prevalence of hypertension, family history of diabetes or CVD, lifestyle factors, and clinical factors (only in the clinical versions). The association of the risk scores with diabetes complications and their discriminatory performance for complications were assessed. In crude Cox models, both versions of the GDRS and CVDRS were positively associated with macrovascular complications, total microvascular complications, kidney disease and neuropathy. A higher GDRS was also associated with an elevated risk of retinopathy. The discrimination of the scores (clinical and non-clinical) was poor for all complications, with the C-index ranging from 0.58 to 0.66 for macrovascular complications and from 0.60 to 0.62 for microvascular complications.
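The C-index quoted above is Harrell's concordance index: the fraction of comparable subject pairs in which the higher-risk subject experiences the event first (0.5 is chance, 1.0 perfect discrimination). A simplified sketch, assuming for illustration that every subject experiences the event (i.e., ignoring censoring, which the real computation must handle), is:

```python
# Illustrative sketch of Harrell's C-index, simplified to fully observed
# event times (no censoring). Synthetic scores and times, not study data.
def c_index(risk_scores, event_times):
    """Fraction of usable pairs where the higher-scoring subject fails earlier."""
    concordant = ties = usable = 0
    n = len(risk_scores)
    for i in range(n):
        for j in range(i + 1, n):
            if event_times[i] == event_times[j]:
                continue                      # no ordering information in this pair
            usable += 1
            # the subject with the shorter time should carry the higher score
            first, second = (i, j) if event_times[i] < event_times[j] else (j, i)
            if risk_scores[first] > risk_scores[second]:
                concordant += 1
            elif risk_scores[first] == risk_scores[second]:
                ties += 1                     # tied scores count as half-concordant
    return (concordant + 0.5 * ties) / usable

scores = [0.9, 0.7, 0.4, 0.4, 0.1]
times = [1.0, 2.0, 3.0, 4.0, 5.0]            # shorter time = earlier complication
print(f"C-index: {c_index(scores, times):.2f}")
```

Values of 0.58-0.66, as reported for the GDRS/CVDRS, mean the scores order pairs only slightly better than chance.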
In conclusion, this work illustrates that the risk of complication development among individuals with type 2 diabetes is related to the existing complication load, and attention should be given to regular monitoring for future complications. It underlines the importance of weight management and adherence to healthy lifestyle behaviours, including high intake of whole grains, moderation in red meat and alcohol consumption and avoidance of smoking to prevent major diabetes-associated complications, regardless of complication burden. Risk scores predictive for type 2 diabetes and CVD were related to elevated risks of complications. By optimising several lifestyle and clinical factors, the risk score can be improved and may assist in lowering complication risk.
Among the multitude of geomorphological processes, aeolian shaping processes are of a special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the Earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments and the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the lowermost atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other findings presented in this thesis, it could be shown in particular that, even though the small-scale topography has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. This measure shows significantly increased particle concentrations at points in time defined as gust events.
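One simple way such a gust-versus-calm comparison can be set up is sketched below: flag wind speeds that exceed a rolling mean by a fixed factor and compare mean particle concentrations between the two regimes. This is an illustrative construction on synthetic data, not the thesis's actual statistical measure.

```python
# Illustrative sketch: flag gust events as exceedances of a rolling wind mean
# and compare particle concentrations during gusts vs. calm periods.
# Synthetic series; threshold factor and response are assumptions.
import numpy as np

rng = np.random.default_rng(7)
wind = rng.gamma(4.0, 1.5, 600)                              # m/s, synthetic series
conc = 5.0 + 0.8 * np.maximum(wind - 8.0, 0.0) + rng.normal(0, 0.5, 600)

window = 30
rolling = np.convolve(wind, np.ones(window) / window, mode="same")
gust = wind > 1.3 * rolling                                  # simple gust criterion

ratio = conc[gust].mean() / conc[~gust].mean()
print(f"gust/calm concentration ratio: {ratio:.2f}")
```

Because the synthetic concentration only responds to winds above a threshold, the gust/calm ratio comes out above one, mimicking the reported increase of particle concentrations during gust events.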
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
The increasing demand for energy in the current technological era and recent political decisions to phase out nuclear energy have led humanity to focus on alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily processable materials is still ongoing. It is precisely these two properties that, within a decade of research, brought the single-junction efficiency of halide perovskites to a level comparable with silicon solar cells. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
Chalcogenide perovskites, on the other hand, are among the most promising absorber materials for the photovoltaic market owing to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
This work therefore investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique in this study comprises two stages: in the first stage, the binary compounds are deposited via a solution-processing method; in the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures; consistent with this, some of these materials are reported in the literature to favor the needle-like non-perovskite configuration.
Lastly, the produced materials are evaluated in two categories: the first concerns the physical properties of the deposited layer, e.g., crystal structure, secondary-phase formation and impurities. In the second, optoelectronic properties such as band gap, conductivity and surface photovoltage are measured and compared with those of an ideal absorber layer.
The current COVID-19 pandemic clearly shows how infectious diseases can spread worldwide. Alongside viral diseases, multiresistant bacterial pathogens are also spreading globally. Accordingly, there is a great need to identify infected individuals through early detection and to interrupt routes of infection.
Conventional culture-based methods require minimally invasive or invasive samples and take too long for screening purposes. Fast, non-invasive methods are therefore needed.
In classical Greece, physicians relied, among other things, on their sense of smell to differentiate infections and other diseases. These characteristic odors are volatile organic compounds (VOCs) produced by an organism's metabolism. Animals with a superior sense of smell can be trained to distinguish certain pathogens by their odor. However, the use of animals in everyday clinical practice is not feasible. It therefore makes sense to analyze these VOCs by technical means.
One technical method for differentiating these VOCs is ion mobility spectrometry coupled with a multicapillary gas chromatography column (MCC-IMS). This method proved to be fast, sensitive and reliable.
It is known that different bacteria produce different VOCs, and thus their own specific odors, as a result of their metabolism. In the first step of this work, it was shown that different bacteria can be differentiated in vitro by their VOCs after a short incubation time of 90 minutes. Analogous to diagnosis by biochemical test series, a hierarchical classification of the bacteria was possible.
In contrast to bacteria, viruses have no metabolism of their own. Whether virus-infected cells release different VOCs than non-infected cells was examined in cell cultures. It was shown that the VOC fingerprints of cell cultures infected with respiratory syncytial virus (RSV) differ from those of non-infected cells.
Virus infections in an intact organism differ from cell cultures in that, in addition to changes in cell metabolism, VOCs can also be released by defense mechanisms.
To examine whether infections in an intact organism can likewise be distinguished by VOCs, the exhaled breath of patients with and without confirmed influenza A infection, as well as of patients with suspected SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) infection, was analyzed. Both influenza-infected and SARS-CoV-2-infected patients could be distinguished from each other and from non-infected patients by MCC-IMS analysis of exhaled breath.
In summary, MCC-IMS delivers encouraging results for the rapid, non-invasive detection of infections both in vitro and in vivo.
Abzug unter Beobachtung (Withdrawal under Observation)
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for foreign military intelligence, using intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence, above all intercepting the radio traffic of the "Gruppe der sowjetischen Streitkräfte in Deutschland" (Group of Soviet Forces in Germany, GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and at the same time expanded its analytical capacities in the 1980s. The BND's monopoly on foreign military intelligence was thus increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. Under the Two Plus Four Treaty, the GSSD, renamed the Westgruppe der Truppen (Western Group of Forces, WGT) in 1989, was to withdraw completely by 1994. The treaty also prohibited the three Western powers from engaging in military activity in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their operations. But what became of this "allied legacy"? Who on the German side took over intelligence on the Soviet troops, and who monitored the troop withdrawal?
This study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between armed forces and intelligence services. What military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions had been dissolved? How did the demands on the BND's foreign military intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the troop withdrawal? What role did the former Western powers play? The study is intended as a contribution not only to military history but also to the history of Germany's intelligence services.
Current business organizations want to be more efficient and are constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader affect this relationship. I draw on job demands-resources theory to explain the overarching model used in this study and on broaden-and-build theory to motivate the use of the mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data were collected in three waves, one month apart: employee perceptions of visionary leadership at T1, both mediators at T2, and employee perceptions of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employees' perceived job satisfaction.
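The chain of intervening roles described above corresponds to a serial mediation model (X -> M1 -> M2 -> Y). Its indirect effect is simply the product of the three path coefficients; the sketch below uses illustrative coefficients, not the study's estimates:

```python
# Illustrative sketch: indirect (chain) effect in a serial mediation model
# visionary leadership -> meaningfulness at work -> commitment to the leader
# -> job satisfaction. Path coefficients are assumed for illustration.
a1 = 0.45    # visionary leadership -> meaningfulness at work
d21 = 0.38   # meaningfulness at work -> commitment to the leader
b2 = 0.52    # commitment to the leader -> job satisfaction

serial_indirect = a1 * d21 * b2
print(f"serial indirect effect: {serial_indirect:.3f}")
```

In practice such an indirect effect is tested with bootstrapped confidence intervals rather than the point product alone, since the product of coefficients is not normally distributed.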
This research contributes to literature and theory by, first, broadening existing knowledge of the effects of visionary leadership on employees. Second, it contributes to the literature on the constructs of meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism linking the study variables in line with the proposed model. Fourth, it integrates two theories, job demands-resources theory and broaden-and-build theory, providing further evidence for both. Additionally, the study offers practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and the mediation mechanism involving meaningfulness at work and commitment to the leader.