Refine
Has Fulltext
- yes (159)
Year of publication
- 2022 (159)
Document Type
- Doctoral Thesis (159)
Is part of the Bibliography
- yes (159)
Keywords
- Klimawandel (5)
- climate change (5)
- Digitalisierung (3)
- Modellierung (3)
- Röntgenspektroskopie (3)
- modelling (3)
- Adipositas (2)
- Arbeitszufriedenheit (2)
- Bewegungsökologie (2)
- Bundeswehr (2)
Institute
- Institut für Biochemie und Biologie (27)
- Extern (26)
- Institut für Physik und Astronomie (20)
- Institut für Chemie (19)
- Institut für Geowissenschaften (17)
- Hasso-Plattner-Institut für Digital Engineering GmbH (14)
- Institut für Umweltwissenschaften und Geographie (7)
- Wirtschaftswissenschaften (7)
- Institut für Ernährungswissenschaft (5)
- Institut für Mathematik (5)
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in which listeners rated the recorded answers from the production task (chapter 4), and (iii) a self-paced reading task to gather online processing data (chapter 5).
Based on the results of the conducted experiments, several conclusions are drawn in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and of the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations of focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of inferring causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted.
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
Over the past decades, there has been growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events are defined differently across disciplines, ranging from earth science to neuroscience, they are mainly characterized as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare, various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) show that they may exhibit recurrent behavior, i.e., they recur rather than ending the lifetime of the system. The aim of this thesis is to develop sophisticated methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they contain large temporal gaps owing to the paucity of observations of extreme events. As a result, existing time series analysis tools are usually of little help in decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a distance metric designed mainly to measure the similarity or dissimilarity between point-process-like data. I combine ED with recurrence plot techniques to identify the recurrence properties of flood events in the Mississippi River in the United States. I also use recurrence quantification analysis to show the deterministic properties of, and serial dependency in, flood events.
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, and the clustering coefficient, can be used to quantify the network’s topology. I apply this methodology to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and to the orography of each region.
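The pipeline described above can be sketched with a toy example. The following is an illustrative sketch, not the thesis implementation: a Victor-Purpura-style edit distance between hypothetical event-time sequences is thresholded into network links, and node degree serves as one simple topology measure. The event times, costs, and threshold are all invented for illustration.

```python
import itertools

def edit_distance(a, b, shift_cost=0.5):
    """Victor-Purpura-style edit distance between two event-time sequences:
    deleting or inserting an event costs 1; shifting one by dt costs shift_cost*|dt|."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)
    for j in range(1, m + 1):
        D[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1.0,                        # delete a[i-1]
                          D[i][j - 1] + 1.0,                        # insert b[j-1]
                          D[i - 1][j - 1]
                          + shift_cost * abs(a[i - 1] - b[j - 1]))  # shift event
    return D[n][m]

# Hypothetical extreme-event times (e.g. days with extreme rainfall) at 4 grid points.
events = {
    0: [3, 17, 45, 60],
    1: [4, 18, 44, 61],   # nearly synchronous with node 0
    2: [10, 30, 52],
    3: [11, 29, 53],      # nearly synchronous with node 2
}

# Link two nodes when their event sequences are sufficiently similar.
threshold = 2.0
nodes = sorted(events)
links = {(i, j) for i, j in itertools.combinations(nodes, 2)
         if edit_distance(events[i], events[j]) <= threshold}

# Degree: the simplest of the network measures mentioned above.
degree = {i: sum((min(i, j), max(i, j)) in links for j in nodes if j != i)
          for i in nodes}
print(links, degree)
```

With this (made-up) threshold, only the nearly synchronous node pairs (0, 1) and (2, 3) are linked, mimicking how spatially coherent rainfall extremes form clusters in the climate network.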
Identifying precursors of extreme events in the near future is extremely important for preparing communities for an upcoming disaster and mitigating the potential risks associated with such events. With this motivation, I propose an in-data prediction recipe for predicting the data structures that typically occur prior to extreme events, using the echo state network, a type of recurrent neural network from the reservoir computing framework. However, unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I predict these structures using data from another dynamic variable (the passive variable), which does not show large excursions from the nominal condition but carries imprints of these extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of events: the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the machine's ability to predict the forthcoming extreme event.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken with cameras, e.g., smartphone cameras, or with scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for training machine learning models, because obtaining annotated training data through manual annotation is time-consuming and costly. In this thesis, we address the question of how we can reduce the cost of acquiring ground-truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground-truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks.
Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
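The coupling element that makes this possible is the spatial transformer: a differentiable crop whose affine parameters a localization network can learn purely from recognition gradients. Below is a minimal numpy sketch of that sampling step only; the scene, the bright "word" patch, and the transform parameters are invented for illustration and are not the networks from the thesis.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Sampling grid of an affine transform theta (2x3) for an HxW output,
    in normalized [-1, 1] coordinates as in spatial transformer networks."""
    ys = np.linspace(-1, 1, H)
    xs = np.linspace(-1, 1, W)
    gx, gy = np.meshgrid(xs, ys)
    coords = np.stack([gx.ravel(), gy.ravel(), np.ones(H * W)])  # 3 x HW
    return (theta @ coords).T.reshape(H, W, 2)                   # (x, y) per pixel

def bilinear_sample(img, grid):
    """Bilinear sampling of img at normalized grid positions (differentiable)."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0] + wy * wx * img[y0 + 1, x0 + 1])

# A toy 'scene' with a bright word-like patch; a localization network would
# learn theta so that the crop covers exactly that patch.
scene = np.zeros((32, 64))
scene[12:20, 24:40] = 1.0
theta = np.array([[0.25, 0.0, 0.0],    # zoom into the central quarter
                  [0.0, 0.25, 0.0]])
crop = bilinear_sample(scene, affine_grid(theta, 16, 16))
print(crop.shape, round(float(crop.mean()), 2))
```

Because the crop is a smooth function of theta, the recognition loss can be backpropagated through the sampler into the localization network, which is what allows training without any localization labels.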
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification. Handwriting classification entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best-fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as named entity recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network’s training. Finally, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the well-known StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotations without the need for any annotated data.
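The zero-shot flavor of such metric learning can be sketched with a nearest-class-prototype classifier: in a learned embedding space, a class unseen during training only needs a few support examples to form a prototype. The sketch below uses made-up 2-D embeddings rather than the proposed network, purely to illustrate the mechanism.

```python
import numpy as np

def prototypes(embeddings, labels):
    """Mean embedding per class (the class 'prototype')."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x, protos):
    """Assign x to the class whose prototype is most similar (cosine)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(protos, key=lambda c: cos(x, protos[c]))

rng = np.random.default_rng(1)

# Hypothetical word-image embeddings for the classes 'date', 'word', 'number';
# pretend 'number' was never seen while training the embedding network.
centers = {"date": np.array([3.0, 0.0]), "word": np.array([0.0, 3.0]),
           "number": np.array([-3.0, -3.0])}
support = {c: mu + 0.1 * rng.standard_normal((5, 2)) for c, mu in centers.items()}

emb = np.vstack(list(support.values()))
lab = np.repeat(list(support), 5)
protos = prototypes(emb, lab)

query = centers["number"] + 0.1 * rng.standard_normal(2)
print(classify(query, protos))   # 'number', via its support prototype alone
```

Since classification reduces to a distance comparison, adding a new class at test time requires no retraining, only a handful of support embeddings.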
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification
(2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of converting incident photons from the near-infrared (NIR) region of the electromagnetic spectrum into higher-energy photons, which are re-emitted in the visible (Vis) and even ultraviolet (UV) range. The frequency upconversion (UC) process is realized with nanocrystals doped with trivalent lanthanoid ions (Ln(III)), which provide the excited electronic states forming a ladder-like electronic structure in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote one Ln(III) electron into higher excited electronic states. One high-energy photon is then emitted when this electron relaxes radiatively back into the electronic ground state of the Ln(III) ion.
The UC process is very interesting in the biological/medical context. Biological samples (like organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements being labeled to the UCNPs. Additionally, "theranostic" becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain parameters of the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, as well as the availability of functional groups for the fast and easy attachment of bio-recognition elements. The UCNPs used in this work were prepared by a solvothermal decomposition synthesis yielding particles with NaYF4 or NaGdF4 as the host lattice. They were doped with the Ln(III) ions Yb3+ and Er3+, which is only one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of expectations compared to the bulk material (particles at least µm-sized). The core and shell structures are not clearly separated from each other, an issue discussed in the literature. Instead, there is a transition layer between the core and the shell, which is related to the migration of the dopants within the host lattice during synthesis. This ion migration was examined by time-resolved laser spectroscopy and the interlanthanoid resonance energy transfer (LRET) in the two host lattices mentioned above. The results are presented in two publications dealing with core-shell-shell structured nanoparticles. The core is doped with the LRET acceptor (either Nd3+ or Pr3+). The intermediate shell of pure host lattice material serves as an insulation shell; its thickness was varied within one set of samples of otherwise identical composition, so that the spatial separation between LRET acceptor and donor changes. The outer shell, of the same host lattice, is doped with the LRET donor (Eu3+). The effect of increasing insulation shell thickness is significant, although the LRET cannot be suppressed completely.
In addition to the Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bioapplications. One result of this research has been published, using a promising ligand that equips the UCNP with bio-modifiable groups and has good potential for biomedical applications. This ligand mimics naturally occurring mechanisms of mussel-protein adhesion and of blood coagulation, which is why it encapsulates the UCNPs very effectively. At the same time, bio-functional groups are introduced. In a proof of concept, the encapsulated UCNP was successfully coupled with a dye (representative of a biomarker) and the system’s photoluminescence properties were investigated.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving behind a considerable amount of data to be analyzed. Researchers have recently started exploring this shared social media data to better understand online users and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users’ Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and records of emotions or reactions were used to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilize the engineered features correlated with the Big Five personality traits. The final models improved prediction accuracy compared to state-of-the-art approaches and were evaluated against established benchmarks in the domain. The experiments were carried out with ethical and privacy considerations in mind. Furthermore, the research aims to raise awareness of privacy among social media users and to show what third parties can reveal about users’ private traits from what they share and how they act on different social networking platforms.
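The basic shape of such trait prediction can be sketched as regression from engineered profile features to questionnaire trait scores. The sketch below is schematic, not the models built in the thesis: the features and scores are synthetic stand-ins, and a simple closed-form ridge readout is fitted per trait.

```python
import numpy as np

rng = np.random.default_rng(2)

TRAITS = ["agreeableness", "conscientiousness", "extraversion",
          "neuroticism", "openness"]

# Synthetic stand-ins for engineered profile features (e.g. counts of likes,
# linguistic statistics of status updates) and for measured trait scores.
n_users, n_features = 500, 20
X = rng.standard_normal((n_users, n_features))
true_W = rng.standard_normal((n_features, len(TRAITS)))
Y = X @ true_W + 0.1 * rng.standard_normal((n_users, len(TRAITS)))

# Ridge regression, one linear readout per trait, in closed form.
ridge = 1.0
W = np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ Y)

pred = X @ W
r = [np.corrcoef(pred[:, k], Y[:, k])[0, 1] for k in range(len(TRAITS))]
for trait, rk in zip(TRAITS, r):
    print(f"{trait:18s} r = {rk:.2f}")
```

In practice, reported performance in this field is usually a correlation between predicted and questionnaire-based trait scores, which is what the per-trait `r` above mimics on toy data.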
In the second part of the thesis, variation in personality development is studied in a cross-platform environment comprising Facebook and Twitter. The personality profiles constructed on these platforms are compared to evaluate the effect of the platform used on a user’s personality development. Likewise, personality continuity and stability analyses are performed using samples from the two platforms. The experiments are based on ten-year longitudinal samples, aiming to understand users’ long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk and, similarly, with an increased level of odd-chain fatty acids (OCFA). While the OCFA also show inverse associations with disease incidence, their direct dietary sources and mode of action remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
Giros Topográficos
(2022)
Giros topográficos explores the symbolic production of space in a series of narrative texts published in Latin America since the turn of the millennium. Drawing on the theoretical frameworks of the spatial turn and of geocriticism, the study approaches literary topographies from four angles that exceed and transform territorial and national boundaries: dynamics of media hyperconnectivity and accelerated mobility; affective genealogies; urban ecologies; and representations of alterity.
Based on the analysis of works by Lina Meruane, Guillermo Fadanelli, Andrés Neuman, Andrea Jeftanovic, Sergio Chejfec, and Bernardo Carvalho, among others, the book traces the flows, ambiguities, and tensions projected by the new imagined communities of the twenty-first century. In doing so, the essay seeks to contribute to rethinking the status of Latin American literature in the context of its advanced globalization and the consequent consolidation of translocalized spaces of enunciation.
The negative impact of crude oil on the environment has led to a necessary transition toward alternative, renewable, and sustainable resources. In this regard, lignocellulosic biomass (LCB) is a promising renewable and sustainable alternative to crude oil for the production of fine chemicals and fuels in a so-called biorefinery process. LCB is composed of polysaccharides (cellulose and hemicellulose) as well as aromatics (lignin). The development of a sustainable and economically advantageous biorefinery depends on the complete and efficient valorization of all components. Therefore, in the new generation of biorefinery, the so-called biorefinery of type III, the LCB feedstocks are selectively deconstructed and catalytically transformed into platform chemicals. For this purpose, the development of highly stable and efficient catalysts is crucial for progress toward a viable biorefinery. Furthermore, a modern and integrated biorefinery relies on process and reactor design to deliver more efficient and cost-effective methodologies that minimize waste. In this context, the use of continuous flow systems has the potential to provide safe, sustainable, and innovative transformations with simple process integration and scalability for biorefinery schemes.
This thesis addresses three main challenges for the future biorefinery: catalyst synthesis, waste feedstock valorization, and the use of continuous flow technology. Firstly, a cheap, scalable, and sustainable approach is presented for the synthesis of an efficient and stable 35 wt.-% Ni catalyst on a highly porous nitrogen-doped carbon support (35Ni/NDC) in pellet form. The performance of this catalyst was first evaluated for the aqueous-phase hydrogenation of LCB-derived compounds such as glucose, xylose, and vanillin in continuous flow systems. The 35Ni/NDC catalyst exhibited high performance in all three hydrogenation reactions, yielding sorbitol, xylitol, and 2-methoxy-4-methylphenol at 82 mol%, 62 mol%, and 100 mol%, respectively. In addition, the 35Ni/NDC catalyst exhibited remarkable stability over a long time on stream in continuous flow (40 h). Furthermore, the 35Ni/NDC catalyst was combined with commercially available Beta zeolite in a dual-column integrated process for isosorbide production from glucose (83 mol% yield).
Finally, 35Ni/NDC was applied to the valorization of industrial waste products, namely sodium lignosulfonate (LS) and beech wood sawdust (BWS), in continuous flow systems. The LS depolymerization combined solvothermal fragmentation in water/alcohol mixtures (i.e., methanol/water and ethanol/water) with catalytic hydrogenolysis/hydrogenation (SHF). The depolymerization was found to occur thermally in the absence of a catalyst, with a molecular weight tunable by temperature. Furthermore, the SHF gave an optimized cumulative yield of lignin-derived phenolic monomers of 42 mg gLS-1. Similarly, a solvothermal and reductive catalytic fragmentation (SF-RCF) of BWS was conducted using MeOH and MeTHF as solvents. In this case, the optimized total yield of lignin-derived phenolic monomers was 247 mg gKL-1.
Sustainable urban growth
(2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models establish the basis for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theoretical settings. Sustainable urban growth calls for either policy action or a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first-best.
Technological progress allows ever more complex predictive models to be produced on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g., in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches such as machine learning or data-mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are directed at these products in the scientific literature, both for the acute support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. A focus of this work was therefore set on evaluating these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the particular class to be predicted, in this case flooded areas, but not from the negative class (dry areas). The application to Hurricane Harvey in Houston shows the high potential of this method, which, however, depends on the quality of the initial flood mask.
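One simple way to realize a positive-class-only classifier of this kind is to fit a distribution to features of known flooded pixels and accept new pixels that fall within a threshold distance of it; no dry-class samples are needed. The sketch below (synthetic features, a plain Gaussian model, and an invented threshold, not the procedure developed in the thesis) illustrates the principle:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic per-pixel features of flooded areas (e.g. backscatter, elevation,
# a vegetation indicator), drawn around a made-up class center.
flooded = rng.standard_normal((400, 3)) @ np.diag([0.5, 0.3, 0.4]) + [1.0, -2.0, 0.5]

# One-class model: a Gaussian fitted to the positive (flooded) class only.
mu = flooded.mean(axis=0)
cov = np.cov(flooded, rowvar=False)
cov_inv = np.linalg.inv(cov)

def score(x):
    """Squared Mahalanobis distance to the flooded-class model (lower = more 'flooded')."""
    d = x - mu
    return d @ cov_inv @ d

# Accept pixels below a chi-square-like threshold; no negative class required.
threshold = 11.34   # roughly the 99% quantile of chi-squared with 3 dof
accepted = np.array([score(x) <= threshold for x in flooded]).mean()
dry_like = np.array([10.0, 10.0, 10.0])   # a pixel far from the flooded model
print(f"flooded pixels accepted: {accepted:.2f}; outlier score: {score(dry_like):.0f}")
```

Real one-class methods (e.g. one-class SVMs or density-based detectors) replace the Gaussian with more flexible models, but the asymmetry is the same: only the class of interest is ever labeled.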
Next, it is investigated how much the statistical risk predicted by a process-based model chain depends on the implemented physical process details. This demonstrates what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit construction of such a model chain. For this purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data-mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g., satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, and indicators derived from spectral data. Furthermore, insights into the damaging processes are discovered, which are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while the rainfall sum was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability than urban areas in the presented study. However, the general limitations of the methods and their dependence on specific datasets and algorithms also become obvious.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and only marginally affect the interactions of public and non-public stakeholders. A fundamental shift toward joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways while having a minor impact on certain aspects (e.g., decentralized control), which account for this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including the decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
The careful use of resources and the environment is an essential part of modern mining and of the future supply of our society with essential raw materials. This thesis is concerned with the development of analytical strategies that meet the technical and practical requirements of the mining process through exact and fast on-site analysis and thus contribute to a targeted and sustainable use of raw material deposits. The analyses are based on spectroscopic data obtained by laser-induced breakdown spectroscopy (LIBS) and evaluated by multivariate data analysis. LIB spectroscopy is a promising technique for this task. Its appeal lies in particular in the possibility of measuring field samples on site without sampling or sample preparation, but also in the detectability of all elements of the periodic table and its independence from the state of aggregation. In combination with multivariate data analysis, fast data processing is possible that allows statements about the qualitative elemental composition of the examined samples. With the aim of determining the distribution of element contents in a deposit, calibration and quantification strategies are evaluated in this work. Exploratory data analysis methods are applied to characterize matrix effects and to classify minerals. The spectroscopic investigations are carried out on soils and rocks as well as on minerals containing copper or rare earth elements, originating from different deposits and different agricultural areas.
For the development of a calibration strategy, both synthetic samples and field samples from two different agricultural areas were analyzed by LIBS. Using calcium, iron, and magnesium as example analytes, different calibration methods based on univariate and multivariate approaches were evaluated. The quantification strategies are based on the multivariate methods of partial least squares regression (PLSR) and interval PLSR (iPLSR), which take the entire detected spectrum or sub-spectra into account in the analysis. The investigation is based on synthetic and field samples of copper minerals as well as samples containing rare earth elements. The samples originate from different deposits and exhibit different accompanying matrices. These accompanying matrices were characterized by exploratory data analysis. The principal component analysis applied for this purpose groups data on the basis of differences and regularities. This allows statements about similarities and differences between the examined samples with respect to their origin, chemical composition, or locally determined characteristics. Finally, copper-bearing minerals were classified on the basis of non-negative tensor factorization. This method was used with the aim of assigning unknown samples to classes on the basis of their properties.
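The exploratory grouping step described above rests on principal component analysis of the spectral matrix. As an illustration only (not code from the dissertation, and with hypothetical variable names), PCA scores can be computed from a mean-centred matrix of LIBS spectra via the singular value decomposition:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project spectra onto their first principal components via SVD.

    spectra: (n_samples, n_channels) matrix of LIBS intensities (hypothetical input).
    """
    X = spectra - spectra.mean(axis=0)               # mean-center each spectral channel
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # X = U @ diag(S) @ Vt
    return U[:, :n_components] * S[:n_components]     # scores, ordered by explained variance
```

Samples measured on different accompanying matrices then typically separate into clusters in the space of the first few score components.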
Combining LIBS with multivariate data analysis offers the possibility of largely dispensing with sampling and the corresponding laboratory analysis through on-site analysis, and can thus contribute to environmental protection and the conservation of natural resources during the prospection and exploration of new ore veins and deposits. Knowing the distribution of element contents in the investigated areas also enables targeted extraction and thus an efficient use of mineral resources.
Respiratory diseases increasingly represent a relevant global problem. Expanding or modifying the routes of administration of potential drugs for targeted topical treatment is of utmost importance. Varying a known route of administration through different technological implementations can increase the range of possible applications as well as patient compliance. A simple and flexible procedure, rapid availability, and a handy technology are important properties in the development process of a product today. Direct topical treatment of respiratory diseases at the site of action by inhalation offers many advantages over systemic therapy. However, the medical inhalation of active substances via the lungs is a complex challenge. Inhalers belong to the dosage forms that require explanation and must be designed as simply as possible to increase consistent adherence to the prescription. At the same time, approximately 68 million people worldwide own and use the technology of an inhalative applicator to deliberately damage their health, in the form of an electronic cigarette. This well-known application offers the potential of an available, inexpensive, and quality-controlled health measure for the control, prevention, and cure of respiratory diseases. It generates an aerosol by electrothermally heating a so-called liquid, which reaches a heating element through the capillary forces of a carrier material and evaporates. Its popularity shows that an intended effect occurs in the respiratory tract. This effect could, however, also be transferable to potential pharmaceutical fields of application. The advantages of pulmonary administration are manifold. Compared with peroral administration, the drug reaches the site of action directly.
If systemic administration leads to drug concentrations in the lung below therapeutic efficacy, inhalative administration could produce the desired higher concentrations at the site of action even at a low dose. Owing to the large absorption surface of the lung and the absence of the first-pass effect, higher bioavailability and a faster onset of action are possible. Systemic side effects are also minimal. Like medical inhalers, the electronic cigarette generates respirable particles. The breath-actuated technique enables uncomplicated and intuitive use. The basic design consists of an electrically heated coil and a rechargeable battery. The heating coil is surrounded by a so-called liquid in a tank and generates the aerosol. The liquid contains a base mixture of propylene glycol, glycerol, and pure water in varying percentages. It is assumed that the base liquid can also be loaded with active pharmaceutical ingredients for pulmonary administration. Because of the thermal stress exerted by the e-cigarette, potential active substances as well as the vehicle must be thermally stable.
The potential medical application of the technology of a commercially available e-cigarette was investigated with respect to three focal points and four active substances. The three essential oils eucalyptus oil, mint oil, and clove oil were chosen because of their high volatility and their historical pharmaceutical use, in inhalations for cold symptoms and in dentistry. The cannabinoid cannabidiol (CBD) is currently relevant to the German pharmaceutical market with regard to the legalization of cannabis-containing products and to medical research on inhalative consumption. Relevant drug-containing liquid formulations were developed and evaluated with respect to their vaporizability into aerosols. In quantitative and qualitative chromatographic investigations, specific vaporization profiles of the active substances were recorded and evaluated. The vaporized mass of the lead substances 1,8-cineole (eucalyptus oil), menthol (mint oil), and eugenol (clove oil) increased between 33.6 µg and 156.2 µg per puff, in proportion to their concentration in the liquid in the range between 0.5% and 1.5%, at a power of 20 watts. The release rate of cannabidiol, in contrast, appeared to be independent of its concentration in the liquid, averaging 13.3 µg per puff. This was shown for five CBD-containing liquids in the concentration range between 31 µg/g and 5120 µg/g liquid. In addition, an increase in the vaporized masses with increasing power of the e-cigarette was observed. The interaction of the liquids and aerosols with the components of saliva and other gastrointestinal fluids was examined using corresponding in vitro models and enzyme activity assays. In these investigations, changes in enzyme activities were determined for the oral key enzyme α-amylase and for proteases. This was intended to exemplarily test a possible influence on physiological and metabolic processes in the human organism. Vaporizing onto biological suspensions at low e-cigarette power (20 watts) led to no or only a slight change in enzyme activity. Applying high power (80 watts) tended to reduce the enzyme activities. An increase in enzyme activities could lead to enzymatic degradation of mucous substances such as mucins, which in turn would compromise the effective mechanical defense against bacterial infections. Since an application would be conceivable in particular for bacterial respiratory diseases, the antibacterial properties of the liquids and aerosols were finally investigated in vitro. Six clinically relevant bacterial pathogens were selected, which can be grouped by two characteristics. The three multi-resistant bacteria Pseudomonas aeruginosa, Klebsiella pneumoniae, and methicillin-resistant Staphylococcus aureus cannot be killed by common antibiotic therapies and are primarily of nosocomial relevance. The second group exhibits properties primarily associated with respiratory diseases: the bacteria Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae are representatively involved in respiratory diseases with diverse symptoms. The bacterial species were treated or vapor-exposed with the respective liquids, and their basic dose-response relationships were characterized. An antibacterial activity of the formulations was determined; adding an active substance enhanced the already antibacterial effect of the components glycerol and propylene glycol. The hygroscopic properties of these substances are presumably responsible for an effect in aerosolized form: they extract moisture from the air and have a desiccating effect on the bacteria.
Vapor exposure of the bacterial species Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae had an antibacterial effect whose time course depended on the power of the e-cigarette.
The results of the investigations lead to the conclusion that every active substance or substance class must be evaluated individually and that inhaler and formulation must therefore be matched to each other. The use of the e-cigarette as a medical device for drug administration always requires testing according to the European Pharmacopoeia. Through modifications, dosing could be made well controllable, and the particle size distribution could be regulated such that the active substances are transported, depending on particle size, to a suitable application site such as the mouth, throat, or bronchi. A comparison with the properties of other medical inhalers leads to the conclusion that e-cigarette technology could well offer equal or better performance for thermally stable active substances. This notional medical device could consist of a manufacturer-unspecific, rechargeable power source with a universal thread for repeated use and a manufacturer- and drug-specific unit comprising vaporizer and drug. The drug, a medical liquid (vehicle and active substance), can be produced for the individual patient in the tank of the vaporizer with constant, non-variable parameters. Inhalative applications will likely play an increasing role in the future, not least because of the current COVID-19 pandemic, and the demand for alternative therapeutic options will continue to rise. This work contributes to the use of electronic cigarette technology, known as an electronic nicotine delivery system (ENDS), after modification into a potential pulmonary application system, an electronic drug delivery system (EDDS), for inhaled, thermally stable drugs in the form of a medical device.
This work provides insight into the practices of communication on city tours with (formerly) homeless guides, tours that in their self-understanding aim to create understanding, tolerance, and recognition for people affected by homelessness. First, the discourse on slum tourism is introduced and, given the diversity of its manifestations, slumming is defined as an organized encounter with social inequality. The central lines of discourse and the moral positions woven into them are traced and, within the adopted perspective of the sociology of knowledge, reinterpreted as the expression of an inherently polycontextural practice. Slumming then appears as an organized encounter between forms of life that are foreign to one another to such a degree that immediate understanding seems unlikely and that, precisely for this reason, must be negotiated on the basis of common-sense interpretations. Against this background, the present work examines how participants and guides reach a practical understanding of the experience of homelessness and what kind of understanding is thereby produced for homeless people, who are subject to manifold stigmatizing attributions in public discourse. Of particular interest is for which aspects of the experience of homelessness a shared understanding becomes possible and where it reaches its limits. To this end, the conversations on nine city tours with (formerly) homeless guides from different providers in the German-speaking world were transcribed and analyzed using the documentary method. Not least, the comparative examination of these communicative practices opens up a differentiated perspective on the practices of recognition always already woven into the processes of communication.
With regard to the moral debate about organized encounters with social inequality, this suggests an ethical perspective centered on questions of mediation work.
Struggle for existence
(2022)
In this project, I sought to understand how Palestinian claim-making in the West Bank is possible within the context of continuing Israeli occupation and repression by the Palestinian political leadership. I explored the questions of what channels non-state actors use to advance their claims, what opportunities they have for making these claims, and what challenges they face. This exploration covers the time period from the Oslo Accords in the mid-1990s to the so-called Great March of Return in 2018.
I demonstrated that Palestinians used different modes and strategies of resistance over the past century, as the area of what is today Israel/Palestine has historically been a target of foreign penetration. Yet the Oslo agreements between the Israeli government and the Palestinian leadership ended Palestinians' decentralized and pluralist social governance, reinforced Israeli rule in the Palestinian territories, promoted continuing dispossession and segregation of Palestinians, and have further restricted their rights and claim-making opportunities to this day. As a result, Palestinian society in the West Bank today is characterized by fragmentation, geographical and societal segregation, and double repression through the Israeli occupation and the policies of the Palestinian Authority (PA). What is more, Palestinian claim-making is legally curtailed by the establishment of different geographical entities in which Palestinians are subjected to different forms of Israeli rule and regulation.
I argue that the concepts of civil society and acts of citizenship, which are often used to describe non-state actors' rights-seeking activities, fall short of comprehensively understanding and describing Palestinian claim-making in the West Bank. By determining the boundaries of these concepts, the concept of acts of subjecthood evolved within the research process as a novel theoretical approach, describing claim-making within repressive contexts where claim-makers' rights are curtailed and opportunities for rights-seeking activities are few. This study thereby applies a new theoretical framework to the conflict in Israel/Palestine and contributes to a better understanding of rights-seeking activities in the West Bank. Further, I argue that Palestinian acts of subjecthood against hostile Israeli rule in the West Bank are embedded within the comprehensive structure of settler colonialism. As a form of colonialism that aims at replacing an indigenous population, Israeli settler colonialism in the West Bank manifests itself in restrictions of Palestinian movement, settlement construction, home demolitions, violence, and detentions.
By using grounded theory and inductive reasoning as methodological approaches, I was able to make generalizations about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary materials and data collected via face-to-face and video interviews with non-state actors in Israel/Palestine. The conducted research shows that there is not a single measure or a standalone condition that hinders Palestinian claim-making, but a complex and comprehensive structure that, on the one hand, shrinks Palestinian living space by occupation and destruction and, on the other hand, diminishes Palestinian civic space by limiting the fundamental rights to organize and build social movements to change the status Palestinians live in.
Although the concrete, tangible outcomes of Palestinian acts of subjecthood are marginal, they contribute to strengthening and perpetuating Palestinians' long history of resistance against Israeli oppression. Given the lack of adherence to international law, the neglect of UN resolutions by the Israeli government, the continuous defeats of rights organizations in Israeli courts, and the repression of institutions based in the West Bank by PA and occupation policies, Palestinian acts of subjecthood cannot overturn current power structures. Nevertheless, the persistence of non-state actors in claiming rights, as well as the emergence of new initiatives and youth movements, is essential for strengthening Palestinians' resilience and documenting current injustices. They can thereby build the pillars of social change in the future.
The aim of this dissertation was to investigate how Palestinian claim-making, i.e. the articulation of demands and the assertion of particular rights, can be carried out in the West Bank against the background of the ongoing Israeli occupation and repression by the Palestinian political leadership. It addresses the questions of which channels non-state actors use to assert their claims, which opportunities they have for doing so, and which challenges they face. The period under investigation extends from the Oslo peace process in the mid-1990s to the so-called Great March of Return in 2018.
The Palestinians living in the area of today's Israel/Palestine employed a wide variety of forms and strategies of resistance in times of foreign influence, e.g. during the British occupation in the past century. However, the Oslo agreements between the Israeli government and the Palestinian leadership impeded the decentralized and participatory mobilization of Palestinian society, favored the ongoing dispossession of Palestinians, and have further restricted their rights to this day. Palestinian society in the West Bank today is therefore characterized by fragmentation, geographical and societal segregation, and double repression by the Israeli occupation and the Palestinian Authority. Moreover, the establishment of different geographical entities in which Palestinians are subject to different forms of Israeli rule, regulations, and rights of intervention means that Palestinian claim-making is also restricted in formal legal terms.
The concept of civil society or that of acts of citizenship is often used to describe the activities of non-state actors in this context. This dissertation argues, however, that these concepts are only partially applicable to the status quo in the West Bank and cannot adequately capture and describe Palestinian claim-making. In the course of the research process, the concept of acts of subjecthood therefore emerged as a new theoretical approach describing claim-making in repressive contexts in which non-state actors have little room for maneuver to push through their demands. Through this theoretical lens, my research offers a novel view of the Israeli-Palestinian conflict and thus contributes to a better understanding of claim-making activities in the West Bank. In addition, this dissertation embeds acts of subjecthood in the larger context of settler colonialism, a form of colonialism that aims to replace an indigenous population with that of the colonial power. In the West Bank, Israeli settler colonialism manifests itself in the restriction of Palestinians' freedom of movement, the construction of settlements, house demolitions, violence, and detentions.
Using grounded theory and inductive reasoning as methodological approaches made it possible to draw generalizable conclusions about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary sources and of data collected in interviews with representatives of non-state organizations in Israel/Palestine. The analysis shows that Palestinian claim-making is hindered not by a single measure or condition but by a complex, multilayered, and deliberately implemented structure. This structure shrinks the living space of Palestinians through occupation and destruction on the one hand and restricts their civic space by denying them basic rights and fundamental freedoms on the other.
Although the concrete effects of Palestinian acts of subjecthood are marginal, they contribute to strengthening and continuing the resistance against political oppression. In view of the violation of international law and the disregard of numerous UN resolutions by the Israeli government, the defeats of human rights organizations before Israeli courts, and the repression of institutions in the West Bank by the Palestinian Authority and the occupation policy, acts of subjecthood cannot break up the current power structures. Nevertheless, the persistent efforts of non-state actors to articulate demands and claim rights, as well as the founding of new initiatives and organizations, are essential for strengthening societal resilience and for documenting injustices and violations of rights. These actors thus lay the foundation for possible socio-political change in the future.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. Combined with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created with a simple sketch and the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design tools, and 3D printing, could, with the help of biomimicry, lead to new analysis methods in science and new medical devices in medicine.
The electrical discharge machining (EDM) process is commonly applied to shape or mold hard metals that are difficult to work with conventional machinery. A workpiece submerged in a dielectric fluid is machined while in close vicinity to an electrode. When a high voltage is applied between the workpiece and the electrode, sparks are generated that create craters on the substrate, removing material which is then flushed away by the fluid. Usually, such surfaces are analysed in terms of roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that places craters and ridges on an originally flat substrate. These substrates were then analysed with the curvature analysis method at different processing times of the model. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for use in the design of new cell culture substrates for stem cells.
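The digital impact model and the curvature analysis can be illustrated with a minimal sketch (a hypothetical toy implementation, not the thesis's model): random Gaussian craters are subtracted from a flat height map, and a discrete Laplacian serves as a simple curvature proxy.

```python
import numpy as np

def simulate_impacts(n_impacts=1000, size=64, radius=3.0, depth=1.0, seed=0):
    """Subtract Gaussian-shaped craters at random positions from a flat substrate."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    h = np.zeros((size, size))
    for _ in range(n_impacts):
        cx, cy = rng.uniform(0, size, 2)  # random impact center
        h -= depth * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * radius ** 2))
    return h

def curvature(h):
    """Discrete Laplacian of the height map (periodic boundaries) as a curvature proxy."""
    return (np.roll(h, 1, 0) + np.roll(h, -1, 0)
            + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4 * h)
```

Tracking the distribution of `curvature(h)` as `n_impacts` grows would, in this toy setting, show the surface statistics converging toward an equilibrium, analogous to the saturation observed in the thesis at around 10,000 impacts.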
The Venus flytrap can shut its trap at an amazing speed. Its shutting mechanism may be of interest to science and is an example of a so-called mechanically bi-stable system, one with two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model called the chained beam constraint model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, while the structure with a slope angle of 45 degrees is bi-stable. Developing this idea further using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again. This could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers on the scolex (head). The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal wall using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how a device mimicking the tapeworm could be created. In the end, a prototype was built that was able to attach to a pork loin at a negative pressure of 20 kPa and to eject its hooks at a negative pressure of 50 kPa or above.
These three projects demonstrate how digital tools and biomimicry can be used together to develop applicable solutions in science and medicine.
Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, which separates two classes only, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target class set Y is fixed and cannot be restricted once training is finished. At the same time, state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, so that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
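The restriction step at the heart of dynamic classification can be sketched as follows (an illustrative toy implementation, not the thesis's model): posterior estimates over the full class set Y are masked to the currently allowed subset M and renormalized before predicting.

```python
import numpy as np

def dynamic_predict(probs, classes, allowed):
    """Predict the most probable class within a restricted class subset.

    probs:   posterior probability estimates over the full class set Y
    classes: array of class labels corresponding to probs
    allowed: non-empty subset M of Y valid for this particular prediction
    """
    mask = np.isin(classes, list(allowed))
    restricted = np.where(mask, probs, 0.0)   # discard excluded classes
    restricted = restricted / restricted.sum()  # renormalize over M (assumes mass on M > 0)
    return classes[np.argmax(restricted)]
```

Because `allowed` may change between any two consecutive predictions, the same trained model serves arbitrary restrictions without retraining, which is exactly the flexibility dynamic classification asks for.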
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates intended to be well calibrated. The analysis provided focuses on monotonic calibration and, in particular, corrects incorrect statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions is proven, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated in a simulation study with complete information as well as on a selection of 46 real-world data sets.
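Platt scaling fits the sigmoid p = 1 / (1 + exp(A·s + B)) to classifier scores by minimizing the log loss. A minimal gradient-descent sketch (illustrative only; production code would use a robust optimizer and the regularized targets of Platt's original procedure):

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit Platt's sigmoid p = 1 / (1 + exp(A*s + B)) via gradient descent on the log loss."""
    A, B = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        # gradient of the mean log loss w.r.t. A and B
        gA = np.mean((labels - p) * scores)
        gB = np.mean(labels - p)
        A -= lr * gA
        B -= lr * gB
    return A, B

def calibrated(s, A, B):
    """Map a raw score to a calibrated posterior estimate."""
    return 1.0 / (1.0 + np.exp(A * s + B))
```

For scores positively correlated with the positive class, the fit yields A < 0, so that larger scores map to larger calibrated probabilities, as expected of a monotonic calibration map.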
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the involved fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
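The evidence-theoretic fusion itself is beyond a short sketch, but the flavor of combining calibrated pairwise predictions, including a restriction to a dynamic class subset, can be illustrated with simple weighted voting. This is a stand-in for illustration only, not the thesis's mass-function model:

```python
import numpy as np

def pairwise_vote(r, allowed=None):
    """Combine calibrated pairwise probabilities r[i][j] = P(class i | {i, j}).

    Simple weighted voting: each class accumulates its pairwise probabilities.
    `allowed` optionally restricts the result to a dynamic class subset
    (given as indices into the class set).
    """
    r = np.asarray(r, dtype=float)
    k = r.shape[0]
    scores = np.array([sum(r[i, j] for j in range(k) if j != i)
                       for i in range(k)])
    if allowed is not None:
        mask = np.zeros(k, dtype=bool)
        mask[list(allowed)] = True
        scores = np.where(mask, scores, 0.0)  # drop classes outside M
    return scores / scores.sum()

# Three classes; class 0 wins most pairwise duels, but restricting to {1, 2}
# shifts the decision to class 1.
r = [[0.0, 0.8, 0.7],
     [0.2, 0.0, 0.6],
     [0.3, 0.4, 0.0]]
```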
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied in a three-part evaluation. First, support vector machines and random forests are applied to 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
In this thesis, the dependence of charge localization and itinerancy on structural and environmental factors is investigated in two classes of aromatic molecules: pyridones and porphyrins. The focus lies on the effects of isomerism, complexation, solvation, and optical excitation, which are concomitant with crucial biological functions of specific members of these groups of compounds. Several porphyrins play key roles in the metabolism of plants and animals. The nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives. Additionally, a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of this scientific field in the 20th century. The spectroscopic investigation of these compounds has long been centered on their global optical transitions between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels, including symmetry information. It is shown that RIXS is an effective experimental tool to gain detailed information on the charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which were in part commissioned in the course of this thesis, will give access to the ultra-fast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that even though pyridones and porphyrins differ markedly in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization in response to local constitutional changes and relevant external perturbations. It is because of this wide-ranging response that pyridones and porphyrins can be applied in a manifold of biological and technical processes.
The importance of carbohydrate structures is enormous due to their ubiquity in our lives. The development of so-called glycomaterials is the result of this tremendous significance. These are used not only for research into fundamental biological processes but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials involving the synthesis of glycoderivatives, -monomers and -polymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation to significantly shorten the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhorshetov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a monomer synthesis by Staudinger ligation, which likewise requires no protecting groups, was developed in this work. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis. Ribose-containing cytidine was transesterified using the lipase Novozym 435 and microwave irradiation. The resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to isolate the monomer from its counterpart guanosine. After obtaining the nucleoside-based monomers, they were block copolymerized using the RAFT method. Pre-synthesized pHPMA served as macroCTA to yield cytidine- or guanosine-containing block copolymers.
These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Nation, migration, narration
(2022)
In France and Germany, immigration has become a central issue in recent decades. It is in this context that rap emerged. The genre enjoys enormous popularity among populations with an immigrant background. Yet rappers nonetheless confront their French or German identity.
The aim of this work is to explain this apparent contradiction: how can people with an immigrant background, who express unease in the face of a racism they consider omnipresent, feel fully French or German?
The work is divided into the following chapters: context of the study, methodology and theories (I); analysis of the different forms of national identity through the lens of the corpus (II); a three-stage chronological analysis of the relationship to society in the rappers' lyrics (III-V); case studies of Kery James in France and Samy Deluxe in Germany (VI).
Successful communication is something people pursue throughout their lives. To effectively transfer their own information to others, people employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS). An important issue in children's language acquisition is how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (i.e., prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, the sentence-picture verification task of Szendrői and colleagues (2017) was used to investigate whether three- to five-year-old Mandarin-speaking children, as well as Mandarin-speaking adults, could apply prosodic information to recognize focus in sentences. In the second study, German-speaking adults and children were included alongside their Mandarin-speaking counterparts to test the assumption that children show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. This study employed the same sentence-picture verification paradigm as the first, combined with the eye-tracking method. Finally, the last study investigated whether five-year-old Mandarin-speaking children could understand pre-subject only sentences and, again, whether prosodic information would help them to better understand this kind of sentence.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues of their ambient language from early on. In Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow the word order cues. Although German-speaking children could follow the prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason for this result is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so it takes German-speaking children longer to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how the parser directs its attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand pre-subject only sentences, yielding a better understanding of how Mandarin-speaking children acquire the focus particle only.
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. These considered isometric muscle actions in healthy humans from a basic physiological view (oxygen and blood supply) as well as possibilities of distinguishing them. It includes a novel approach to measure a specific form of isometric holding function which has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood by adapting to a moderate constant load until task failure (publication 1). In this regard, regulative mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). This function is mainly of diagnostic interest, measured as the maximal voluntary isometric contraction (MVIC), the gold standard. For that, a person pulls on or pushes against an insurmountable resistance. However, the underlying pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. In order to quantify the adaptability to varying external forces, a new approach was necessary and was developed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system, designed to measure the Adaptive Force (AF) of the elbow extensor muscles. The AF quantifies the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. First, it was examined whether these parameters can be reliably assessed with the new device (publication 4). Subsequently, the main research question was investigated: is the maximal AFiso a specific and independent variable of muscle function in comparison to the MVIC? Furthermore, both research parts contained a sub-question of how the results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always adjusted into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other. In contrast, their progression over time was partly inverse in type II. The inverse behavior probably depends on the level of deoxygenation since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the found homeostatic steady states seem to be in conflict with the concept of mechanically compressed capillaries and consequently with a restricted blood flow. Anatomical configuration of blood vessels might provide one hypothetical explanation of how blood flow might be maintained. HIMA and PIMA did not differ regarding oxygenation and allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight-holding task. TTF, but not oxygenation, differed significantly (Twitch > HIMA). Changed neuromuscular control might serve as a speculative explanation for these results. This is supported by the finding that the TTF did not correlate significantly with the extent of deoxygenation irrespective of the performed task (HIMA, PIMA, or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among the AF measurements, the functional counterpart of a concentric muscle action merging into an isometric one was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly influenced by systematic errors. In contrast, the maximal variables of the AF (AFisomax and AFeccmax) could be measured reliably, as indicated by high test-retest reliability. Despite substantial correlations between the force variables, AFisomax differed significantly from MVIC and AFmax, which was identical to AFeccmax in almost all cases. Moreover, AFisomax showed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another possibility to quantify the AFisomax using a handheld device applied in combination with a manual muscle test. This assessment provides a more practical approach for clinical purposes.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the field of tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations, such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations, such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation − an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified according to their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low-performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models, diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking are developed. The impact of the platform economy is demonstrated using the example of the market entry of Bigtech. The role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Like natural transcription factors, synthetic transcription factors consist of a DNA-binding domain, which attaches specifically to the binding-site sequence upstream of the target gene, and an activation domain, which recruits the transcription machinery so that the target gene is expressed. The difference from natural transcription factors is that both the DNA-binding domain and the activation domain can be foreign to the host, which allows artificial metabolic pathways, mostly chemically induced, to be established in the host. The optogenetic synthetic transcription factors developed here go one step further. The DNA-binding domain is no longer coupled to the activation domain but to the blue-light photoreceptor CRY2, while the activation domain was fused to its interaction partner CIB1. Under blue-light irradiation, CRY2 and CIB1 dimerize, bringing the two domains together so that a functional transcription factor is formed. This system was genomically integrated into Saccharomyces cerevisiae. The constructed system was verified with the reporter yEGFP, which could be detected by flow cytometry. It was shown that yEGFP expression can be tuned by emitting blue-light pulses of different durations and by varying the DNA-binding domain, the activation domain, or the number of binding sites to which the DNA-binding domain attaches. To make the system attractive for industrial applications, it was scaled up from deep-well plates to a photobioreactor. Moreover, the blue-light system proved functional both in the laboratory strain YPH500 and in the industrially widely used yeast strain CEN.PK. Furthermore, an industrially relevant protein could also be expressed with the verified system.
Finally, in this work the established blue-light system was successfully combined with a red-light system, which had not been described before.
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other is stabilized by various nanoparticles modified with BSA. A comprehensive study of all synthesis stages as well as of the resulting capsules was carried out, and a plausible mechanism of capsule formation was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment causes cross-linking of the BSA molecules via the formation of intermolecular disulfide bonds. This thesis demonstrates experimental evidence of the ultrasonically induced cross-linking of BSA in the shells of protein-based microcapsules. Thereby, the concept proposed many years ago by Suslick and co-workers is confirmed by experimental evidence for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed, based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and with shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. The examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surfaces in the capsule shell are not cross-linked by intermolecular disulfide bonds; instead, a Pickering emulsion forms.
The surface modification of composite microcapsules was successfully demonstrated both through pre-modification of the main components and through post-modification of the surface of finished composite microcapsules. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
The amount of microplastics (MPs) in the environment is expected to increase in the near future due to the growing consumption of plastic products and the further fragmentation of plastics into small pieces. Compared to the marine environment, the fate and effects of MPs released into freshwater environments are still scarcely studied. To understand the possible effects and interactions of MPs in freshwater environments, planktonic zooplankton organisms are very useful because of their crucial trophic role. In particular, freshwater rotifers are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and the effects of MPs in rotifers under near-natural scenarios and to identify processes, such as the aggregation of MPs, the food dilution effect, and increasing MP concentrations, that could influence the final outcome of MPs in the environment. In a near-natural scenario, the interactions of MPs with bacteria and algae and their aggregation, together with MP size and concentration, are considered drivers of ingestion and effect. Aggregation makes smaller MPs more available to rotifers and larger MPs less ingested. The negative effect caused by the ingestion of MPs was modulated by their size but also by the quantity and quality of food, which caused variable responses. Rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and decrease population growth and reproduction. Finally, in a scenario incorporating an entire zooplankton community, MPs were ingested by most individuals depending on their feeding mode but also on the MP concentration, which was found to be essential for the availability of MPs.
This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and realistic view of the effects of MPs in the ecosystem.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ must also be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
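The transitive closure over pairwise duplicate decisions is typically computed with a union-find structure; a minimal sketch (record and function names are illustrative) that shows how two pairwise decisions pull a never-compared pair into the same cluster:

```python
class UnionFind:
    """Union-find for computing the transitive closure of pairwise duplicate decisions."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def transitive_closure(duplicate_pairs):
    """Group records into clusters implied by the pairwise duplicate classifications."""
    uf = UnionFind()
    for a, b in duplicate_pairs:
        uf.union(a, b)
    clusters = {}
    for x in uf.parent:
        clusters.setdefault(uf.find(x), set()).add(x)
    return list(clusters.values())

# ⟨r1,r2⟩ and ⟨r2,r3⟩ are classified as duplicates, so r1 and r3 end up in one
# cluster even though that pair was never compared directly.
clusters = transitive_closure([("r1", "r2"), ("r2", "r3")])
```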
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present, with the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ additionally uses transitive dependencies to save complex comparisons for finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
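For context, the fixed-window Sorted Neighborhood baseline that DCS and DCS++ adapt can be sketched as follows; the adaptive window logic of DCS/DCS++ is omitted, and the names are illustrative:

```python
def sorted_neighborhood_pairs(records, key, window=3):
    """Generate candidate record pairs with the standard Sorted Neighborhood Method.

    Records are sorted by a sorting key; only records within a fixed-size
    sliding window are paired for comparison. DCS/DCS++ differ in that they
    adapt this window based on the number of detected duplicates.
    """
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        # Pair each record with its successors inside the window.
        for j in range(i + 1, min(i + window, len(ordered))):
            pairs.append((rec, ordered[j]))
    return pairs

# With window=2, each record is only compared to its direct neighbor in sort order.
pairs = sorted_neighborhood_pairs(["bb", "ab", "aa", "ba"], key=lambda r: r, window=2)
```

The window size trades recall against cost: a window of n (the number of records) degenerates to comparing all pairs, which is exactly the quadratic blow-up SNM avoids.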
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure of the pairwise classifications is used to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially regarding the precision of results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best performing clustering approach for duplicate detection, although its runtime is longer than that of Markov Clustering due to its subexponential time complexity. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
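As a rough illustration of what such tracking has to capture (a hypothetical simplification; the thesis counters live inside the database engine and exhibit no identifiable overhead), one could count per-column accesses and flag sequential patterns like this:

```python
from collections import defaultdict

class AccessTracker:
    """Toy sketch: per-column access counting plus a sequential-pattern check.
    Purely illustrative -- far coarser than the counters described above."""
    def __init__(self):
        self.counts = defaultdict(int)       # column -> number of accesses
        self.last_row = {}                   # column -> last accessed row
        self.sequential = defaultdict(int)   # column -> consecutive-row hits

    def record(self, column, row):
        self.counts[column] += 1
        if self.last_row.get(column) == row - 1:
            self.sequential[column] += 1
        self.last_row[column] = row

    def is_sequential(self, column, threshold=0.8):
        # A column counts as "sequentially accessed" if most accesses
        # hit the row directly after the previously accessed one.
        return self.sequential[column] / max(self.counts[column], 1) >= threshold

tracker = AccessTracker()
for row in range(10):          # a full column scan, rows 0..9
    tracker.record("price", row)
```

A tiering decision can then treat sequentially scanned columns differently from randomly probed ones, since NVM's read penalty is smaller for sequential access.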
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
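The placement problem can be illustrated with a simple greedy heuristic that fills the DRAM budget with the hottest data first (a sketch with invented inputs, not the thesis's tiering algorithm, which additionally accounts for access patterns):

```python
def place_segments(segments, dram_budget):
    """Greedy data placement: keep the most frequently accessed bytes in DRAM.

    `segments` is a list of (name, size_bytes, access_count) tuples -- invented
    inputs for illustration. Returns a dict mapping name to "DRAM" or "NVM".
    """
    placement = {}
    # Sort by accesses per byte, descending: hottest data first.
    for name, size, accesses in sorted(
            segments, key=lambda s: s[2] / s[1], reverse=True):
        if size <= dram_budget:
            placement[name] = "DRAM"
            dram_budget -= size
        else:
            placement[name] = "NVM"
    return placement

segments = [("orders.id", 100, 5000), ("orders.comment", 400, 10),
            ("lineitem.price", 200, 4000)]
placement = place_segments(segments, dram_budget=350)
```

In this toy run, the rarely touched `orders.comment` segment is evicted to NVM while the two hot segments stay in DRAM, mirroring the skew-aware placement described above.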
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite the impoverished genetic diversity of the founding populations. In this context, DNA methylation is considered a promising mechanism to explain successful adaptation in the new habitat. DNA methylation is a heritable modification that alters gene expression without changing the underlying genetic information; it is therefore considered a so-called epigenetic mechanism, but has so far been studied mainly in clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, based on this, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species with rearing the offspring under common climatic conditions to examine DNA methylation in an ecological-evolutionary context. The contrast between chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. Against this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient as well as between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climatic gradient of more than 1000 km in Central Europe. I found population differences in flowering time, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots for S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic marker and one epigenetic marker putatively under selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range of S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and additionally compared non-native with native plant species. Seeds were transplanted between regions more than 600 km apart, with either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in either the non-native or the native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with little clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation but could instead be a consequence of multiple introductions, dispersal corridors, and meta-population dynamics. Likewise, my results illustrate that studying plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the applications or running services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without noticeable impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of failures of hardware resources, including servers, switches, and storage. This increases the reliability of cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the defining feature of dynamic management of compute resources. Live migration is a commonly used and essential feature in cloud and virtual data-center environments: the load balancing, power saving, and fault tolerance features of cloud computing all depend on live migration to optimize the usage of virtual and physical resources. As we discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments, but its cost cannot be ignored. The cost of live migration includes the migration time, downtime, network overhead, increased power consumption, and CPU overhead.
IT administrators typically run live migrations of virtual machines without any estimate of the migration cost, so resource bottlenecks, higher migration costs, and migration failures can occur. The first problem we discuss in this thesis is how to model the cost of live migration of virtual machines. Secondly, we investigate how machine learning techniques can help cloud administrators obtain an estimate of this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that cloud administrators can integrate with cloud administration portals to answer the research questions raised above.
Our research methodology is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to propose a prediction approach for the cost of live migration of virtual machines. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction combined with a prediction of the data-center network utilization. Live migration with persistent-memory clusters is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here could be integrated with the VMware vSphere cluster portal, so that IT administrators can use the cost prediction feature and the timing optimization option before proceeding with a live migration.
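For intuition about the quantities such a cost model must capture, a first-order textbook model of iterative pre-copy migration can be sketched as follows (parameter values are invented; the thesis builds empirical, learned models rather than this analytic form):

```python
def precopy_migration_cost(mem_bytes, dirty_rate, bandwidth,
                           stop_bytes=64 * 2**20, max_rounds=30):
    """Estimate total migration time and downtime of iterative pre-copy.

    Textbook first-order model, not the thesis's ML predictor: each round
    retransmits the memory dirtied during the previous round until the
    remainder fits under `stop_bytes`, which is then sent during downtime.
    All rates are in bytes per second.
    """
    if dirty_rate >= bandwidth:
        raise ValueError("migration cannot converge: dirty rate >= bandwidth")
    total_time, to_send = 0.0, float(mem_bytes)
    for _ in range(max_rounds):
        if to_send <= stop_bytes:
            break
        round_time = to_send / bandwidth
        total_time += round_time
        to_send = dirty_rate * round_time   # pages dirtied while copying
    downtime = to_send / bandwidth          # final stop-and-copy phase
    return total_time + downtime, downtime

total, down = precopy_migration_cost(
    mem_bytes=8 * 2**30,       # 8 GiB VM
    dirty_rate=100 * 2**20,    # 100 MiB/s of memory dirtied
    bandwidth=1.25 * 2**30)    # roughly a 10 Gbit/s link
```

The model makes the trade-off visible: memory-intensive workloads (high dirty rate) need more rounds and more network traffic, which is why predicting the cost before migrating, as proposed above, is valuable.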
Testing shows that our proposed approach for predicting the cost of VM live migration achieves acceptable results, with less than 20% prediction error, and can easily be implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This technique can help network administrators save migration time by exploiting higher network rates, with a higher probability of success.
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. The modes of operation and configurations of persistent memory are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing both servers with persistent memory and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs backed by persistent memory.
Aldehyde oxidases (AOXs) (E.C. 1.2.3.1) are molybdoflavo-enzymes belonging to the xanthine oxidase (XO) family. Mammalian AOXs contain one molybdenum cofactor (Moco), one flavin adenine dinucleotide (FAD) and two [2Fe-2S] clusters, whose presence is essential for the activity of the enzyme. Human aldehyde oxidase (hAOX1) is a cytosolic enzyme mainly expressed in the liver and is involved in the metabolism of xenobiotics. It oxidizes aldehydes to their corresponding carboxylic acids and hydroxylates N-heterocyclic compounds. Since these functional groups are widely present in therapeutics, understanding the behaviour of hAOX1 has important implications in medicine. During the catalytic cycle of hAOX1, the substrate is oxidized at the Moco and electrons are transferred internally to the FAD via the FeS clusters. An electron acceptor juxtaposed to the FAD receives the electrons and re-oxidizes the enzyme for the next catalytic cycle. Molecular oxygen is the endogenous electron acceptor of hAOX1; it is thereby reduced, producing reactive oxygen species (ROS) including hydrogen peroxide (H2O2) and superoxide (O2•-). The production of ROS has patho-physiological importance, as ROS can affect a wide range of cell components, including the enzyme itself.
In this thesis, we show that hAOX1 loses its activity over multiple cycles of catalysis due to endogenous ROS production, and we identify a cysteine-rich motif that protects hAOX1 from the damaging effects of ROS. We also show that a sulfido ligand, which is bound at the Moco and is essential for the catalytic activity of the enzyme, is vulnerable during turnover: the ROS produced during the course of the reaction are able to remove this sulfido ligand from the Moco. In addition, ROS oxidize particular cysteine residues. The combined effects of ROS on the sulfido ligand and on specific cysteine residues result in inactivation of the enzyme. Furthermore, we report that small reducing agents containing reactive sulfhydryl groups selectively inactivate some of the mammalian AOXs by modifying the sulfido ligand at the Moco. The mechanism of ROS production by hAOX1 is a further aspect investigated in this thesis. We show that the ratio of the two types of ROS, hydrogen peroxide (H2O2) and superoxide (O2•-), produced by hAOX1 is determined by a particular position on a flexible loop located in close proximity to the FAD. The size of the cavity at the ROS-producing site, i.e. the N5 position of the FAD isoalloxazine ring, kinetically affects the amount of each type of ROS generated by hAOX1. Taken together, hAOX1 is an enzyme of emerging importance in pharmacological and medical studies, not only due to its involvement in drug metabolism but also due to its ROS production, which has physiological and pathological implications.
Hydraulically driven fractures play a key role in subsurface energy technologies across several scales. By injecting fluid at high pressure into rock of intrinsically low permeability, the in-situ stress field and the fracture development pattern can be characterised, and the rock permeability can be enhanced. Hydraulic fracturing is a standard commercial procedure in the petroleum industry for enhanced oil and gas production from low-permeability rock reservoirs. In enhanced geothermal system (EGS) utilization, however, a major geological concern is the unsolicited generation of earthquakes due to fault reactivation, referred to as induced seismicity, with magnitudes large enough to be felt at the surface or to damage facilities and buildings. Furthermore, reliable interpretation of hydraulic fracturing tests for stress measurement remains a great challenge for these energy technologies. Therefore, this cumulative doctoral thesis investigates the following research questions: (1) How do hydraulic fractures grow in hard rock at various scales? (2) Which parameters control hydraulic fracturing and hydro-mechanical coupling? (3) How can hydraulic fracturing in hard rock be modelled?
In the laboratory-scale study, several laboratory hydraulic fracturing experiments, performed on intact cubic Pocheon granite samples from South Korea with different injection protocols, are investigated numerically using Irazu2D. The goal of the laboratory experiments is to test the concept of cyclic soft stimulation, which may enable sustainable permeability enhancement (Publication 1).
In the borehole-scale study, hydraulic fracturing tests performed in boreholes in central Hungary to determine the in-situ stress for a geological site investigation are reported. At a depth of about 540 m, the recorded pressure-versus-time curves in mica schist with low-dip-angle foliation show an atypical evolution. To explain this observation, a series of discrete element computations using Particle Flow Code 2D is performed (Publication 2).
In the reservoir-scale study, the hydro-mechanical behaviour of fractured crystalline rock during one of the five hydraulic stimulations at the Pohang Enhanced Geothermal site in South Korea is studied. The fluid pressure perturbation at faults several hundred metres in length during hydraulic stimulation is simulated using FracMan (Publication 3).
The doctoral research shows that the resulting hydraulic fracture geometry depends “locally”, i.e. at and below the length scale of the representative elementary volume (REV), on the geometry and strength of natural fractures, and “globally”, i.e. at super-REV domain volumes, on the far-field stresses. Regarding hydro-mechanical coupling, it is suggested to define separate coupling relationships for the intact rock mass and for natural fractures. Furthermore, the relative importance of the parameters affecting the magnitude of the formation breakdown pressure, a parameter characterising hydro-mechanical coupling, is established. It can also be concluded that there is a clear gap between the capabilities of the simulation software and the complexity of the studied problems. Therefore, the computational time for simulating complex hydraulic fracture geometries must be reduced while maintaining high-fidelity results. This can be achieved either by extending the computational resources via parallelization techniques or by using time-scaling techniques. The ongoing development of the numerical models used focuses on tackling these methodological challenges.
Countries processing raw coffee beans often lack the economic means to fight the serious environmental problems caused by the by-products and wastewater generated during wet coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus to make the process economically more attractive through valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of the green coffee but also that of the by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. The results show that the composition of the wastewater depends on how much and how often the wastewater is recycled during processing. For the coffee beans, the results indicate that the proteins might be affected during processing, and a positive effect of fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of the green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of the roasted beans. Since these compounds are involved in the Maillard reaction during roasting, coffee producers could exploit this to improve the quality of the green coffee beans and ultimately the cup quality.
The valorization of coffee wastes through conversion to activated carbon has been considered as a low-cost option for creating an adsorbent with the potential to compete with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed, and the resulting materials were assessed for their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have an adsorption efficiency similar to that of commercial activated carbon.
The results of this study document significant information on the processing of coffee from the de-pulped to the green bean. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work should be directed at optimizing the activation methods to improve the quality of the materials produced, and at assessing the viability of applying such methods in situ to bring the coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriate, simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
More than a century ago, the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel’s laws, was first reported. In the plant kingdom, three genomic compartments, the nucleus, the chloroplast, and the mitochondrion, can participate in such phenomena. High-throughput sequencing (HTS) has proved to be a key technology to investigate NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis, and interpretation of such datasets remain challenging due to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze the resulting data.
In the first study, I implemented a novel post-assembly pipeline, called Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. Using Illumina mate-pair and PacBio RS II data, the stoichiometry of the RRPs was determined quantitatively and found to differ by up to three-fold.
In the second study, I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the genus Oenothera and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
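The core idea of correlating segment-wise nucleotide changes with a phenotype can be sketched in toy form: slide a window over the alignment, count per-sequence changes against a reference, and correlate those counts with the phenotype values (the sequences, window size, and phenotype values below are invented; this is not the published CM implementation):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length number lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def correlation_map(alignment, reference, phenotypes, window=3):
    """For each alignment window, correlate the per-sequence number of
    nucleotide changes (vs. the reference) with a numeric phenotype."""
    results = []
    for start in range(0, len(reference), window):
        stop = min(start + window, len(reference))
        changes = [sum(1 for i in range(start, stop) if seq[i] != reference[i])
                   for seq in alignment]
        if len(set(changes)) > 1:   # correlation undefined for constant counts
            results.append((start, pearson(changes, phenotypes)))
    return results

reference = "AAAAAA"
alignment = ["AAAAAA", "TAAAAA", "TTAAAA", "TTTAAA"]  # invented toy sequences
phenotype = [0.0, 1.0, 2.0, 3.0]                      # e.g. a growth measure
hits = correlation_map(alignment, reference, phenotype, window=3)
```

Windows whose mutation counts track the phenotype (here, the first window) stand out, analogous to how accD and ycf2 emerged from the real analysis.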
For the third study, I programmed a pipeline to investigate an NMI phenomenon known as paramutation in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the phenotype it causes by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a prerequisite for studying mitochondria-associated phenotypes, by identifying genes causing interplastidial competition (CM), and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally-altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
The echo chamber model describes the development of groups in heterogeneous social networks. By a heterogeneous social network we mean a set of individuals, each of whom holds exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is selected uniformly at random from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, their opinions move closer together; if their opinions are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this thesis we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Part 7, we leave aside the opinions of the individuals and assume that the position of the edges changes in each round as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are mutually identifiable networks that the analysed dynamics do not distinguish, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
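The central objects here, the stationary distribution and the convergence towards it, can be illustrated on a toy reversible chain (the transition matrix is invented and unrelated to the actual echo chamber state space):

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Power iteration for pi = pi P, where P is a row-stochastic matrix
    given as a list of rows. Converges for irreducible aperiodic chains;
    the convergence speed is governed by the spectral gap, which the
    Cheeger constant bounds from below (up to a square)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# Toy 3-state reversible chain (random-walk flavour; values invented).
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
pi = stationary_distribution(P)
```

For this chain, detailed balance gives pi = (1/4, 1/2, 1/4), which the iteration recovers numerically.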
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model a hierarchical structure in the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time using a quasi-stationary distribution. This hierarchical structure also provides a bridge to the classical theory of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part rests entirely on an analytical treatment of second-degree recurrences with linear coefficients; the convergence to 0 of the resulting sequence as well as the speed of convergence are proved. The second part determines upper bounds on the expected population size and its variance, as well as the difference between the determined upper bound and the actual expected value. For these results we rely almost exclusively on the theory of ordinary nonlinear differential equations.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this population will increase rapidly in the coming decades, accompanied by continued urbanisation of cities located in mountain valleys. One manifestation of this ongoing socio-economic change in mountain societies is the growth of settlement areas and transportation infrastructure, while increased power needs fuel the construction of hydropower plants along rivers in the high-mountain regions of the world. However, the physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One potential implication of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, such as landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events have caused high fatality counts and damage; the highest documented losses due to outburst floods since the start of the 20th century were caused by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events remain scarce. Projected cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase in the water volume stored in meltwater lakes as well as in the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. The valley is also home to Nepal’s second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address this scarcity of data on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors widely applied in previous GLOF assessments are statistically relevant for estimating past GLOF susceptibility. To this end, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach with two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that lakes in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – have been more prone to releasing GLOFs than lakes in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. The effects of monsoonality, lake elevation, and lake-area dynamics, however, were more ambiguous. This challenges the credibility of a lake's rapid growth in surface area as an indicator of a pending outburst – a metric that has been applied in regional GLOF assessments worldwide.
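In simplified form, the two-level structure of such models can be sketched as a logistic regression with region-specific intercepts. The predictor names and all coefficient values below are illustrative placeholders, not the fitted posterior estimates from the thesis:

```python
import math

def glof_probability(region_intercept, coefs, predictors):
    """Two-level logistic model: a region-specific intercept plus a
    linear combination of lake-level predictors, squashed through the
    logistic function (illustrative sketch, not the fitted model)."""
    logit = region_intercept + sum(coefs[k] * predictors[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical standardised predictors for a single lake
predictors = {"lake_area": 1.2, "glacier_mass_balance": -0.8, "elevation": 0.3}
coefs = {"lake_area": 0.9, "glacier_mass_balance": -1.1, "elevation": 0.1}

# A region with strongly negative mass balances is given a higher
# (less negative) baseline intercept than a stable region
p_dynamic_region = glof_probability(-3.0, coefs, predictors)
p_stable_region = glof_probability(-4.5, coefs, predictors)
```

In the thesis, the intercepts and coefficients are treated as random variables with priors and partial pooling across regions; the sketch only shows how the two levels combine into a per-lake outburst probability.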
At the local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extent of flood sediments from RapidEye satellite imagery and used it as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I used outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological floods (1,000 m³ s⁻¹) to cataclysmic outburst floods (600,000 m³ s⁻¹), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s⁻¹ in the upper Seti Khola and attenuated to 500 m³ s⁻¹ on arriving in Pokhara's suburbs some 15 km downstream.
Two-dimensional flow simulations in ANUGA with peak discharges that are orders of magnitude higher show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative evidence for the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola, producing floods with peak discharges of >50,000 m³ s⁻¹.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s⁻¹, show that the relative inundation hazard is highest in Pokhara's north-western suburbs. There, hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Moreover, along this reach, informal settlements and gravel-mining activities lie close to the active channel. By tracing construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally three- to twentyfold in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the controls on the susceptibility of glacial lakes to sudden outbursts at the regional scale and into the flow dynamics of flood waves released by past events at the local scale, which can aid future hazard assessments at both scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements may provide valuable input to anticipatory assessments of multiple risks in the Pokhara valley.
As climate change worsens, there is growing urgency to promote renewable energies and improve their accessibility to society. Solar energy harvesting is of particular importance here. Metal halide perovskite (MHP) solar cells are currently indispensable to research on future solar energy generation. MHPs are crystalline semiconductors of increasing relevance as low-cost, high-performance materials for optoelectronics. Their processing from solution at low temperature enables the easy fabrication of thin-film elements such as solar cells, light-emitting diodes, and photodetectors. Understanding the coordination chemistry of MHPs in their precursor solution would allow control over the thin-film crystallization, the material properties, and the final device performance.
In this work, we elaborate on the key parameters for manipulating the precursor solution, with the long-term objective of enabling systematic process control. We focus on the nanostructural characterization of the initial arrangements of MHPs in the precursor solutions. Small-angle scattering is particularly well suited for measuring nanoparticles in solution. This technique proved valuable for the direct analysis of perovskite precursor solutions at standard processing concentrations without causing radiation damage. We gain insights into the chemical nature of widely used precursors such as methylammonium lead iodide (MAPbI3), presenting the first insights into the complex arrangements and interactions within this precursor state. Furthermore, we transfer these results to other, more complex perovskite precursors. The influence of compositional engineering is investigated using the addition of alkali cations as an example. As a result, we propose a detailed working mechanism for how the alkali cations suppress the formation of intermediate phases and improve the quality of the crystalline thin film. In addition, we investigate the crystallization process of a tin-based perovskite composition (FASnI3) under the influence of fluoride chemistry. We show that the frequently used additive tin fluoride (SnF2) selectively binds undesired oxidized tin (Sn(IV)) in the precursor solution. This prevents its incorporation into the crystal structure and thus reduces the defect density of the material. Furthermore, SnF2 leads to a more homogeneous crystal growth process, which results in improved crystal quality of the thin-film material.
In total, this study provides a detailed characterization of the complex system of perovskite precursor chemistry. We thereby cover parameters relevant to future MHP solar cell process control, such as (I) the environmental impact of concentration and temperature, (II) the addition of counter-ions to reduce the diffuse layer surrounding the precursor nanostructures, and (III) the targeted use of additives to selectively eliminate unwanted components and ensure more homogeneous crystal growth.
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes, since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking macromolecules or by polymerizing suitable precursors. The crosslinks are not necessarily covalent bonds but can also be formed by physical interactions such as π-π stacking, hydrophobic interactions, or H-bonding. On-demand, in situ forming hydrogels have garnered increasing interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and filling of cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation, with fast reaction kinetics and the ability to proceed under physiological conditions. Incorporating a trigger function into a crosslinking system becomes even more interesting, since gelling can then be controlled with a stimulus of choice. The use of small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, given the broad range of structure-function relations available with the different constituent amino acids, can be exploited to realise stimuli-promoted in situ covalent crosslinking and gelation. The advantage of this system over conventional polymer-polymer hydrogel systems is the ability to tune and predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach uses a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification to the sequence. Realising this aim required the completion of three milestones.
First, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for the pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the thiol moiety to be activated, SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with the pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to be incorporated into the thioester-bearing, peptide-based crosslinker precursor. Using ‘pseudo’ 2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and above led to the direct elimination of SH-Phe as a candidate thiol for the synthesis of the peptide mimetic.
Based on these initial studies, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified for the pH-promoted gelation system. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, on theoretical grounds, be sufficient to effect a ‘pseudo’ intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a ‘pseudo’ intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate-ion generation by the cysteamine thiol provided the stimulus (pH) needed for the overall success of the TTE (activation step) – thiol-Michael addition (crosslinking) strategy.
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide/protein thiols. L-cysteine, a biologically relevant thiol, and methylthioglycolate, a small-molecular-weight thiol with a relatively similar thiol pKa, both led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results make TDP even more attractive and potentially the first crosslinker precursor for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalized 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. With a 1:1 thiol:maleimide molar ratio, TDP-PEG4MAL hydrogels formed within 3, 12, and 24 hours at pH values of 8.5, 8.0, and 7.5, respectively. When the thiol:maleimide molar ratio was increased to 2:1, gelation times of 3, 5, and 30 minutes were observed for the same pH trend.
A direct correlation of thiol content with the G' of the gels at each pH could also be drawn by comparing gels with a thiol:maleimide ratio of 1:1 to those with a ratio of 2:1. This is supported by the fact that the storage modulus (G') depends linearly on the crosslinking density of the polymer. The initial G' of all gels ranged between 200 and 5000 Pa, which falls within the range of elasticities of certain tissue microenvironments, for example brain tissue (200–1000 Pa) and adipose tissue (2500–3500 Pa).
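The stated linear dependence of G' on crosslinking density follows from rubber-elasticity theory, which also allows a rough back-calculation of the crosslink density from the measured moduli. The sketch below assumes the affine network model and physiological temperature; it is an order-of-magnitude illustration, not an analysis from the thesis:

```python
# Affine rubber-elasticity estimate: G' ~ nu * R * T, where nu is the
# molar density of elastically effective crosslinks (a simplification;
# network defects lower the effective density in real gels).
R = 8.314   # gas constant, J mol^-1 K^-1
T = 310.0   # K, assumed physiological temperature

def crosslink_density(G_prime_pa):
    """Molar crosslink density (mol m^-3, i.e. mmol L^-1) from G'."""
    return G_prime_pa / (R * T)

# The measured moduli of 200-5000 Pa correspond to sub-millimolar to
# low-millimolar crosslink densities:
nu_low = crosslink_density(200.0)    # ~0.08 mol m^-3
nu_high = crosslink_density(5000.0)  # ~1.9 mol m^-3
```

Because the relation is linear, a 25-fold increase in modulus maps directly onto a 25-fold increase in effective crosslink density under these assumptions.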
The knowledge gained in this study on designing and tuning the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics as presented in this work has revealed interesting new insights relative to the state of the art. Using the results obtained as a reference, the strategy offers the possibility of extending the concept to the controlled delivery of active molecules needed for other robust and high-yielding crosslinking reactions for biomedical applications. Applications for this sequentially coupled functional system could be envisioned, e.g., in the treatment of inflamed tissues of the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By including cell-adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended to the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. The anticipated cleavage of TDPo at the Gly-SLeu bond yields active thiol units for the subsequent reaction with orthogonal Michael-acceptor moieties. One advantage of stimuli-promoted in situ crosslinking systems using short peptides should be the ease of designing the required peptide molecules, owing to the predictability of peptide function from sequence structure. Consequently, the functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a fluorometric thiol assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo to these enzymes. The time-dependent increase in fluorescence intensity in the presence of the thiol assay signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed that cleavage studies with the fluorometric thiol assay introduce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the enzyme cycling parameters kcat and KM.
Gelation studies with PEG4MAL at 10 wt% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Hydrolytic cleavage of PEG4TDPo possibly contributed as well, as it resulted in the gelation of PEG4MAL blank samples, although only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic cleavage more negatively than the hydrolytic cleavage. Quantifying the exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited in situ crosslinking and gelation kinetics on a par with those reported for already active dithiols. The advantageous on-demand functionality associated with its pH sensitivity and physiological compatibility makes it a strong candidate for further research where biomedical applications in general and on-demand material synthesis are concerned.
The results from the MMP-promoted gelation system unveil a simple but unexplored approach for the in situ synthesis of covalently crosslinked soft materials that could lead to an alternative pathway for addressing cancer metastasis by using MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors, despite extensive work in this regard.
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, the mass-loss rates of exoplanet atmospheres, and the study of an exoplanet's atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151's corona, based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of LX = 5.5 × 10²⁶ erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument eROSITA on board the Spectrum-Roentgen-Gamma (SRG) mission, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive the X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first-time detections, 14 are transiting exoplanets that undergo irradiation from their host stars at a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
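Mass-loss estimates of this kind are commonly derived from the stellar high-energy flux via the energy-limited approximation; whether the thesis applies exactly this prescription is an assumption here, and the efficiency and planet parameters below are purely illustrative:

```python
import math

G_CGS = 6.674e-8   # gravitational constant in cgs units

def mass_loss_rate(F_xuv, R_p, M_p, eps=0.1):
    """Energy-limited escape: Mdot = eps * pi * F_xuv * R_p^3 / (G * M_p).
    All inputs in cgs; the tidal-enhancement (K) factor is neglected and
    eps = 0.1 is an assumed heating efficiency."""
    return eps * math.pi * F_xuv * R_p**3 / (G_CGS * M_p)

# Illustrative hot Jupiter: Jupiter's radius and mass, and an assumed
# XUV flux of 1e4 erg s^-1 cm^-2 at the planet's orbit
R_JUP, M_JUP = 7.149e9, 1.898e30   # cm, g
mdot = mass_loss_rate(1e4, R_JUP, M_JUP)   # of order 1e10 g s^-1
```

Because the estimate is linear in the incident flux, uncertainties in the derived X-ray luminosity propagate directly into the catalogued mass-loss rates.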
In the next generation of space observatories, X-ray transmission spectroscopy of exoplanet atmospheres will be possible, allowing a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical hot-Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.
In this dissertation, the oxygen in the scaffold of the [1,3]-dioxolo[4.5-f]benzodioxole fluorescent dyes (DBD fluorescent dyes) was completely replaced by sulfur, yielding a new class of fluorescent dyes, the benzo[1,2-d:4,5-d']bis([1,3]dithiole) fluorophores (S4-DBD fluorophores). In total, nine of the particularly interesting difunctionalized representatives were synthesized, differing in their electron-withdrawing groups and in their arrangement.
Exchanging oxygen for sulfur led to in part striking changes in the fluorescence parameters, such as a decrease in fluorescence quantum yields and lifetimes, but also a pronounced red shift of the absorption and emission wavelengths with large Stokes shifts. The S4-DBD fluorophores are thus a valuable addition to the DBD dyes.
The decrease in lifetimes and quantum yields could be traced back to a high population of the triplet state, caused by the enhanced spin-orbit coupling of sulfur. Together with the physical chemistry group of the University of Potsdam, the photophysical processes were also elucidated by transient absorption spectroscopy (TAS).
A strategy for functionalizing the S4-DBD dyes at the thioacetal scaffold was developed. In this way, alcohol, propargyl, azide, NHS-ester, carboxylic acid, maleimide, and tosyl groups could be attached to S4-DBD dialdehydes.
In addition, molecular rods based on sulfur oligospiroketals (S-OSKs), in which oxygen was replaced by sulfur, were investigated. Here, the syntheses of the solubility-mediating TER sleeve and of tetrathiapentaerythritol as the basic building block were significantly improved, and a simple S-OSK polymer was prepared from them. Further experiments on the construction of a rod remain to be carried out. In first experiments towards building an S-OSK rod, the dithiocarbonate group emerged as a potentially suitable protecting group for tetrathiapentaerythritol.
Proteins are crucially involved in virtually all processes in living cells and are also used in many ways in biotechnology. A protein consists of a chain of amino acids. Frequently, several of these chains assemble into larger structures and functional units, so-called protein complexes. It was recently shown that protein complex formation can already take place during protein biosynthesis (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential for cellular processes as well as for biotechnological applications. Experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, but so far not the dynamics of complex formation on different time scales. Fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, which builds on experimental findings, allows a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models represent as realistically as possible the experimental systems of our collaboration partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, Heidelberg University) in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). The theoretical model is parameterised using a simplified expression system in which the proteins can bind only to the chip surface but not to each other. In this simplified in vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime, and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and decreases monotonically thereafter. In both the non-simplified in vitro system and the in vivo system, two competing assembly pathways coexist: in the in vitro system, complex formation occurs either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in the in vivo system, co- and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in vitro system is time-dependent and can be influenced, among other things, by the limitation and strength of the binding sites on the chip surface. In the in vivo system, the spatial distance between the synthesis sites of the two protein components influences complex formation only if the subunits degrade quickly. In this case, co-translational assembly clearly dominates even on short time scales, whereas for stable subunits there is a shift from the dominance of post-translational assembly to a slight dominance of co-translational assembly.
In addition to the dynamics, the in silico models can also represent, among other things, the localisation of complex formation and binding, which enables a comparison of the theoretical predictions with experimental data and thus a validation of the models. The in silico approach presented here complements the experimental methods and thereby allows their results to be interpreted and new insights to be derived from them.
Neural conversation models aim to predict appropriate contributions to a (given) conversation by using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011), who investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue-system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and research in this area has therefore been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained self-attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a shift from trying to somehow carry on a conversation to generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate on the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing these maxims as demands on conversation models: being fluent, informative, consistent with the given context, coherent, and compliant with social norms. We then identify different targets (or intervention points) for achieving conversational appropriateness by reviewing recent research in the field.
In this thesis, we investigate in greater detail the aspect of consistency with context, one facet of our interpretation of appropriateness. In the course of this research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use it in this thesis to gather new insights into context-augmented dialogue generation.
We further introduce a new way of encoding context within self-attention-based neural networks. To this end, we elaborate on the space-complexity issue posed by knowledge graphs and propose a concise encoding strategy for structured context, inspired by graph neural networks (Gilmer et al., 2017), to reduce the space complexity of the additional context. We discuss the limitations of context augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
Lastly, we analyze the potential of reinforcement and transfer learning to improve context consistency in neural conversation models. We find that current reward functions need to be more precise to unlock the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
Today's business organizations want to become more efficient and are constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader affect this relationship. I draw on job demands-resources theory to explain the overarching model used in this study and on broaden-and-build theory to motivate the use of the mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data were collected in a three-wave design, with each survey administered one month apart. Data on employees' perception of visionary leadership were collected at T1, data on both mediators at T2, and employees' perception of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employees' perceptions of job satisfaction.
This research contributes to the literature and theory by, first, broadening existing knowledge of the effects of visionary leadership on employees. Second, it contributes to the literature on the constructs of meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism connecting the study variables in line with the proposed model. Fourth, it integrates two theories, job demands-resources theory and broaden-and-build theory, providing further evidence for both. Additionally, the study offers practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and the mediation mechanism involving meaningfulness at work and commitment to the leader.
An important goal in biotechnology and (bio)medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy, and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required, and this precise differentiation is a challenge for the growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this is a parameter that cannot be addressed, necessitating new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods that use 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
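The figures of merit quoted above relate in a simple way to event counts. The following sketch computes them from hypothetical counts chosen only so the ratios land near the reported values; the numbers are illustrative, not thesis data.

```python
# Illustrative computation of purity, recovery, and throughput for a
# cell-sorting run; all counts below are hypothetical placeholders.

def sorting_metrics(target_sorted, nontarget_sorted, target_total,
                    processed_total, duration_h):
    """Purity, recovery, and throughput of one sorting run."""
    purity = target_sorted / (target_sorted + nontarget_sorted)
    recovery = target_sorted / target_total
    throughput = processed_total / duration_h   # cells processed per hour
    return purity, recovery, throughput

purity, recovery, throughput = sorting_metrics(
    target_sorted=950, nontarget_sorted=50,     # hypothetical event counts
    target_total=1118, processed_total=12000, duration_h=1.0)
print(f"purity {purity:.0%}, recovery {recovery:.0%}, "
      f"throughput {throughput:.0f} cells/h")
```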
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Digital transformation (DT) has not only been a major challenge in recent years; it is also expected to continue to have an enormous impact on our society and economy in the forthcoming decade. On the one hand, digital technologies have emerged, diffused, and come to determine our private and professional lives. On the other hand, digital platforms have leveraged the potential of digital technologies to provide new business models. These dynamics have a massive effect on individuals, companies, and entire ecosystems. Digital technologies and platforms have changed the way people consume and interact with each other. Moreover, they offer companies new opportunities to conduct their business in terms of value creation (e.g., business processes), value proposition (e.g., business models), and customer interaction (e.g., communication channels), i.e., the three dimensions of DT. However, they can also become a threat to a company's competitiveness or even survival. Eventually, the emergence, diffusion, and employment of digital technologies and platforms bear the potential to transform entire markets and ecosystems.
Against this background, IS research has explored and theorized the phenomena in the context of DT in the past decade, but not to their full extent. This is not surprising, given the complexity and pervasiveness of DT, which still requires far more research to understand DT with its interdependencies in its entirety and in greater detail, particularly through the IS perspective at the confluence of technology, economy, and society. Consequently, the IS research discipline has identified and emphasized several relevant research gaps for exploring and understanding DT, including empirical data, theories, and knowledge of the dynamic and transformative capabilities of digital technologies and platforms for both organizations and entire industries.
Hence, this thesis aims to address these research gaps on the IS research agenda and consists of two streams. The first stream of this thesis includes four papers that investigate the impact of digital technologies on organizations. In particular, these papers study the effects of new technologies on firms (paper II.1) and their innovative capabilities (II.2), the nature and characteristics of data-driven business models (II.3), and current developments in research and practice regarding on-demand healthcare (II.4). Consequently, the papers provide novel insights on the dynamic capabilities of digital technologies along the three dimensions of DT. Furthermore, they offer companies some opportunities to systematically explore, employ, and evaluate digital technologies to modify or redesign their organizations or business models.
The second stream comprises three papers that explore and theorize the impact of digital platforms on traditional companies, markets, and the economy and society at large. Here, paper III.1 examines the implications for the business of traditional insurance companies of the emergence and diffusion of multi-sided platforms, particularly in terms of value creation, value proposition, and customer interaction. Paper III.2 approaches the platform impact more holistically and investigates how the ongoing digital transformation and "platformization" in healthcare lastingly transform value creation in the healthcare market. Paper III.3 moves on from the level of single businesses or markets to the regulatory problems that the platform economy poses for economy and society, and proposes appropriate regulatory approaches for addressing these problems. Hence, these papers bring new insights to the table about the transformative capabilities of digital platforms for incumbent companies in particular and entire ecosystems in general.
Altogether, this thesis contributes to the understanding of the impact of DT on organizations and markets through multiple case-study analyses that are systematically reflected against the current state of the art in research. On this empirical basis, the thesis also provides conceptual models, taxonomies, and frameworks that help describe, explain, or predict the impact of digital technologies and digital platforms on companies, markets, and the economy or society at large from an interdisciplinary viewpoint.
Identity management is at the forefront of applications' security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and on to identity federations. In this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, we initially investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These patterns comprise the major entities linked by a decentralised identity provider. By comparison to the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling by a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies for trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method rests on the principal characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem.
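The probabilistic aggregation of correctness and validity across multiple issuers can be sketched as follows. This is one plausible reading for illustration, assuming per-provider trust is the product of correctness and validity and that issuers are independent; it is not the thesis's exact trust function.

```python
# Hedged sketch of trust-enhancing attribute aggregation: each attribute
# provider i is characterised by a correctness c_i and validity v_i.
# Per-provider trust is taken as c_i * v_i (an assumption for this sketch),
# and independent issuers are combined as the probability that at least
# one assertion is trustworthy.

def provider_trust(correctness, validity):
    """Trust in a single attribute provider."""
    return correctness * validity

def aggregate_trust(providers):
    """Combine (correctness, validity) pairs of independent issuers."""
    distrust = 1.0
    for c, v in providers:
        distrust *= 1.0 - provider_trust(c, v)
    return 1.0 - distrust

# Two moderately trusted issuers together exceed either one alone.
combined = aggregate_trust([(0.9, 0.8), (0.85, 0.9)])
print(f"{combined:.4f}")
```

Under this reading, adding issuers can only raise the aggregate trust value, which is the trust-enhancing effect described above.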
To practically apply the trust function and solve several challenges for the service provider that arise from adopting self-sovereign identity solutions, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from a single solution. Standard identity and access management protocols build the interface for applications. We can conclude that the broker’s usage at the side of the service provider does not undermine self-sovereign principles, but fosters the advancement of the ecosystem. The identity broker is applied to sample web applications with distinct attribute requirements to showcase usefulness for authentication and attribute-based access control within a case study.
Abzug unter Beobachtung (Withdrawal under Observation)
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops stationed in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for military foreign intelligence, using intelligence-service means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence, above all intercepting the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). By establishing a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and at the same time expanded its analytical capacities in the 1980s. As a result, the Bundeswehr increasingly challenged the BND's monopoly on military foreign intelligence.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still present on German territory. Under the Two Plus Four Treaty, the GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994. The treaty also prohibited the three Western powers from engaging in military activities in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their services. But what became of this "allied legacy"? Who took over the surveillance of the Soviet troops on the German side, and who monitored the troop withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between the armed forces and the intelligence services. Which military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions had been disbanded? How did the requirements placed on the BND's military foreign intelligence change? To what extent did the competition and cooperation between the Bundeswehr and the BND continue during the troop withdrawal? What role did the former Western powers play? The study is intended as a contribution not only to military history but also to the history of the German intelligence services.
The development of novel programmable materials that aim to control friction in real time holds potential to facilitate innovative lubrication solutions for reducing wear and energy losses. This work describes the integration of light-responsiveness into two lubricating materials: silicone oils and polymer brush surfaces.
The first part focuses on the assessment of 9-anthracene ester-terminated polydimethylsiloxanes (PDMS-A) and, in particular, on the variability of rheological properties and the implications that arise with UV light as an external trigger. The applied rheometer setup contains a UV-transparent quartz plate, which enables irradiation and simultaneous measurement of the dynamic moduli. UV-A radiation (354 nm) triggers the cycloaddition reaction between the terminal functionalities of linear PDMS, resulting in chain extension. The newly formed anthracene dimers cleave under UV-C radiation (254 nm) or at elevated temperatures (T > 130 °C). Sequential UV-A irradiation and thermal reprogramming over three cycles demonstrates high conversions and reproducible programming of rheological properties. In contrast, the photochemical back reaction by UV-C is incomplete and can only partially restore the initial rheological properties. The dynamic moduli increase with each cycle in photochemical programming, presumably resulting from a re-arrangement of chain segments caused by the repeated partial photocleavage and subsequent chain-length-dependent dimerization. In addition, long periods of irradiation cause photooxidative degradation, which damages the photo-responsive functions and consequently reduces the programming range. The absence of oxygen, however, reduces these undesired side reactions. Anthracene-functionalized PDMS and native PDMS mix depending on the anthracene ester content and chain length, respectively, and allow fine-tuning of the programmable rheological properties. The work shows the influence of mixing conditions during the photoprogramming step on the rheological properties, indicating that material property gradients induced by light attenuation along the beam have to be considered. Accordingly, thin lubricant films are suggested as a potential application for light-programmable silicone fluids.
The second part compares strategies for grafting spiropyran (SP)-containing copolymer brushes from Si wafers and evaluates the light-responsiveness of the surfaces. Preliminary experiments on the kinetics of the thermally initiated RAFT copolymerization of 2-hydroxyethyl acrylate (HEA) and spiropyran acrylate (SPA) in solution show, first, a strong retardation by SP and, second, the dependence of SPA polymerization on light. Surprisingly, the copolymerization of SPA is inhibited in the dark. These findings contribute to improving the synthesis of polar, spiropyran-containing copolymers. The comparison between initiator systems for the grafting-from approach indicates that PET-RAFT is superior to thermally initiated RAFT, suggesting a more efficient initiation of surface-bound CTA by light. Surface-initiated polymerization via PET-RAFT with an initiator system of Eosin Y (EoY) and ascorbic acid (AscA) facilitates copolymer synthesis from HEA and 5-25 mol% SPA. The resulting polymer film, with a thickness of a few nanometers, was detected by atomic force microscopy (AFM) and ellipsometry. Water contact angle (CA) measurements demonstrate photo-switchable surface polarity, which is attributed to the photoisomerization between the non-polar spiropyran and the zwitterionic merocyanine isomer. Furthermore, the obtained spiropyran brushes show potential for further studies on light-programmable properties. In this context, it would be interesting to investigate whether swollen spiropyran-containing polymers change their configuration and thus their film thickness under the influence of light. In addition, further experiments using an AFM or a microtribometer should evaluate whether light-programmable solvation enables a change in the frictional properties between polymer brush surfaces.
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: These languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers on executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurances, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing respective processes. They include flexibility during design and execution, ad-hoc adaption to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited for this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending Fragment-Based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics asserts that a case adheres to the constraints in both the process fragments and the data models. Among other things, multiplicity constraints must not be violated. Furthermore, the semantics is extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
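The core idea of a translational semantics into Petri nets can be illustrated with a toy example: an activity becomes a transition that may only fire when both its control-flow token and its data precondition (a data object in the required state) are present. This is a minimal, uncoloured place/transition net for illustration only, not Wickr's actual colored-Petri-net semantics; the place and transition names are made up.

```python
# Minimal place/transition Petri net: transitions consume and produce
# tokens; an activity is enabled only if its control-flow place and its
# data-object place (encoding "object in state X") both carry tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place name -> token count
        self.transitions = {}             # name -> (consume, produce)

    def add_transition(self, name, consume, produce):
        self.transitions[name] = (consume, produce)

    def enabled(self, name):
        consume, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in consume.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"{name} is not enabled")
        consume, produce = self.transitions[name]
        for p, n in consume.items():
            self.marking[p] -= n
        for p, n in produce.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical activity "assess claim": needs control flow at "start" and
# a data object "claim" in state "created"; firing moves the object along.
net = PetriNet({"start": 1, "claim[created]": 1})
net.add_transition("assess claim",
                   consume={"start": 1, "claim[created]": 1},
                   produce={"done": 1, "claim[assessed]": 1})
print(net.enabled("assess claim"))
net.fire("assess claim")
print(net.marking["claim[assessed]"])
```

Verification then amounts to exploring the reachable markings and checking that no marking violates a data constraint, which is the role the colored-net semantics plays in the thesis.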
This paper examines the function that cross-cultural competence (3C) has for NATO in a military context while focusing on two member states and their armed forces: the United States and Germany. Three dimensions were established to analyze 3C internally and externally: dimension A, dealing with 3C within the military organization; dimension B, focusing on 3C in a coalition environment/multicultural NATO contingent, for example while on a mission/training exercise abroad; and dimension C, covering 3C and NATO missions abroad with regard to interaction with the local population.
When developing the research design, the cultural studies-based theory of hegemony constructed by Antonio Gramsci was applied to a comprehensive document analysis of 3C coursework and regulations as well as official documents in order to establish a typification for cross-cultural competence.
As a result, 3C could be categorized as Type I – Ethical 3C, Type II – Hegemonic 3C, and Type III – Dominant 3C. Attributes were assigned according to each type. To validate the established typification, qualitative surveys were conducted with NATO (ACT), the U.S. Armed Forces (USCENTCOM), and the German Armed Forces (BMVg). These interviews validated the typification and revealed a varied approach to 3C in the established dimensions. It became evident that dimensions A and B indicated a prevalence of Type III, which greatly impacts the work atmosphere and effectiveness for NATO (ACT). In contrast, dimension C revealed the use of postcolonial mechanisms by NATO forces, such as applying one’s value systems to other cultures and having the appearance of an occupying force when 3C is not applied (Type I-II). In general, the function of each 3C type in the various dimensions could be determined.
In addition, a comparative study of the document analysis and the qualitative surveys resulted in a canon of culture-general skills. Given the identified lack of coherence in 3C, which demonstrably correlates with a negative impact on effectiveness and efficiency as well as interoperability, a NATO standard in the form of a standardization agreement (STANAG) was suggested based on the aforementioned findings, with a focus on: empathy, cross-cultural awareness, communication skills (including active listening), flexibility and adaptability, and interest. Moreover, tolerance of ambiguity, teachability, patience, observation skills, and perspective-taking could be considered significant. Suspending judgment and respect are also relevant skills here.
At the same time, the document analysis also revealed a lack of coherence and consistency in 3C education and interorganizational alignment. In particular, the documents examined for the U.S. Forces indicated divergent approaches. Furthermore, the interview analysis disclosed an at times large discrepancy between doctrine and actual implementation with regard to the NATO Forces.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free energy, is an interplay of surface area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimal-energy configurations of repelling charges on a sphere. We find three intertwined Thomson problem solutions on a single sphere, occurring with a probability that depends on the radius of the substrate.
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, lead to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy minimisation method or initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed with a stable Frank–Kasper phase.
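The Quantizer energy functional described above is the mean squared distance from a random point to its nearest generator. The toy 2D sketch below estimates it by Monte Carlo in a periodic box and only illustrates that an ordered lattice beats a random point set; the thesis treats the 3D problem, and all points here are made up.

```python
# Monte Carlo estimate of the 2D Quantizer energy (mean squared distance
# to the nearest generator) in a unit periodic box. Illustrative only.
import random

def periodic_d2(p, g, box):
    """Squared distance between p and g with periodic boundary conditions."""
    d2 = 0.0
    for a, b in zip(p, g):
        d = abs(a - b) % box
        d = min(d, box - d)
        d2 += d * d
    return d2

def quantizer_energy(generators, samples=20000, box=1.0, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        p = (rng.uniform(0, box), rng.uniform(0, box))
        total += min(periodic_d2(p, g, box) for g in generators)
    return total / samples

# A 2x2 square lattice (the exact energy is 1/24) vs. 4 random generators.
square = [(x, y) for x in (0.25, 0.75) for y in (0.25, 0.75)]
rng = random.Random(1)
scattered = [(rng.random(), rng.random()) for _ in range(4)]
print(quantizer_energy(square), quantizer_energy(scattered))
```

Cooling protocols in the thesis can be thought of as different schedules for moving the generators downhill on this energy landscape.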
The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
In the case where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, the so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and the characterization of the spectral flow by Phillips. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the Riemannian case.
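For orientation, the classical APS index formula has the following shape, where A is the induced boundary operator, h(A) = dim ker A, and η(A) is its eta invariant:

```latex
% Classical APS index formula for the Dirac operator D with APS boundary
% conditions on a compact Riemannian manifold M with boundary;
% T\hat{A} denotes the transgression term of the \hat{A}-form.
\operatorname{ind} D_{\mathrm{APS}}
  = \int_{M} \hat{A}(M)
  + \int_{\partial M} T\hat{A}
  - \frac{h(A) + \eta(A)}{2}
```

The Bär–Strohmaier Lorentzian theorem yields a formally analogous formula, with A now the Riemannian Dirac operator on a spacelike Cauchy hypersurface.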
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first order elliptic differential operator that will still yield regularity for solutions as well as Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting to understand more general types of boundary conditions for the Dirac operator, conserving Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, there are some differences in applying these graph-type boundary conditions to the Lorentzian Dirac operator when compared to the Riemannian setting. It will be shown that in contrast to the Riemannian case, going from a Fredholm boundary condition to its orthogonal complement works out fine in the Lorentzian setting. On the other hand, in order to deduce Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions for the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final chapter 4, a few examples of boundary conditions are calculated by applying the results of the previous chapters. By restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently than in the Riemannian setting.
Fiber-based microfluidics has undergone many innovative developments in recent years, with exciting examples of portable, cost-effective and easy-to-use detection systems already being used in diagnostic and analytical applications. In water samples, Legionella pose a serious risk as human pathogens. Infection occurs through inhalation of aerosols containing Legionella cells and can cause severe pneumonia that may even be fatal. In the case of Legionella contamination of water-bearing systems or of Legionella infection, it is essential to find the source of the contamination as quickly as possible to prevent further infections. In drinking, industrial and wastewater monitoring, the culture-based method is still the most commonly used technique to detect Legionella contamination. To improve on the laboratory-dependent determination, the long analysis times of 10-14 days, and the inaccuracy of values measured in colony-forming units (CFU), new innovative ideas are needed. In all areas of application, for example in public, commercial or private facilities, rapid and precise analysis is required, ideally on site.
In this PhD thesis, all the individual steps required for rapid DNA-based detection of Legionella were developed and characterized on a fiber-based miniaturized platform. In the first step, a fast, simple and device-independent chemical lysis of the bacteria and extraction of genomic DNA was established. Subsequently, different materials were investigated with respect to their non-specific DNA retention. Glass fiber filters proved particularly suitable, as they allow recovery of the DNA sample from the fiber material in combination with dedicated buffers and exhibit low autofluorescence, which was important for the fluorescence-based readout.
A fiber-based electrophoresis unit was developed to migrate different oligonucleotides within a fiber matrix by applying an electric field. A particular advantage over lateral flow assays is the targeted movement, even after the fiber is saturated with liquid. For this purpose, the entire process of fiber selection, fiber chip patterning, combination with printed electrodes, and testing of the retention and migration of different DNA samples (single-stranded, double-stranded and genomic DNA) was performed. DNA could be pulled across the fiber chip in an electric field of 24 V/cm within 5 minutes, remained intact, and could be used for subsequent detection assays, e.g., polymerase chain reaction (PCR) or fluorescence in situ hybridization (FISH). Fiber electrophoresis could also be used to separate DNA from other components, e.g., proteins or cell lysates, or to pull DNA through multiple layers of the glass microfiber. In this way, different fragments experienced a moderate, size-dependent separation. Furthermore, this arrangement offers the possibility that different detection reactions could take place in different layers at a later time. Electric current and potential measurements were collected to investigate the local distribution of the sample during migration. While an increase in the current signal at high concentrations indicated the presence of DNA samples, initial experiments with methylene blue-stained DNA showed a temporal sequence of signals, indicating sample migration along the chip.
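A back-of-the-envelope check makes the reported field and migration time plausible. The mobility value below is a ballpark free-solution DNA mobility assumed for this sketch, not a value from the thesis; only the field strength and duration come from the text.

```python
# Plausibility estimate of electrophoretic drift: v = mu * E, d = v * t.
# mu is an assumed literature-ballpark mobility, not thesis data.

mu = 3.5e-4          # cm^2/(V s), assumed free-solution DNA mobility
E_field = 24.0       # V/cm, field strength reported in the text
t = 5 * 60           # s, migration time reported in the text

v = mu * E_field     # drift velocity in cm/s
distance = v * t     # drift length in cm
print(f"v = {v * 1e4:.0f} um/s, distance = {distance:.1f} cm")
```

The resulting centimetre-scale drift length is consistent with DNA traversing a small fiber chip within 5 minutes.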
For the specific detection of Legionella DNA, FISH-based detection with a molecular beacon probe was tested on the glass microfiber. A specific region within the 16S rRNA gene of Legionella spp. served as the target. For this detection, suitable reaction conditions and a readout unit had to be set up first. Subsequently, the sensitivity of the probe was tested with the reverse complementary target sequence and its specificity with several DNA fragments that differed from the target sequence. Compared to other DNA sequences of similar length also found in Legionella pneumophila, only the target DNA was specifically detected on the glass microfiber. If a single base exchange is present, or if two bases are changed, the probe can no longer distinguish between DNA targets and non-targets. An analysis with this specificity can be achieved with other methods such as melting point determination, as was also briefly indicated here. The molecular beacon probe could be dried on the glass microfiber and stored at room temperature for more than three months, after which it was still capable of detecting the target sequence. Finally, the feasibility of fiber-based FISH detection for genomic Legionella DNA was tested. Without further processing, the probe was unable to detect its target sequence in the complex genomic DNA. However, after selection and application of appropriate restriction enzymes, specific detection of Legionella DNA against other aquatic pathogens with similar fragment patterns, such as Acinetobacter haemolyticus, was possible.
Humankind and its environment must be protected from the harmful effects of spent nuclear fuel, and disposal in deep geological formations is therefore favoured worldwide. The suitability of potential host rocks is evaluated, among other criteria, by their retention capacity with respect to radionuclides. Safety assessments are based on the quantification of radionuclide migration lengths with numerical simulations, as experiments cannot cover the required temporal (1 Ma) and spatial (>100 m) scales.
The aim of the present thesis is to assess the migration of uranium, a geochemically complex radionuclide, in the potential host rock Opalinus Clay. Radionuclide migration in clay formations is governed by diffusion, owing to their low permeability, and retarded by sorption. Both processes depend strongly on pore water geochemistry and mineralogy, which vary between different facies. Diffusion is quantified with the single-component (SC) approach, which uses one diffusion coefficient for all species, and with the process-based multi-component (MC) option, in which each species is assigned its own diffusion coefficient and the interaction with the diffuse double layer is taken into account. Sorption is integrated via a bottom-up approach using mechanistic surface complexation models and cation exchange. Reactive transport simulations are conducted with the geochemical code PHREEQC to quantify uranium migration, i.e. diffusion and sorption, as a function of mineralogical and geochemical heterogeneities on the host rock scale.
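As a back-of-the-envelope illustration of how diffusion and sorption interact to limit migration lengths (this is a textbook estimate, not the PHREEQC reactive transport model used in the thesis, and all parameter values below are illustrative assumptions):

```python
import math

def retardation_factor(bulk_density, porosity, kd):
    """Linear-sorption retardation factor: R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density / porosity) * kd

def diffusion_length(de, porosity, r, t_seconds):
    """Characteristic migration length L = sqrt(2 * Da * t),
    with apparent diffusivity Da = De / (theta * R)."""
    da = de / (porosity * r)
    return math.sqrt(2.0 * da * t_seconds)

# Illustrative order-of-magnitude values for a clay rock (assumptions):
De = 1e-11                      # effective diffusion coefficient, m^2/s
theta = 0.15                    # porosity
Kd = 0.02                       # distribution coefficient, m^3/kg
rho_b = 2400.0                  # bulk dry density, kg/m^3
t = 1e6 * 365.25 * 24 * 3600    # 1 Ma in seconds

R = retardation_factor(rho_b, theta, Kd)
L = diffusion_length(De, theta, R, t)
print(f"R = {R:.0f}, L = {L:.1f} m")
```

With these toy numbers the sorption-retarded diffusion front advances only a few metres in 1 Ma, which illustrates why sorption parameters (and hence pore water geochemistry) dominate the migration length.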
Sorption processes are facies dependent: migration lengths vary between the Opalinus Clay facies by up to 10 m. The geochemistry of the pore water, in particular the partial pressure of carbon dioxide (pCO2), is more decisive for the sorption capacity than the amount of clay minerals; nevertheless, higher clay mineral contents can compensate for geochemical variations. Consequently, sorption processes must be quantified as a function of the pore water geochemistry in contact with the mineral assemblage.
Uranium diffusion in the Opalinus Clay is facies independent. Speciation is dominated by aqueous ternary complexes of U(VI) with calcium and carbonate. Differences in migration lengths between SC and MC diffusion are negligible (±5 m). Moreover, the applicability of the MC approach depends strongly on the quality and availability of the underlying data. Diffusion processes can therefore be adequately quantified with the SC approach using experimentally determined diffusion coefficients.
Pore water geochemistry within the formation is governed by the hydrogeological system rather than by the mineralogy. Diffusive exchange with the adjacent aquifers has established geochemical gradients over geological time scales that can enhance migration by up to 25 m. Consequently, uranium sorption processes must be quantified following the identified order of priority: pCO2 > hydrogeology > mineralogy.
The presented research provides a workflow and orientation for other potential disposal sites with similar pore water geochemistry due to the identified mechanisms and dependencies. With a maximum migration length of 70 m, the retention capacity of the Opalinus Clay with respect to uranium is sufficient to fulfill the German legal minimum requirement of a thickness of at least 100 m.
Data stream processing systems (DSPSs) are a key enabler for integrating continuously generated data, such as sensor measurements, into enterprise applications. DSPSs make it possible to continuously analyze information from data streams, e.g., to monitor manufacturing processes and react quickly to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces the data size and thus data storage costs.
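The core aggregation operation named above can be sketched in plain Python; this is a conceptual illustration of a tumbling-window aggregation (a staple DSPS operation), not code from ESPBench or any particular DSPS:

```python
from collections import defaultdict

def tumbling_window_sum(events, window_size):
    """Group (timestamp, value) events into fixed, non-overlapping
    windows of `window_size` time units and sum the values per window."""
    windows = defaultdict(float)
    for timestamp, value in events:
        # Each event belongs to exactly one window, keyed by its start time.
        window_start = (timestamp // window_size) * window_size
        windows[window_start] += value
    return dict(sorted(windows.items()))

# Hypothetical sensor readings as (timestamp_seconds, measurement):
events = [(0, 1.0), (3, 2.0), (5, 4.0), (9, 1.5), (11, 0.5)]
print(tumbling_window_sum(events, 5))  # windows [0,5), [5,10), [10,15)
```

A real DSPS performs the same grouping incrementally over an unbounded stream and emits each window's aggregate as soon as the window closes, which is what lets it reduce data volume before storage.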
The growing volume of generated data has increased the demand for high-performance DSPSs, leading to greater interest in these systems and to the development of new DSPSs. While a larger choice of DSPSs benefits users, who can select the system that best satisfies their requirements, it also introduces the challenge of identifying the DSPS most suitable for current needs as well as future demands. Solving this challenge is important because replacing a DSPS requires costly re-writing of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer for developing stream processing applications that is used in academia as well as in enterprise contexts; it allows the defined applications to run on any of the supported DSPSs. The performance impact of Apache Beam is also studied in this dissertation: the results show a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed with the Apache Beam SDK and benchmark the implemented queries on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study prove the functioning of ESPBench and its toolkit: ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
The benchmark proposed in this thesis satisfies all requirements for application in enterprise stream processing settings, and thus represents an improvement over the current state of the art.
This thesis argues that Hegel's Science of Logic attempts to take seriously a conception of absoluteness according to which there can be nothing outside the absolute. This becomes apparent already at the beginning of the Logic: if there can be nothing outside the absolute, then the beginning, too, cannot lie outside the absolute. Consequently, the beginning can only be made with the absolute. Positing the beginning as absolute, however, is at the same time a test of the beginning's absoluteness. This test the beginning cannot pass, for it lies in the essence of a beginning to be only a beginning and not the whole, and thus not the absolute. The beginning is furthest removed from being the whole and must consequently be regarded as the least absolute within the Logic. It is therefore both: a beginning with the absolute and a beginning with the least absolute. The Logic contradicts itself already in its beginning. From this contradiction it must free itself, and this liberation drives the movement onward from the beginning, generating the progression of the Logic. The initial determination sublates itself and passes over into its successor determination. The successor determination is in turn posited as absolute, likewise fails to live up to this positing, and sublates itself into its own successor. Every determination that follows the beginning runs through this movement of being posited as absolute, failing at it, and sublating itself, until, at the very end of the Logic, this very movement is recognized as that which alone is capable of satisfying the claim to absoluteness. For if every determination is subject to this movement, then there is nothing outside this movement, and so it must be the absolute that was sought.
On its way toward the true meaning of the absolute, the Logic returns again and again to the determination of its beginning in order to redeem presuppositions that had to be made in connection with its beginning. For redeeming these presuppositions, the following passages will be of interest: the transition into the logic of essence, the transition into the logic of the concept, and the final chapter. For even at the very last, in its end, the Logic returns to its beginning. Accordingly, it can be said with Hegel: the first is also the last, and the last is also the first.
This thesis analyzes multiple coordination challenges that arise with the digital transformation of public administration in federal systems, illustrated by four case studies in Germany. I make various observations within a multi-level system and provide an in-depth analysis. Theoretical explanations from both federalism research and neo-institutionalism are utilized to explain the findings of this empirically driven work. The four articles present a holistic picture of the German case and elucidate its role as a digital government laggard. Their foci range from the macro over the meso to the micro level of public administration, differentiating between the governance and the tool dimensions of digital government.
The first article shows how multi-level negotiations lead to expensive but eventually satisfying solutions for the involved actors, creating a subtle balance between centralization and decentralization. The second article identifies legal, technical, and organizational barriers to cross-organizational service provision, highlighting the importance of inter-organizational and inter-disciplinary exchange and of both a common language and trust. Institutional change and its effects on the micro level, on citizens and the employees in local one-stop shops, mark the focus of the third article, bridging the gap between reforms and the administrative reality on the local level. The fourth article looks at the citizens' perspective on digital government reforms: their expectations, use, and satisfaction. In this vein, this thesis provides a detailed account of the importance of understanding the digital divide and therefore of the necessity of reaching out to the different recipients of digital government reforms. Where feasible, I draw conclusions from the factors identified as causes of Germany's shortcomings for other federal systems and derive reform potential therefrom. This makes it possible to gain a new perspective on digital government and its coordination challenges in federal contexts.
Organic solar cells (OSCs) have in recent years achieved high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives were long the centerpiece of the acceptor materials used throughout organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. Nevertheless, the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs remain topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason behind the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA Y-series derivative that exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses non-geminate recombination toward a non-Langevin regime. In the third work, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices: the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance-dependent Photovoltage (RPV), and steady-state techniques such as Bias-Assisted Charge Extraction (BACE), Temperature-Dependent Space-Charge-Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
The outcomes of this thesis together draw a complex picture of the multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of a thick NFA device through adjustment of the amount of solvent additive in the active blend solution. The thesis also highlights potentially critical gaps in the current experimental understanding of fundamental charge interaction and recombination dynamics.
Self-efficacy beliefs of pre-service teachers in the context of practical school experiences
(2022)
Self-efficacy beliefs play an important role in teachers' professional behavior in the classroom (Tschannen-Moran et al., 1998) as well as in students' achievement and behavior (Mojavezi & Tamiz, 2012). Teacher self-efficacy beliefs are defined as teachers' conviction that they are able to achieve certain goals in a specific situation (Dellinger et al., 2008; Tschannen-Moran & Hoy, 2001). Given the significant role of teachers in the education system and in society, it is important to promote teachers' well-being, productivity, and effectiveness (Kasalak & Dagyar, 2020). Empirical findings underscore the positive effects of teacher self-efficacy beliefs on their well-being (Perera & John, 2020) and on student learning and achievement (Zee & Koomen, 2016). However, there is a lack of empirical research examining the importance of self-efficacy beliefs among pre-service teachers in teacher education (Yurekli et al., 2020), especially during practical school-based training phases. Building on the importance of one's own teaching experience, which has been described as mastery experience, i.e. the strongest source of self-efficacy for pre-service teachers (Pfitzner-Eden, 2016b), this dissertation examines practical experience as a source of pre-service teachers' self-efficacy and the change in pre-service teachers' self-efficacy during teacher education. Study 1 therefore focuses on the change in pre-service teachers' self-efficacy during short practical teaching experiences, compared with online teaching without teaching experience.
Due to inconsistent findings on the reciprocal relationships between teachers' self-efficacy beliefs and their teaching behavior (Holzberger et al., 2013; Lazarides et al., 2022), Study 2 examined the relationship between pre-service teachers' self-efficacy and their teaching behavior during teacher training. Since feedback can serve as verbal persuasion and is thus an important source of self-efficacy beliefs that strengthens the sense of competence (Pfitzner-Eden, 2016b), Study 2 focuses on the relationship between the change in pre-service teachers' self-efficacy and the perceived quality of peer feedback in the context of short practical school experiences during teacher training. Furthermore, when examining the change in pre-service teachers' self-efficacy, it is important to investigate individual personality aspects and specific conditions of the learning environment in teacher education (Bach, 2022). Based on the assumption that supporting reflection processes in teacher education (Menon & Azam, 2021) and using innovative learning settings such as VR videos (Nissim & Weissblueth, 2017) promote the development of pre-service teachers' self-efficacy beliefs, Study 3 and Study 4 examine pre-service teachers' reflection processes with regard to their own teaching experiences and the vicarious teaching experiences of others, respectively. Against the background of inconsistent findings and a lack of empirical research on the relationships between pre-service teachers' self-efficacy and various factors concerning the learning environment or personal characteristics, further empirical studies are needed that examine different sources and correlates of pre-service teachers' self-efficacy beliefs during teacher training.
In this context, the present dissertation addresses the question of which individual characteristics and learning environments can promote pre-service teachers' self-efficacy, especially during short practical phases in teacher training. The dissertation concludes with a discussion of the results of the four sub-studies, taking a holistic view of the strengths and weaknesses of each study. Finally, limitations and implications for further research and practice are discussed.
Growth differentiation factor 15 (GDF15) is a stress-induced cytokine secreted into the circulation by a number of tissues under different pathological conditions such as cardiovascular disease, cancer or mitochondrial dysfunction, among others. While GDF15 signaling through its recently identified hindbrain-specific receptor GDNF family receptor alpha-like (GFRAL) has been proposed to be involved in the metabolic stress response, its endocrine role under chronic stress conditions is still poorly understood. Mitochondrial dysfunction is characterized by the impairment of oxidative phosphorylation (OXPHOS), leading to inefficient functioning of mitochondria and consequently, to mitochondrial stress. Importantly, mitochondrial dysfunction is among the pathologies to most robustly induce GDF15 as a cytokine in the circulation.
The overall aim of this thesis was to elucidate the role of the GDF15-GFRAL pathway under mitochondrial stress conditions. For this purpose, a mouse model of skeletal muscle-specific mitochondrial stress achieved by ectopic expression of uncoupling protein 1 (UCP1), the HSA-Ucp1-transgenic (TG) mouse, was employed. As a consequence of mitochondrial stress, TG mice display a metabolic remodeling consisting of a lean phenotype, an improved glucose metabolism, an increased metabolic flexibility and a metabolic activation of white adipose tissue.
Making use of TG mice crossed with whole body Gdf15-knockout (GdKO) and Gfral-knockout (GfKO) mouse models, this thesis demonstrates that skeletal muscle mitochondrial stress induces the integrated stress response (ISR) and GDF15 in skeletal muscle, which is released into the circulation as a myokine (muscle-induced cytokine) in a circadian manner. Further, this work identifies GDF15-GFRAL signaling to be responsible for the systemic metabolic remodeling elicited by mitochondrial stress in TG mice. Moreover, this study reveals a daytime-restricted anorexia induced by the GDF15-GFRAL axis under muscle mitochondrial stress, which is, mechanistically, mediated through the induction of hypothalamic corticotropin releasing hormone (CRH). Finally, this work elucidates a so far unknown physiological outcome of the GDF15-GFRAL pathway: the induction of anxiety-like behavior.
In conclusion, this study uncovers a muscle-brain crosstalk under skeletal muscle mitochondrial stress conditions through the induction of GDF15 as a myokine that signals through the hindbrain-specific GFRAL receptor to elicit a stress response, leading to metabolic remodeling and the modulation of ingestive and anxiety-like behavior.
Flares are magnetically driven explosions that occur in the atmospheres of all main sequence stars that possess an outer convection zone. Flaring activity is rooted in the magnetic dynamo that operates deep in the stellar interior, propagates through all layers of the atmosphere from the corona to the photosphere, and emits electromagnetic radiation from radio bands to X-ray. Eventually, this radiation, and associated eruptions of energetic particles, are ejected out into interplanetary space, where they impact planetary atmospheres, and dominate the space weather environments of young star-planet systems.
Thanks to the Kepler and the Transit Exoplanet Survey Satellite (TESS) missions, flare observations have become accessible for millions of stars and star-planet systems. The goal of this thesis is to use these flares as multifaceted messengers to understand stellar magnetism across the main sequence, investigate planetary habitability, and explore how close-in planets can affect the host star.
Using space-based observations obtained by the Kepler/K2 mission, I found that flaring activity declines with stellar age, but this decline crucially depends on stellar mass and rotation. I calibrated the age of the stars in my sample using their membership in open clusters spanning zero-age main sequence to solar age. This allowed me to reveal the rapid transition from an active, saturated flaring state to a more quiescent, inactive flaring behavior in early M dwarfs at about 600-800 Myr. This result is an important observational constraint on stellar activity evolution that I was able to de-bias using open clusters as an activity-independent age indicator.
The TESS mission quickly superseded Kepler and K2 as the main source of flares in low-mass M dwarfs. Using TESS 2-minute cadence light curves, I developed a new technique for flare localization and discovered, contrary to the commonly held belief, that flares do not occur uniformly across the stellar surface: in fast-rotating, fully convective stars, giant flares are preferentially located at high latitudes. This bears implications both for our understanding of magnetic field emergence in these stars and for the impact on exoplanet atmospheres: a planet that orbits in the equatorial plane of its host may be spared from the destructive effects of these poleward-emitting flares.
AU Mic is an early M dwarf, and the most actively flaring planet host detected to date. Its innermost companion, AU Mic b, is one of the most promising targets for a first observation of flaring star-planet interactions. In these interactions, the planet influences the star, as opposed to space weather, where the planet is always on the receiving side. The effect reflects the properties of the magnetosphere shared by planet and star, as well as the so far inaccessible magnetic properties of planets. In the roughly 50 days of TESS monitoring data of AU Mic, I searched for statistically robust signs of flaring interactions with AU Mic b, i.e. flares that occur in surplus of the star's intrinsic activity. I found the strongest, yet still marginal, signal in recurring excess flaring in phase with the orbital period of AU Mic b. If it reflects a true signal, I estimate that extending the observing time by a factor of 2-3 will yield a statistically significant detection. Well within the reach of future TESS observations, this additional data may bring us closer to robustly detecting this effect than we have ever been.
This thesis demonstrates the immense scientific value of space-based, long-baseline flare monitoring, and the versatility of flares as carriers of information about the magnetism of star-planet systems. Many discoveries still lie in wait in the vast archives that Kepler and TESS have produced over the years. Flares are intense spotlights into the magnetic structures of star-planet systems that are otherwise far below our resolution limits. The ongoing TESS mission, and soon PLATO, will further open the door to an in-depth understanding of small-scale, dynamic magnetic fields on low-mass stars, and the space weather environment they shape.
This study explores the identity of the Bene Israel caste from India and its assimilation into Israeli society. The large immigration from India to Israel started in the early 1950s and continued until the early 1970s. Initially, these immigrants struggled hard as they faced many problems such as the language barrier, cultural differences, a new climate, geographical isolation, and racial discrimination. This analysis focuses on the three major aspects of the integration process involving the Bene Israel: economic, socio-cultural and political. The study covers the period from the early fifties to the present.
I will focus on the identity of the Bene Israel as it has evolved after their immigration to Israel: from a lifestyle and customs shaped by their Hindu and Muslim surroundings, they integrated into the Jewish life of Israel. Despite its ethnographic nature, this study has theological implications, as it describes an encounter between Jewish monotheism and Indian polytheism.
All the western scholars who researched the Bene Israel community were compelled to rely on information provided by community members themselves, as no written historical evidence recorded Bene Israel culture and origin. Only from the nineteenth century onwards, after the intrusion of western Jewish missionaries, were Jewish books translated into Marathi. Missionary activities among the Bene Israel served as a catalyst for the Bene Israel themselves to investigate their historical past. Haeem Samuel Kehimkar (1830-1908), a Bene Israel teacher, wrote notes on the history of the Bene Israel in India in Marathi in 1897. Brenda Ness wrote in her dissertation:
The results [of the missionary activities] are several works about the community in English and Marathi by Bene-Israel authors which have appeared during the last century. These are, for the most part, not documented; they consist of much theorizing on accepted tradition and tend to be apologetic in nature.
There can be no philosophical explanation or rational justification for an entire community to leave their motherland India and enter into a process of annihilation of its own free will. I see this as a social and cultural suicide. In craving a better future in Israel, the Indian Bene Israel community pays an enormously heavy price as a people today discarded by the East and disowned by the West: because they chose to become something that they never were and never could be. As it is written, "know where you came from, and where you are going." A community with an ancient history and a spiritual culture has completely lost its identity and self-esteem.
In concluding this dissertation, I realize the dilemma with which I have confronted the members of the Bene Israel community, which I have reviewed after strenuous and constant self-examination. I chose to trace the younger generations' diverging urges toward acceptance, and wish to clarify my intricate analysis of this controversial community. The complexity of living in a Jewish state, where citizens cannot fulfill basic desires such as matrimony, forced an entire community to conceal their true identity and perjure themselves to blend in, for the sake of national integration. Although scholars accepted their new claims, the skepticism of the rabbinate authorities prevails, and they refuse to marry them to this day, suspecting they are an Indian caste.
Knowledge graphs are structured repositories of knowledge that store facts about the general world or a particular domain in terms of entities and their relationships. Owing to the heterogeneity of use cases that are served by them, there arises a need for the automated construction of domain-specific knowledge graphs from texts. While there have been many research efforts towards open information extraction for automated knowledge graph construction, these techniques do not perform well in domain-specific settings. Furthermore, regardless of whether they are constructed automatically from specific texts or based on real-world facts that are constantly evolving, all knowledge graphs inherently suffer from incompleteness as well as errors in the information they hold.
This thesis investigates the challenges encountered during knowledge graph construction and proposes techniques for their curation (a.k.a. refinement), including the correction of semantic ambiguities and the completion of missing facts. Firstly, we leverage existing approaches for the automatic construction of a knowledge graph in the art domain with open information extraction techniques and analyse their limitations. In particular, we focus on the challenging task of named entity recognition for artwork titles and show empirical evidence of performance improvement with our proposed solution for the generation of annotated training data.
Towards the curation of existing knowledge graphs, we identify the issue of polysemous relations that represent different semantics based on the context. Having concrete semantics for relations is important for downstream applications (e.g. question answering) that are supported by knowledge graphs. Therefore, we define the novel task of finding fine-grained relation semantics in knowledge graphs and propose FineGReS, a data-driven technique that discovers potential sub-relations with fine-grained meaning from existing polysemous relations. We leverage knowledge representation learning methods that generate low-dimensional vectors (or embeddings) for knowledge graphs to capture their semantics and structure. The efficacy and utility of the proposed technique are demonstrated by comparing it with several baselines on the entity classification use case.
Further, we explore the semantic representations in knowledge graph embedding models. In the past decade, these models have shown state-of-the-art results for the task of link prediction in the context of knowledge graph completion. In view of the popularity and widespread application of the embedding techniques not only for link prediction but also for different semantic tasks, this thesis presents a critical analysis of the embeddings by quantitatively measuring their semantic capabilities. We investigate and discuss the reasons for the shortcomings of embeddings in terms of the characteristics of the underlying knowledge graph datasets and the training techniques used by popular models.
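To make concrete what link prediction with embeddings means, the following is a minimal sketch of translational (TransE-style) triple scoring, one common family of embedding models, rather than the specific models analysed in this thesis; all entities, relations, and vector values are toy assumptions:

```python
import math

def transe_score(head, relation, tail):
    """TransE plausibility score: negative L2 norm of (h + r - t).
    Scores closer to 0 mean the triple is more plausible."""
    diff = [h + r - t for h, r, t in zip(head, relation, tail)]
    return -math.sqrt(sum(d * d for d in diff))

# Toy 3-dimensional embeddings (illustrative values only):
berlin = [0.9, 0.1, 0.0]
capital_of = [0.0, 0.4, 0.5]
germany = [0.9, 0.5, 0.5]
france = [0.1, 0.6, 0.8]

# A plausible triple scores higher than an implausible one:
print(transe_score(berlin, capital_of, germany))  # near 0
print(transe_score(berlin, capital_of, france))   # more negative
```

Link prediction then amounts to ranking all candidate tails for a query (head, relation, ?) by this score; a critical analysis of semantic capabilities asks whether such geometrically trained scores also respect the types and constraints of the underlying data.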
Following up on this, we propose ReasonKGE, a novel method for generating semantically enriched knowledge graph embeddings by taking into account the semantics of the facts that are encapsulated by an ontology accompanying the knowledge graph. With a targeted, reasoning-based method for generating negative samples during the training of the models, ReasonKGE is able to not only enhance the link prediction performance, but also reduce the number of semantically inconsistent predictions made by the resultant embeddings, thus improving the quality of knowledge graphs.
It is estimated that data scientists spend up to 80% of their time exploring, cleaning, and transforming their data. A major reason for that expenditure is the lack of knowledge about the data used, which often come from different sources and have heterogeneous structures. As a means to describe various properties of data, metadata can help data scientists understand and prepare their data, saving time for innovative and valuable data analytics. However, metadata do not always exist: some data file formats are not capable of storing them; metadata may have been deleted over privacy concerns; legacy data may have been produced by systems that were not designed to store and handle metadata. As data are being produced at an unprecedentedly fast pace and stored in diverse formats, manually creating metadata is not only impractical but also error-prone, demanding automatic approaches for metadata detection.
In this thesis, we focus on detecting metadata in CSV files – a type of plain-text file that, similar to spreadsheets, may contain different types of content at arbitrary positions. We propose a taxonomy of metadata in CSV files and specifically address the discovery of three types of metadata: line and cell types, aggregations, and primary keys and foreign keys.
Data are organized in an ad-hoc manner in CSV files and do not follow the fixed structure that common data processing tools assume. Detecting the structure of such files is a prerequisite for extracting information from them, which can be addressed by detecting the semantic type, such as header, data, derived, or footnote, of each line or each cell. We propose the supervised-learning approach Strudel to detect the type of lines and cells. CSV files may also include aggregations. An aggregation represents the arithmetic relationship between a numeric cell and a set of other numeric cells. Our proposed AggreCol algorithm is capable of detecting aggregations of five arithmetic functions in CSV files. Note that stylistic features, such as font style and cell background color, do not exist in CSV files. Our proposed algorithms address the respective problems by using only content, contextual, and computational features.
Storing a relational table is another common usage of CSV files. Primary keys and foreign keys are important metadata for relational databases, but they are usually not present for database instances dumped as plain-text files. We propose the HoPF algorithm to holistically detect both constraints in relational databases. Our approach is capable of distinguishing true primary and foreign keys from the great number of spurious unique column combinations and inclusion dependencies that state-of-the-art data profiling algorithms detect.
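HoPF scores key candidates holistically across all tables; the sketch below (hypothetical code, not the published algorithm) only shows the two raw ingredients it starts from, which a data profiler would supply:

```python
def column_values(table, name):
    i = table["columns"].index(name)
    return [row[i] for row in table["rows"]]

def unique_columns(table):
    """Primary-key candidates: columns whose values are all distinct."""
    return [c for c in table["columns"]
            if len(set(column_values(table, c))) == len(table["rows"])]

def inclusion_dependencies(child, parent):
    """Foreign-key candidates: child columns contained in a parent column."""
    return [(c, p) for c in child["columns"] for p in parent["columns"]
            if set(column_values(child, c)) <= set(column_values(parent, p))]

orders = {"columns": ["order_id", "cust_id"],
          "rows": [(1, "a"), (2, "a"), (3, "b")]}
customers = {"columns": ["cust_id", "city"],
             "rows": [("a", "Berlin"), ("b", "Potsdam")]}
assert unique_columns(customers) == ["cust_id", "city"]
assert ("cust_id", "cust_id") in inclusion_dependencies(orders, customers)
```

The toy output already shows the core problem: `city` is unique here purely by accident, i.e. a spurious primary-key candidate, and it is exactly this kind of candidate that the holistic scoring in HoPF is designed to filter out.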
The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which amplifies climate warming.
An assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is therefore urgently needed. My research revolves around the question “how does permafrost thaw affect its OM storage?” More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of the OM of frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that the biomarker proxies are useful to assess the source of the OM and to distinguish between OM derived from terrestrial higher plants, aquatic plants and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. The greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had established in the recently thawed sediments, but not yet in the still-frozen sediments.
My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of and processes in deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.
The ongoing climate change is altering the living conditions for many organisms on this planet at an unprecedented pace. Hence, it is crucial for the survival of species to adapt to these changing conditions. In this dissertation, Silene vulgaris is used as a model organism to understand the adaptation strategies of widely distributed plant species to the current climate change. Especially plant species that possess a wide geographic range are expected to have a high phenotypic plasticity or to show genetic differentiation in response to the different climate conditions they grow in. However, they are often underrepresented in research.
In the greenhouse experiment presented in this thesis, I examined the phenotypic responses and plasticity in S. vulgaris to estimate its adaptation potential. Seeds from 25 wild European populations were collected along a latitudinal gradient and grown in a greenhouse under three different precipitation (65 mm, 75 mm, 90 mm) and two different temperature regimes (18°C, 21°C) that resembled a possible climate change scenario for central Europe. Afterwards, different biomass and fecundity-related plant traits were measured.
The treatments significantly influenced the plants but did not reveal a latitudinal difference in response to climate treatments for most plant traits. The number of flowers per individual, however, showed a stronger plasticity in northern European populations (e.g., Swedish populations), where numbers decreased more drastically with increased temperature and decreased precipitation.
To gain an even deeper understanding of the adaptation of S. vulgaris to climate change, it is also important to reveal the underlying phylogeny of the sampled populations. Therefore, I analysed their population genetic structure through double-digest restriction-site-associated DNA sequencing (ddRAD).
The sequencing revealed three major genetic clusters in the S. vulgaris populations sampled in Europe: one comprising Southern European populations, one Western European populations, and one Central European populations. A subsequent analysis of experimental trait responses among the clusters to the climate-change scenario showed that the genetic clusters differed significantly in biomass-related traits and in the days to flowering. However, half of the traits showed parallel response patterns to the experimental climate-change scenario.
In addition to the potential geographic and genetic adaptation differences to climate change, this dissertation also deals with the response differences between the sexes in S. vulgaris. As a gynodioecious species, S. vulgaris has populations consisting of female and hermaphrodite individuals, and the sexes can differ in their morphological traits, which is known as sexual dimorphism. As climate change is becoming an important factor influencing plant morphology, it remains unclear if and how the sexes of sexually dimorphic species may respond differently. To examine this question, the sex of each individual plant was determined during the greenhouse experiment and the measured plant traits were analysed accordingly. In general, hermaphrodites had a higher number of flowers but a lower number of leaves than females. With regard to the climate-change treatment, I found that hermaphrodites showed a milder negative response to higher temperatures in the number of flowers produced and in specific leaf area (SLA) compared to females.
Synthesis – The significant treatment response in Silene vulgaris, independent of population origin for most traits, suggests a high degree of universal phenotypic plasticity. The three European intraspecific genetic lineages detected also showed comparable parallel response patterns in half of the traits, again suggesting considerable phenotypic plasticity. Hence, plasticity might represent a possible adaptation strategy of this widely distributed species during ongoing and future climatic changes. The results on sexual dimorphism show that females and hermaphrodites differ mainly in their number of flowers and that females are affected more strongly by the experimental climate-change scenario. These results provide a solid knowledge base on sexual dimorphism in S. vulgaris under climate change, but further research is needed to determine the long-term impact on the breeding system of the species.
In summary, this dissertation provides a comprehensive insight into the adaptation mechanisms and consequences of a widely distributed, gynodioecious plant species and advances our understanding of the impact of anthropogenic climate change on plants.
Functional traits determine biomass dynamics, coexistence and energetics in plankton food webs
(2022)
Plankton food webs are the basis of marine and limnetic ecosystems. Aquatic ecosystems of high biodiversity in particular provide important ecosystem services for humankind: food, coastal protection, climate regulation, and tourism. Understanding the dynamics of biomass and coexistence in these food webs is a first step towards understanding the ecosystems. It also lays the foundation for the development of management strategies to maintain marine and freshwater biodiversity despite anthropogenic influences.
Natural food webs are highly complex, and thus often equally complex methods are needed to analyse and understand them well. Models can help to do so as they depict simplified parts of reality. In the attempt to get a broader understanding of the complex food webs, diverse methods are used to investigate different questions.
In my first project, we compared the energetics of a food chain in two versions of an allometric trophic network model. In particular, we solved the problem of unrealistically high trophic transfer efficiencies (up to 70%) by accounting for both basal respiration and activity respiration, which decreased the trophic transfer efficiency to realistic values of ≤30%. Next, in my second project, I turned to plankton food webs and especially phytoplankton traits. Investigating a long-term data set from Lake Constance, we found evidence for a trade-off between defence and growth rate in this natural phytoplankton community. I continued working with this data set in my third project, focusing on ciliates, the main grazers of phytoplankton in spring. Boosted regression trees revealed that temperature and predators have the highest influence on the net growth rates of ciliates. In my fourth project, we finally investigated a food web model inspired by ciliates to explore the coexistence of plastic competitors and to study the new concept of maladaptive switching, which revealed some drawbacks of plasticity: faster adaptation led to more maladaptive switching towards undefended phenotypes, which reduced autotroph biomass and coexistence and increased consumer biomass.
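The respiration argument from the first project can be reproduced with back-of-the-envelope numbers (illustrative values only, not the model's actual parameters):

```python
# Trophic transfer efficiency (TTE) = production / ingestion. Charging only
# basal respiration against the assimilated fraction leaves TTE
# unrealistically high; additionally subtracting activity respiration
# (a fraction of assimilation) brings it into the observed <= 30% range.
def tte(ingestion, assimilation_eff, basal_resp, activity_frac):
    assimilated = ingestion * assimilation_eff
    production = assimilated - basal_resp - activity_frac * assimilated
    return production / ingestion

high = tte(ingestion=1.0, assimilation_eff=0.8, basal_resp=0.1, activity_frac=0.0)
low = tte(ingestion=1.0, assimilation_eff=0.8, basal_resp=0.1, activity_frac=0.5)
assert round(high, 2) == 0.70  # basal respiration only: ~70 %
assert round(low, 2) == 0.30   # basal + activity respiration: ~30 %
```

The assimilation efficiency and the 50% activity fraction are assumptions chosen so the two cases land at the 70% and 30% figures quoted in the abstract; the model itself parameterises these terms allometrically.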
It became obvious that even well-established models should be critically questioned, as it is important not to lose sight of reality on the way to a simplistic model. The results furthermore showed that long-term data sets are necessary, as they can help to disentangle complex natural processes. Lastly, one should keep in mind that the interplay between models and experiments/field data can deliver fruitful insights about our complex world.
River floods are among the most devastating natural hazards worldwide. As their generation is highly dependent on climatic conditions, their magnitude and frequency are projected to be affected by future climate change. Therefore, it is crucial to study the ways in which a changing climate will, and already has, influenced flood generation, and thereby flood hazard. Additionally, it is important to understand how other human influences - specifically altered land cover - affect flood hazard at the catchment scale.
The ways in which flood generation is influenced by climatic and land cover conditions differ substantially in different regions. The spatial variability of these effects needs to be taken into account by using consistent datasets across large scales as well as applying methods that can reflect this heterogeneity. Therefore, in the first study of this cumulative thesis a complex network approach is used to find 10 clusters of similar flood behavior among 4390 catchments in the conterminous United States. By using a consistent set of 31 hydro-climatological and land cover variables, and training a separate Random Forest model for each of the clusters, the regional controls on flood magnitude trends between 1960 and 2010 are detected. It is shown that changes in rainfall are the most important drivers of these trends, while they are regionally controlled by land cover conditions.
While climate change is most commonly associated with flood magnitude trends, it has been shown to also influence flood timing. This can lead to trends in the size of the area across which floods occur simultaneously, the flood synchrony scale. The second study is an analysis of data from 3872 European streamflow gauges and shows that flood synchrony scales have increased in Western Europe and decreased in Eastern Europe. These changes are attributed to changes in flood generation, especially a decreasing relevance of snowmelt. Additionally, the analysis shows that both the absolute values and the trends of flood magnitudes and flood synchrony scales are positively correlated. If these trends persist in the future and are not accounted for, the combined increases of flood magnitudes and flood synchrony scales can exceed the capacities of disaster relief organizations and insurers.
Hazard cascades are an additional way through which climate change can influence different aspects of flood hazard. The 2019/2020 wildfires in Australia, which were preceded by an unprecedented drought and extinguished by extreme rainfall that led to local flooding, present an opportunity to study the effects of multiple preceding hazards on flood hazard. All these hazards are individually affected by climate change, additionally complicating the interactions within the cascade. By estimating and analyzing the burn severity, rainfall magnitude, soil erosion and stream turbidity in differently affected tributaries of the Manning River catchment, the third study shows that even low magnitude floods can pose a substantial hazard within a cascade.
This thesis shows that humanity is affecting flood hazard in multiple ways with spatially and temporally varying consequences, many of which were previously neglected (e.g. flood synchrony scale, hazard cascades). To allow for informed decision making in risk management and climate change adaptation, it will be crucial to study these aspects across the globe and to project their trajectories into the future. The presented methods can depict the complex interactions of different flood drivers and their spatial variability, providing a basis for the assessment of future flood hazard changes. The role of land cover should be considered more in future flood risk modelling and management studies, while holistic, transferable frameworks for hazard cascade assessment will need to be designed.
The Arctic nearshore zone plays a key role in the carbon cycle. Organic-rich sediments get eroded off permafrost affected coastlines and can be directly transferred to the nearshore zone. Permafrost in the Arctic stores a high amount of organic matter and is vulnerable to thermo-erosion, which is expected to increase due to climate change. This will likely result in higher sediment loads in nearshore waters and has the potential to alter local ecosystems by limiting light transmission into the water column, thus limiting primary production to the top-most part of it, and increasing nutrient export from coastal erosion. Greater organic matter input could result in the release of greenhouse gases to the atmosphere. Climate change also acts upon the fluvial system, leading to greater discharge to the nearshore zone. It leads to decreasing sea-ice cover as well, which will both increase wave energy and lengthen the open-water season. Yet, knowledge on these processes and the resulting impact on the nearshore zone is scarce, because access to and instrument deployment in the nearshore zone is challenging.
Remote sensing can alleviate these issues by providing rapid data delivery in otherwise non-accessible areas. However, the waters in the Arctic nearshore zone are optically complex, with multiple influencing factors, such as organic-rich suspended sediments, colored dissolved organic matter (cDOM), and phytoplankton. The goal of this dissertation was to use remotely sensed imagery to monitor processes related to turbidity caused by suspended sediments in the Arctic nearshore zone. In-situ measurements of water-leaving reflectance and surface water turbidity were used to calibrate a semi-empirical algorithm that retrieves turbidity from satellite imagery. Based on this algorithm and ancillary ocean and climate variables, the mechanisms underpinning nearshore turbidity in the Arctic were identified at a resolution not achieved before.
The calibration of the Arctic Nearshore Turbidity Algorithm (ANTA) was based on in-situ measurements from the coastal and inner-shelf waters around Herschel Island Qikiqtaruk (HIQ) in the western Canadian Arctic from the summer seasons 2018 and 2019. It performed better at retrieving turbidity from remotely sensed imagery than existing algorithms developed for global applications. These existing algorithms were lacking validation data from permafrost-affected waters and were thus not able to reflect the complexity of Arctic nearshore waters. The ANTA has a higher sensitivity towards the lowest turbidity values, which is an asset for identifying sediment pathways in the nearshore zone. Its transferability to areas beyond HIQ was successfully demonstrated using turbidity measurements matching satellite image recordings from Adventfjorden, Svalbard. The ANTA is a powerful tool that provides robust turbidity estimations in a variety of Arctic nearshore environments.
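The exact form of the ANTA is given in the thesis; as a hedged illustration of how such a calibration works, semi-empirical turbidity retrievals are often of the Nechad-type form T = A·ρ / (1 − ρ/C), with ρ the water-leaving reflectance. Below, the coefficient A is fitted to in-situ pairs by least squares with C held fixed (all numbers are made up):

```python
# Nechad-type semi-empirical turbidity retrieval (illustrative sketch, not
# the ANTA itself): T = A * rho / (1 - rho / C).
def turbidity(rho, A, C=0.17):
    return A * rho / (1.0 - rho / C)

def calibrate_A(pairs, C=0.17):
    # closed-form least squares for T = A * x, with x = rho / (1 - rho / C)
    xs = [rho / (1.0 - rho / C) for rho, _ in pairs]
    ts = [t for _, t in pairs]
    return sum(x * t for x, t in zip(xs, ts)) / sum(x * x for x in xs)

# hypothetical matchups: (water-leaving reflectance, measured turbidity)
in_situ = [(0.01, 2.0), (0.03, 7.0), (0.05, 13.0)]
A = calibrate_A(in_situ)
assert all(abs(turbidity(r, A) - t) / t < 0.15 for r, t in in_situ)
```

Once calibrated on in-situ matchups, the same function can be applied pixel-wise to satellite reflectance to map turbidity across the nearshore zone.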
Drivers of nearshore turbidity in the Arctic were analyzed by combining ANTA results from the summer season 2019 from HIQ with ocean and climate variables obtained from the weather station at HIQ, the ERA5 reanalysis database, and the Mackenzie River discharge. ERA5 reanalysis data were obtained as domain averages over the Canadian Beaufort Shelf. Nearshore turbidity was linearly correlated with wind speed, significant wave height, and wave period. Interestingly, nearshore turbidity was only correlated with wind speed over the shelf, but not with the in-situ measurements from the weather station at HIQ. This shows that nearshore turbidity, albeit of limited spatial extent, is influenced by weather conditions multiple kilometers away rather than in its direct vicinity. The large influence of wave energy on nearshore turbidity indicates that freshly eroded material off the coast is a major contributor to the nearshore sediment load. This contrasts with results from temperate and tropical oceans, where tides and currents are the major drivers of nearshore turbidity. The Mackenzie River discharge was not identified as a driver of nearshore turbidity in 2019; however, the analysis of 30 years of Landsat archive imagery from 1986 to 2016 suggests a direct link between the prevailing wind direction, which heavily influences the Mackenzie River plume extent, and nearshore turbidity around HIQ. This discrepancy could be caused by the abnormal discharge behavior of the Mackenzie River in 2019.
This dissertation has substantially advanced the understanding of suspended sediment processes in the Arctic nearshore zone and provided new monitoring tools for future studies. The presented results will help to understand the role of the Arctic nearshore zone in the carbon cycle under a changing climate.
Scope: Several studies show that excessive lipid intake can cause hepatic steatosis. To investigate lipotoxicity at the cellular level, palmitate (PA) is often used to strongly increase lipid droplets (LDs). One way to remove LDs is autophagy, although it is controversially discussed whether autophagy itself is also affected by PA. We aimed to investigate whether PA-induced LD accumulation can impair autophagy and whether punicalagin, a natural autophagy inducer from pomegranate, can improve it.
Methods and results: To verify the role of autophagy in LD degradation, HepG2 cells are treated with PA and analyzed for LD and perilipin 2 content in the presence of the autophagy inducer Torin 1 and the inhibitor 3-Methyladenine. PA alone seems to initially induce autophagy-related proteins but impairs autophagic flux in a time-dependent manner (comparing 6 and 24 h of PA). To examine whether punicalagin can prevent autophagy impairment, cells are cotreated for 24 h with PA and punicalagin. The results show that punicalagin preserves the expression of autophagy-related proteins and autophagic flux, while simultaneously decreasing LDs and perilipin 2.
Conclusion: The data provide new insights into the role of PA-induced excessive LD content in autophagy and suggest autophagy-inducing properties of punicalagin, indicating that punicalagin can be a health-beneficial compound for future research on lipotoxicity in the liver.
The availability of commercial 3D printers and matching 3D design software has allowed a wide range of users to create physical prototypes – as long as these objects are not larger than hand size. However, when attempting to create larger, "human-scale" objects, such as furniture, not only are these machines too small, but also the commonly used 3D design software is not equipped to design with forces in mind — since forces increase disproportionately with scale.
In this thesis, we present a series of end-to-end fabrication software systems that support users in creating human-scale objects. They achieve this by providing three main functions that regular "small-scale" 3D printing software does not offer: (1) subdivision of the object into small printable components combined with ready-made objects, (2) editing based on predefined elements sturdy enough for larger scale, i.e., trusses, and (3) functionality for analyzing, detecting, and fixing structural weaknesses. The presented software systems also assist the fabrication process based on either 3D printing or steel welding technology.
The presented systems focus on three levels of engineering challenges: (1) fabricating static load-bearing objects, (2) creating mechanisms that involve motion, such as kinematic installations, and finally (3) designing mechanisms with dynamic repetitive movement where power and energy play an important role.
We demonstrate and verify the versatility of our systems by building and testing human-scale prototypes, ranging from furniture pieces and pavilions to animatronic installations and playground equipment. We have also shared our systems with schools, fablabs, and fabrication enthusiasts, who have successfully created human-scale objects that can withstand human-scale forces.
Elementary particle physics is a contemporary topic in science that is slowly being integrated into high-school education. These new implementations are challenging teachers’ professional knowledge worldwide. Therefore, physics education research is faced with two important questions, namely, how can particle physics be integrated in high-school physics curricula and how best to support teachers in enhancing their professional knowledge on particle physics. This doctoral research project set out to provide better guidelines for answering these two questions by conducting three studies on high-school particle physics education.
First, an expert concept mapping study was conducted to elicit experts’ expectations on what high-school students should learn about particle physics. Overall, 13 experts in particle physics, computing, and physics education participated in 9 concept mapping rounds. The broad knowledge base of the experts ensured that the final expert concept map covers all major particle physics aspects. Specifically, the final expert concept map includes 180 concepts and examples, connected with 266 links and crosslinks. Among them are also several links to students’ prior knowledge in topics such as mechanics and thermodynamics. The high interconnectedness of the concepts shows possible opportunities for including particle physics as a context for other curricular topics. As such, the resulting expert concept map is showcased as a well-suited tool for teachers to scaffold their instructional practice.
Second, a review of 27 high-school physics curricula was conducted. The review uncovered which concepts related to particle physics can be identified in most curricula. Each curriculum was reviewed by two reviewers who followed a codebook with 60 concepts related to particle physics. The analysis showed that most curricula mention cosmology, elementary particles, and charges, all of which are considered theoretical particle physics concepts. None of the experimental particle physics concepts appeared in more than half of the reviewed curricula. Additional analysis was done on two curricular subsets, namely curricula with and curricula without an explicit particle physics chapter. Curricula with an explicit particle physics chapter mention several additional explicit particle physics concepts, namely the Standard Model of particle physics, fundamental interactions, antimatter research, and particle accelerators; the latter is an example of an experimental particle physics concept. Additionally, the analysis revealed that, overall, most curricula include Nature of Science and the history of physics, albeit typically as context or as a teaching tool, respectively.
Third, a Delphi study was conducted to investigate stakeholders’ expectations regarding what teachers should learn in particle physics professional development programmes. Over 100 stakeholders from 41 countries represented four stakeholder groups, namely physics education researchers, research scientists, government representatives, and high-school teachers. The study resulted in a ranked list of the 13 most important topics to be included in particle physics professional development programmes. The highest-ranked topics are cosmology, the Standard Model, and real-life applications of particle physics. All stakeholder groups agreed on the overall ranking of the topics. While the highest-ranked topics are again more theoretical, stakeholders also expect teachers to learn about experimental particle physics topics, which are ranked as medium importance topics.
The three studies addressed two research aims of this doctoral project. The first research aim was to explore to what extent particle physics is featured in high-school physics curricula. The comparison of the outcomes of the curricular review and the expert concept map showed that curricula cover significantly less than what experts expect high-school students to learn about particle physics. For example, most curricula do not include concepts that could be classified as experimental particle physics. However, the strong connections between the different concepts show that experimental particle physics can be used as context for theoretical particle physics concepts, Nature of Science, and other curricular topics. In doing so, particle physics can be introduced in classrooms even though it is not (yet) explicitly mentioned in the respective curriculum.
The second research aim was to identify which aspects of content knowledge teachers are expected to learn about particle physics. The comparison of the Delphi study results to the outcomes of the curricular review and the expert concept map showed that stakeholders generally expect teachers to enhance their school knowledge as defined by the curricula. Furthermore, teachers are also expected to enhance their deeper school knowledge by learning how to connect concepts from their school knowledge to other concepts in particle physics and beyond. As such, professional development programmes that focus on enhancing teachers’ school knowledge and deeper school knowledge best support teachers in building relevant context in their instruction.
Overall, this doctoral research project reviewed the current state of high-school particle physics education and provided guidelines for future enhancements of the particle physics content in high-school student and teacher education. The outcomes of the project support further implementations of particle physics in high-school education both as explicit content and as context for other curricular topics. Furthermore, the mixed-methods approach and the outcomes of this research project lead to several implications for professional development programmes and science education research, which are discussed in the final chapters of this dissertation.
Global heat adaptation among urban populations and its evolution under different climate futures
(2022)
Heat and increasing ambient temperatures under climate change represent a serious threat to human health in cities. Heat exposure has been studied extensively at a global scale. Studies comparing a defined temperature threshold with the future daytime temperature during a certain period of time have concluded an increase in threat to human health. Such findings, however, do not explicitly account for possible changes in future human heat adaptation and might even overestimate heat exposure. Thus, heat adaptation and its development are still unclear. Human heat adaptation refers to the local temperature to which populations are adjusted. It can be inferred from the lowest point of the U- or V-shaped heat-mortality relationship (HMR), the Minimum Mortality Temperature (MMT). While epidemiological studies inform on the MMT at the city scale for case studies, a general model applicable at the global scale to infer temporal change in MMTs had not yet been realised. The conventional approach depends on data availability, their robustness, and on access to daily mortality records at the city scale. A thorough analysis, however, must account for future changes in the MMT, as heat adaptation happens partially passively. Human heat adaptation consists of two aspects: (1) the intensity of the heat hazard that is still tolerated by human populations, meaning the heat burden they can bear, and (2) the wealth-induced technological, social and behavioural measures that can be employed to avoid heat exposure. The objective of this thesis is to investigate and quantify human heat adaptation among urban populations at a global scale under the current climate and to project future adaptation under climate change until the end of the century. To date, this has not yet been accomplished. The evaluation of global heat adaptation among urban populations and its evolution under climate change comprises three levels of analysis.
First, using the example of Germany, the MMT is calculated at the city level by applying the conventional method. Second, this thesis compiles a data pool of 400 urban MMTs to develop and train a new model capable of estimating MMTs on the basis of physical and socio-economic city characteristics using multivariate non-linear regression. The MMT is successfully described as a function of the current climate, the topography and the socio-economic standard, independently of daily mortality data, for cities around the world. The city-specific MMT estimates represent a measure of human heat adaptation among the urban population. In a final, third analysis, the model used to derive human heat adaptation was adjusted to be driven by projected climate and socio-economic variables. This allowed the MMT and its change to be estimated for 3,820 cities worldwide for different combinations of climate trajectories and socio-economic pathways until 2100. Knowledge of the future evolution of heat adaptation is novel, as research had mostly addressed heat exposure and its future development; in this work, changes in heat adaptation and exposure were analysed jointly. The result is a wide range of possible health-related outcomes up to 2100, of which two scenarios with the highest socio-economic development but opposing warming levels were highlighted for comparison. Strong economic growth based on fossil fuel exploitation is associated with a large gain in heat adaptation, but may not compensate for the associated negative health effects of increased heat exposure, caused by severe climate change, in 30% to 40% of the cities investigated.
A slightly weaker but sustainable growth brings moderate gains in heat adaptation but lower heat exposure, with exposure reductions in 80% to 84% of the cities in terms of frequency (number of days exceeding the MMT) and intensity (magnitude of the MMT exceedance) due to milder global warming. Choosing a 2 °C-compatible development by 2100 would therefore lower the risk of heat-related mortality at the end of the century. In summary, this thesis makes diverse and multidisciplinary contributions to a deeper understanding of human adaptation to heat under the current and future climate. It is one of the first studies to carry out a systematic statistical analysis of urban characteristics that serve as MMT drivers in order to establish a generalised model of human heat adaptation applicable at the global level. A broad range of possible heat-related health outcomes for various future scenarios was shown for the first time. This work is relevant for the assessment of heat-health impacts in regions where mortality data are inaccessible or missing. The results are useful for health care planning at the meso and macro level and for urban and climate change adaptation planning. Lastly, beyond meeting the stated objective, this thesis advances research towards a global future impact assessment of heat on human health by providing an alternative method of MMT estimation that is spatially and temporally flexible in its application.
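The two exposure measures defined above (frequency as the number of days exceeding the MMT, intensity as the magnitude of the exceedance) can be illustrated with a minimal sketch; the temperature series and the MMT value below are hypothetical, not data from the thesis:

```python
# Illustrative computation of the two heat-exposure measures defined in
# the abstract: frequency (days exceeding the MMT) and intensity (mean
# magnitude of the exceedance). All values are hypothetical.

def heat_exposure(daily_temps, mmt):
    """Return (frequency, intensity) of MMT exceedance."""
    exceedances = [t - mmt for t in daily_temps if t > mmt]
    frequency = len(exceedances)                    # number of hot days
    intensity = sum(exceedances) / frequency if frequency else 0.0
    return frequency, intensity

# One hypothetical summer week of daily mean temperatures (°C),
# with an assumed city MMT of 22 °C
temps = [18.0, 21.5, 24.0, 26.5, 27.0, 23.0, 19.5]
freq, inten = heat_exposure(temps, mmt=22.0)
# freq -> 4 days above the MMT; inten -> 3.125 °C mean exceedance
```

Comparing these two numbers under different climate trajectories, while letting the MMT itself shift with projected adaptation, is the core of the joint exposure-adaptation analysis described above.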
Weather extremes pose a persistent threat to society on multiple levels. Besides an average of ~37,000 deaths per year, climate-related disasters destroy property and impair economic activity, eroding people's livelihoods and prosperity. As global temperature rises, driven by anthropogenic greenhouse gas emissions, the direct impacts of climatic extreme events increase and will further intensify without proper adaptation measures. Moreover, weather extremes do not only have local direct effects: the resulting economic repercussions can propagate upstream or downstream along trade chains, causing indirect effects. One approach to analyzing these indirect effects within the complex global supply network is the agent-based model Acclimate. Using and extending this loss-propagation model, I focus in this thesis on three aspects of the relation between weather extremes and economic repercussions.
First, extreme weather events cause direct impacts on local economic performance. I compute daily local direct output loss time series for heat stress, river floods, tropical cyclones, and their consecutive occurrence using (near-future) climate projection ensembles. These regional impacts are estimated based on physical drivers and the local distribution of productivity. The direct effects of the aforementioned disaster categories are widely heterogeneous in their regional and temporal distribution, and their intensities change differently under future warming. Focusing on hurricane-impacted capital, I find that long-term growth losses increase with the heterogeneity of a shock ensemble.
Second, repercussions are distributed sectorally and regionally via economic ripples within the trading network, causing higher-order effects. I use Acclimate to identify three phases of these economic ripples. Furthermore, I compute indirect impacts and analyze overall regional and global changes in production and consumption. For heat stress, global consumer losses double while direct output losses increase by a factor of 1.5 between 2000 and 2039. In my research I identify the effect of economic ripple resonance and introduce it to climate impact research. This effect occurs when the economic ripples of consecutive disasters overlap, which amplifies economic responses such as consumption losses. These loss enhancements can be amplified even further by increasing direct output losses, e.g. as caused by the climate crisis.
Transport disruptions can cause economic repercussions as well. To capture them, I extend Acclimate with geographical transportation routes and expand the decision horizon of the economic agents. Using this extension, I show that policy-induced sudden trade restrictions (e.g. a no-deal Brexit) can significantly reduce the longer-term economic prosperity of the affected regions. Analyses of transportation disruptions during typhoon seasons indicate that severely affected regions must reduce production as demand falls during a storm. Substituting suppliers may compensate for fluctuations at the beginning of a storm, but fails for prolonged disruptions.
Third, possible coping mechanisms and adaptation strategies arise from the direct and indirect economic responses to weather extremes. Analyzing annual trade changes due to typhoon-induced transport disruptions shows that overall exports rise; this trade resilience increases with higher diversification of network nodes. Further, my research shows that a basic insurance scheme may diminish hurricane-induced long-term growth losses through faster reconstruction in the aftermath of disasters. I find that insurance coverage could be an economically reasonable coping scheme in the face of the higher losses caused by the climate crisis. Indirect effects of weather extremes within the global economic network point to further adaptation options. For one, diversifying linkages reduces the hazard of sharp price increases. In addition, close economic interconnections with regions that do not share the same extreme weather season can be economically beneficial in the medium run. Furthermore, economic ripple resonance effects should be considered when computing costs. Overall, an increase in local adaptation measures reduces economic ripples within the trade network and possible losses elsewhere. In conclusion, adaptation measures are necessary and the potential is present, but it seems unlikely that all direct or indirect losses can be avoided.
As I show in this thesis, dynamical modeling gives valuable insights into how direct and indirect economic impacts arise from different categories of weather extremes. It also highlights the importance of resolving individual extremes and of reflecting the amplifying effects caused by incomplete recovery and consecutive disasters.
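The indirect effects discussed above arise because a local output shortfall is passed on to trading partners. A heavily simplified sketch of such downstream loss propagation is given below; the constant pass-through rate and the loss figures are hypothetical, and the actual Acclimate model resolves individual agents, prices, and inventories rather than a fixed rate:

```python
# Minimal sketch of downstream loss propagation along a linear supply
# chain, in the spirit of loss-propagation models such as Acclimate.
# The pass-through rate (the fraction of a shortfall a tier cannot
# compensate by substituting suppliers or drawing down inventories)
# and the direct loss are hypothetical illustration values.

def propagate_loss(direct_loss, pass_through, n_tiers):
    """Return the indirect loss arriving at each downstream tier."""
    losses = []
    loss = direct_loss
    for _ in range(n_tiers):
        loss *= pass_through          # each tier absorbs part of the shock
        losses.append(loss)
    return losses

tiers = propagate_loss(direct_loss=100.0, pass_through=0.5, n_tiers=3)
total_indirect = sum(tiers)  # 50.0 + 25.0 + 12.5 = 87.5
```

Even this toy version shows why indirect losses can rival the direct loss itself, and why overlapping shocks (ripple resonance) matter: a second shock arriving before the chain has recovered compounds the remaining shortfalls.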
In Germany, teacher professional development courses constitute a central learning opportunity for teachers' competence development within the third phase of teacher education (Avalos, 2011; Guskey & Yoon, 2009). In this phase, teachers can choose from a range of in-service learning opportunities aimed at adapting and further developing their professional competences. Within these professionalisation measures, teachers have the opportunity to reflect on and further develop their teaching practice. Professional development is therefore also important for the development of teaching quality and for student learning (Lipowsky, 2014).
Research on uptake, however, shows that not all teachers make full use of the professional development programme on offer and that teachers differ in the extent to which they use these in-service learning opportunities (Hoffmann & Richter, 2016). As a consequence, the potential impact of the professional development programme cannot be fully realised. To promote the uptake of teacher professional development, actors at different levels employ various governance instruments. The question of governance options within the third phase of teacher education has, however, remained largely unexplored.
The present work builds on existing research on teacher professional development and adopts the theoretical perspective of Educational Governance to investigate, across four sub-studies, which governance instruments and potentials exist at the different levels of the teacher professional development system and how they are implemented by the various political and school-level actors. It further examines how effective the governance instruments employed are with respect to the uptake of teacher professional development. This overarching question is addressed against the background of a theoretical framework derived for the teacher professional development system in the form of a multi-level model, which serves as the basis for theoretically situating the subsequent empirical studies on professional development uptake and the effectiveness of different governance instruments.
Against this background, Study I focuses on the level of political actors and asks how important the statutory professional development obligation is for teachers' participation in professional development. It examines relationships between teachers' participation and their affiliation with federal states with and without a concrete professional development obligation, as well as with federal states with and without a requirement to document completed courses. Data from the IQB-Ländervergleich 2011 and 2012 and the IQB-Bildungstrend 2015 were analysed using logistic and linear regression models.
Studies II and III address the framework conditions for school-internal professional development. Study II first examines school-type-specific differences in the choice of professional development topics. Study III investigates the school-internal professional development programme with respect to the extent of its use and the relationship between school characteristics and the use of different topics. In addition, the two programme formats are compared with respect to their respective shares of thematic professional development events. For this purpose, data from the professional development database of the federal state of Brandenburg were analysed.
Beyond examining participation in relation to administrative requirements and the use of the school-internal programme at the school level, Study IV addresses the overarching research question by investigating the use of professionalisation measures within school-based human resource development. This qualitative study provides deeper insight into school practice, complementing the findings of the quantitative Studies I to III. In a qualitative interview study, it examines how principals of award-winning schools understand human resource development, which sources of information they draw on, which measures they use, and, in this sense, how they employ human resource development as an instrument of organisational development.
The final chapter of this work discusses the central findings of the studies. Overall, the results suggest that actors at the respective levels employ direct and indirect governance instruments with the aim of increasing the uptake of the available programme, but that these instruments do not achieve the desired governance effect. Since they are linked neither to professional sanctions nor to incentives, the existing governance instruments lack enforcement power. Moreover, the actors involved do not exhaust the repertoire of possible governance instruments. The results of this work thus provide a basis for follow-up research and offer impulses for the practice of the professional development system and for education policy.
This cumulative doctoral thesis consists of three empirical studies that examine the role of top-level executives in shaping adverse financial reporting outcomes and other forms of corporate misconduct. The first study examines CEO effects on a wide range of offenses. Using data from enforcement actions by more than 50 U.S. federal agencies, regression results show CEO effects on the likelihood, frequency, and severity of corporate misconduct. The findings hold for financial, labor-related, and environmental offenses; however, CEO effects are more pronounced for non-financial misconduct. Further results show a positive relation between CEO ability and non-financial misconduct, but no relation with financial misconduct, suggesting that higher CEO ability can have adverse consequences for employee welfare, society, and public health. The second study focuses on CEO and CFO effects on financial misreporting. Using data on restatements and public enforcement actions, regression results show that the incremental effect of CFOs is economically larger than that of CEOs. This greater economic impact of CFOs is particularly pronounced for fraudulent misreporting. The findings remain consistent across different samples, methods, misreporting measures, and specification choices for the underlying conceptual mechanism, highlighting the important role of the CFO as a key player in the beyond-GAAP setting. The third study reexamines the relation between equity incentives and different reporting outcomes. The literature review reveals large variation in the empirical measures used for firm size as a standard control variable, for equity incentives as the key explanatory variables, and for the reporting outcome of interest. Regression results show that these design choices have a direct bearing on empirical results, with changes in t-statistics that often exceed typical thresholds for statistical significance.
The findings hold for aggressive accrual management, earnings management through discretionary accruals, and material misstatements, suggesting that common design choices can have a large impact on whether equity incentive effects are considered significant or not.
Two approaches to the synthesis of prenylated isoflavones were explored: the 2,3-oxidative rearrangement/cross metathesis approach, using hypervalent iodine reagents as oxidants, and the Suzuki-Miyaura cross-coupling/cross metathesis approach. Three natural prenylated isoflavones, 5-deoxy-3′-prenylbiochanin A (59), erysubin F (61) and 7-methoxyebenosin (64), and the non-natural analogues 7,4′-dimethoxy-8,3′-diprenylisoflavone (126j) and 4′-hydroxy-7-methoxy-8,3′-diprenylisoflavone (128) were synthesized for the first time via the 2,3-oxidative rearrangement/cross metathesis approach, using mono- or diallylated flavanones as key intermediates. The reaction of flavanones with hypervalent iodine reagents afforded isoflavones via a 2,3-oxidative rearrangement and the corresponding flavone isomers via a 2,3-dehydrogenation. This enabled the synthesis of 7,4′-dimethoxy-8-prenylflavone (127g), 7,4′-dimethoxy-8,3′-diprenylflavone (127j), 7,4′-dihydroxy-8,3′-diprenylflavone (129) and 4′-hydroxy-7-methoxy-8,3′-diprenylflavone (130), the non-natural regioisomers of 7-methoxyebenosin, 126j, erysubin F and 128, respectively. Three further natural prenylated isoflavones, 3′-prenylbiochanin A (58), neobavaisoflavone (66) and 7-methoxyneobavaisoflavone (137), were synthesized for the first time using the Suzuki-Miyaura cross-coupling/cross metathesis approach. The structures of 3′-prenylbiochanin A (58) and 5-deoxy-3′-prenylbiochanin A (59) were confirmed by single-crystal X-ray diffraction analysis. The 2,3-oxidative rearrangement approach appears to be limited by the substitution pattern on rings A and B of the flavanone, while the Suzuki-Miyaura cross-coupling approach appears most suitable for the synthesis of simple isoflavones, or of prenylated isoflavones whose prenyl substituents, or the allyl groups serving as essential precursors to the prenyl side chains, can be introduced regioselectively after construction of the isoflavone core.
The chalcone-flavanone hybrids 146, 147 and 148, hybrids of the naturally occurring bioactive flavanones liquiritigenin-7-methyl ether, liquiritigenin and liquiritigenin-4′-methyl ether, respectively, were also synthesized for the first time, using Matsuda-Heck arylation and allylic/benzylic oxidation as key steps.
The intermolecular interactions of 5-deoxy-3′-prenylbiochanin A (59) and its two closely related precursors 106a and 106b were investigated by single-crystal and Hirshfeld surface analyses to understand their different physicochemical properties. The results indicate that the presence of strong intermolecular O-H···O hydrogen bonds and an increase in the number of π-stacking interactions raise the melting point and lower the solubility of isoflavone derivatives, with the strong intermolecular O-H···O hydrogen bonds having a greater effect than the π-stacking interactions.
5-Deoxy-3′-prenylbiochanin A (59), erysubin F (61) and 7,4′-dihydroxy-8,3′-diprenylflavone (129) were tested against three bacterial strains and one fungal pathogen. All three compounds were inactive against Salmonella enterica subsp. enterica (NCTC 13349), Escherichia coli (ATCC 25922), and Candida albicans (ATCC 90028), with MIC values greater than 80.0 μM. The diprenylated isoflavone erysubin F (61) and its flavone isomer 129 showed in vitro activity against methicillin-resistant Staphylococcus aureus (MRSA, ATCC 43300) at MIC values of 15.4 and 20.5 μM, respectively. 5-Deoxy-3′-prenylbiochanin A (59) was inactive against this MRSA strain. Erysubin F (61) and its flavone isomer 129 could serve as lead compounds for the development of new alternative drugs for the treatment of MRSA infections.
Infectious diseases are an increasing threat to biodiversity and human health. Therefore, developing a general understanding of the drivers shaping host-pathogen dynamics is of key importance in both ecological and epidemiological research. Disease dynamics are driven by a variety of interacting processes such as individual host behaviour, spatiotemporal resource availability or pathogen traits like virulence and transmission. External drivers such as global change may modify the system conditions and, thus, the disease dynamics. Despite their importance, many of these drivers are often simplified and aggregated in epidemiological models and the interactions among multiple drivers are neglected.
In my thesis, I investigate disease dynamics using a mechanistic approach that includes both bottom-up effects, from landscape dynamics to individual movement behaviour, and top-down effects, from pathogen virulence to host density and contact rates. To this end, I extended an established spatially explicit individual-based model that stochastically simulates epidemiological and ecological processes, incorporating a dynamic resource landscape that can be shifted away from the timing of host population dynamics (chapter 2). I also added the evolution of pathogen virulence along a theoretical virulence-transmission trade-off (chapter 3). In chapter 2, I focus on bottom-up effects, specifically how a temporal shift of resource availability away from the timing of biological events of host species, as expected under global change, scales up to host-pathogen interactions and disease dynamics. My results show that the formation of temporary disease hotspots in combination with directed individual movement acted as key drivers of pathogen persistence even under highly unfavourable conditions for the host. Even with drivers like global change further increasing the likelihood of unfavourable interactions between host species and their environment, pathogens can continue to persist with their hosts. In chapter 3, I demonstrate that the top-down effect of pathogen-associated mortality on the host population can be mitigated by selection for less virulent pathogen strains when host densities are reduced through mismatches between seasonal resource availability and host life-history events. In chapter 4, I combined parts of both theoretical models into a new model that includes individual host movement decisions and the evolution of pathogen virulence to simulate pathogen outbreaks in realistic landscapes.
I was able to match simulated patterns of pathogen spread to observed patterns from long-term outbreak data of classical swine fever in wild boar in Northern Germany. The observed disease course was best explained by a simulated highly virulent strain, whereas sampling schemes and vaccination campaigns could explain differences in the age distribution of infected hosts. My model helps to understand and disentangle how the combination of individual decision-making and the evolution of virulence can act as important drivers of pathogen spread and persistence.
As I show across the chapters of this thesis, the interplay of both bottom-up and top-down processes is a key driver of disease dynamics in spatially structured host populations, as they ultimately shape host densities and contact rates among moving individuals. My findings are an important step towards a paradigm shift in disease ecology away from simplified assumptions towards the inclusion of mechanisms, such as complex multi-trophic interactions, and their feedbacks on pathogen spread and disease persistence. The mechanisms presented here should be at the core of realistic predictive and preventive epidemiological models.
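The virulence-transmission trade-off invoked in chapters 3 and 4 can be sketched with a commonly assumed saturating trade-off function; the functional form and all parameter values below are illustrative assumptions, not the thesis model:

```python
import math

# Sketch of a theoretical virulence-transmission trade-off, assuming the
# commonly used saturating form beta(alpha) = c * alpha / (alpha + d),
# where alpha is virulence (extra host mortality) and beta transmission.
# Pathogen fitness is proxied by R0 = beta / (mu + alpha) with background
# host mortality mu; for this form the analytic optimum is
# alpha* = sqrt(mu * d). All parameter values are hypothetical.

def r0(alpha, c=3.0, d=1.0, mu=0.25):
    beta = c * alpha / (alpha + d)   # transmission gained through virulence
    return beta / (mu + alpha)       # transmit before the host dies

def optimal_virulence(d=1.0, mu=0.25):
    return math.sqrt(mu * d)         # maximizer of r0 for this trade-off

alpha_star = optimal_virulence()     # -> 0.5 for these parameters
```

Selection pushes strains toward alpha*, so when reduced host densities lower the effective transmission benefit, intermediate or lower virulence can be favoured, which is the kind of top-down mitigation described in chapter 3.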
Extending synchrotron X-ray refraction techniques to the quantitative analysis of metallic materials
(2022)
In this work, two X-ray refraction based imaging methods, namely synchrotron X-ray refraction radiography (SXRR) and synchrotron X-ray refraction computed tomography (SXRCT), are applied to the quantitative analysis of cracks and porosity in metallic materials. SXRR and SXRCT exploit the refraction of X-rays at inner surfaces of the material, e.g., the surfaces of cracks and pores, for image contrast. Both methods are therefore sensitive to smaller defects than their absorption-based counterparts, X-ray radiography and computed tomography, and can detect defects of nanometric size. So far, the methods have been applied to the analysis of ceramic materials and fiber-reinforced plastics. Because of their higher density, the analysis of metallic materials requires higher photon energies to achieve sufficient X-ray transmission. Since the refractive index depends on the photon energy, this causes smaller refraction angles and thus lower image contrast. Here, for the first time, a conclusive study is presented exploring the possibility of applying SXRR and SXRCT to metallic materials. It is shown that both methods can be optimized to overcome the reduced contrast due to smaller refraction angles; the only remaining limitation is then the achievable X-ray transmission, which is common to all X-ray imaging methods. Further, a model for the quantitative analysis of the inner surfaces is presented and verified.
For this purpose, four case studies are conducted, each posing a specific challenge to the imaging task. Case study A investigates cracks in a coupon taken from an aluminum weld seam; it primarily serves to verify the model for quantitative analysis and to prove the sensitivity to sub-resolution features. In case study B, the damage evolution in an aluminum-based particle-reinforced metal-matrix composite is analyzed. Here, the accuracy and repeatability of successive SXRR measurements are investigated, showing that measurement errors of less than 3 % can be achieved. Case study B also marks the first application of SXRR in combination with in-situ tensile loading. Case study C is from the highly topical field of additive manufacturing: porosity in additively manufactured Ti-Al6-V4 is analyzed with a special interest in the pore morphology. A classification scheme based on SXRR measurements is devised which allows binding defects to be distinguished from keyhole pores even if the defects cannot be spatially resolved. In case study D, SXRCT is applied to the analysis of hydrogen-assisted cracking in steel. Due to the high X-ray attenuation of steel, a comparatively high photon energy of 50 keV is required, which causes increased noise and lower contrast in the data compared to the other case studies. However, despite the lower data quality, a quantitative analysis of the occurrence of cracks as a function of hydrogen content and applied mechanical load is possible.
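The contrast penalty at higher photon energies noted above follows from the energy dependence of the refractive-index decrement δ, which scales with the X-ray wavelength squared and hence with 1/E². A short sketch with illustrative energies (the ratio needs no material constants):

```python
# Why higher photon energies reduce refraction contrast: the decrement
# delta of the X-ray refractive index n = 1 - delta scales with the
# wavelength squared (delta ≈ r_e * lambda^2 * rho_e / (2*pi)), hence
# with 1/E^2. Doubling the photon energy quarters delta and with it the
# refraction angles at inner surfaces. Energies below are illustrative.

HC_KEV_NM = 1.2398  # h*c in keV·nm

def wavelength_nm(energy_kev):
    return HC_KEV_NM / energy_kev

def delta_ratio(e1_kev, e2_kev):
    """Ratio delta(e1)/delta(e2) of refractive-index decrements."""
    return (wavelength_nm(e1_kev) / wavelength_nm(e2_kev)) ** 2

ratio = delta_ratio(25.0, 50.0)  # delta is 4x larger at 25 keV than at 50 keV
```

This is why the 50 keV measurements of case study D show the lowest contrast, and why the optimization of both methods for metals matters.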
Understanding the changes that follow UV excitation in thionucleobases is of great importance for the study of light-induced DNA lesions and, in a broader context, for their applications in medicine and biochemistry. Their ultrafast photophysical reactions can alter the chemical structure of DNA, leading to damage to the genetic code, as shown by the increased skin cancer risk observed in patients treated with thiouracil for its immunosuppressant properties.
In this thesis, I present four research papers that result from an investigation of the ultrafast dynamics of 2-thiouracil by means of ultrafast X-ray probing combined with electron spectroscopy. A molecular jet in the gas phase is excited with a UV pulse and then ionized with X-ray radiation from a free-electron laser. The kinetic energy of the emitted electrons is measured in a magnetic bottle spectrometer. The spectra of the measured photo- and Auger electrons are used to derive a picture of the changes in the geometrical and electronic configurations. The results allow us to look at the dynamical processes from a new perspective, thanks to the element and site sensitivity of X-rays. The custom-built URSA-PQ apparatus used in the experiment is described. It was commissioned and used at the FL24 beamline of the FLASH2 FEL, showing an electron kinetic-energy resolution of ∆E/E ~ 40 and a pump-probe timing resolution of 190 fs. X-ray-only photoelectron and Auger spectra of 2-thiouracil are extracted from the data and used as a reference. Photoelectrons following the formation of a 2p core hole are identified, as well as resonant and non-resonant Auger electrons. At the L1 edge, Coster-Kronig decay is observed from the 2s core hole.
The UV-induced changes in the 2p photoline allow the study of the electronic-state dynamics. Using an excited-state chemical shift (ESCS) model, we observe an ultrafast ground-state relaxation within 250 fs. Furthermore, an oscillation with a 250 fs period is observed in the 2p binding energy, showing a coherent population exchange between electronic states. Auger electrons from the 2p core hole are analyzed and used to deduce an ultrafast C-S bond expansion on a sub-100 fs scale. A simple Coulomb model, coupled to quantum-chemical calculations, can be used to infer the geometrical changes in the molecular structure.
It is well-known that individuals with aphasia (IWA) have difficulties understanding sentences that involve non-adjacent dependencies, such as object relative clauses or passives (Caplan, Baker, & Dehaut, 1985; Caramazza & Zurif, 1976). A large body of research supports the view that IWA’s grammatical system is intact, and that comprehension difficulties in aphasia are caused by a processing deficit, such as a delay in lexical access and/or in syntactic structure building (e.g., Burkhardt, Piñango, & Wong, 2003; Caplan, Michaud, & Hufford, 2015; Caplan, Waters, DeDe, Michaud, & Reddy, 2007; Ferrill, Love, Walenski, & Shapiro, 2012; Hanne, Burchert, De Bleser, & Vasishth, 2015; Love, Swinney, Walenski, & Zurif, 2008). The main goal of this dissertation is to computationally investigate the processing sources of comprehension impairments in sentence processing in aphasia.
In this work, prominent theories of processing deficits from the aphasia literature are implemented within two cognitive models of sentence processing: the activation-based model (Lewis & Vasishth, 2005) and the direct-access model (McElree, 2000). These models are two different expressions of the cue-based retrieval theory (Lewis, Vasishth, & Van Dyke, 2006), which posits that sentence processing is the result of a series of iterative retrievals from memory. Both models have been widely used to account for sentence processing in unimpaired populations across multiple languages and linguistic constructions, sometimes interchangeably (Parker, Shvartsman, & Van Dyke, 2017). However, Nicenboim and Vasishth (2018) showed that when both models are implemented in the same framework and fitted to the same data, they yield different results, because they assume different data-generating processes; specifically, the models hold different assumptions regarding retrieval latencies. The second goal of this dissertation is to compare these two models of cue-based retrieval using data from individuals with aphasia and control participants. We seek to answer the following question: which retrieval mechanism is more likely to mediate sentence comprehension?
We model four subsets of existing data: relative clauses in English and German, and control structures and pronoun resolution in German. The online data come from either self-paced listening experiments or visual-world eye-tracking experiments. The offline data come from a complementary sentence-picture matching task performed at the end of the trial in both types of experiments. The two competing models of retrieval are implemented in the Bayesian framework, following Nicenboim and Vasishth (2018). In addition, we present a modified version of the direct-access model that, we argue, is more suitable for individuals with aphasia.
This dissertation presents a systematic approach to implementing and testing verbally stated theories of comprehension deficits in aphasia within cognitive models of sentence processing. The conclusions drawn from this work are that (a) the original direct-access model (as implemented here) cannot account for the full pattern of data from individuals with aphasia, because it cannot account for slow misinterpretations; and (b) an activation-based model of retrieval can account for sentence comprehension deficits in individuals with aphasia by assuming a delay in syntactic structure building and noise in the processing system. The overall pattern of results supports an activation-based mechanism of memory retrieval, in which a combination of processing deficits, namely slow syntax and intermittent deficiencies, causes comprehension difficulties in individuals with aphasia.
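The activation-based retrieval mechanism favoured here ties retrieval latency to chunk activation: in the Lewis & Vasishth (2005) framework, retrieval time is T = F·e^(-A) for activation A and latency factor F. A processing deficit such as slowed processing can then be expressed as a change in F. The sketch below uses hypothetical parameter values purely for illustration:

```python
import math

# Sketch of the ACT-R style retrieval latency assumed by the
# activation-based model of Lewis & Vasishth (2005): T = F * exp(-A),
# where A is the activation of the retrieved chunk and F a latency
# scaling factor. A deficit like "slow syntax" can be expressed as a
# larger F for individuals with aphasia (IWA). Parameter values are
# hypothetical illustration values, not fitted estimates.

def retrieval_time_ms(activation, latency_factor_ms):
    return latency_factor_ms * math.exp(-activation)

control = retrieval_time_ms(activation=1.0, latency_factor_ms=200.0)
iwa     = retrieval_time_ms(activation=1.0, latency_factor_ms=500.0)
slowdown = iwa / control  # same activation, 2.5x slower retrieval
```

Because activation is noisy from trial to trial, the same mechanism also yields the intermittent deficiencies mentioned above: on low-activation trials the wrong chunk may win the retrieval, producing misinterpretations that are slow rather than fast.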
Bio-sourced adsorbing poly(2-oxazoline)s mimicking mussel glue proteins for antifouling applications
(2022)
Nature has developed countless systems for many applications. In maritime environments, several organisms have established extraordinary mechanisms for attaching to surfaces. Over the past years, scientific interest in employing those mechanisms for coatings and long-lasting adhesive materials has grown significantly.
This work describes the synthesis of bio-inspired adsorbing copoly(2-oxazoline)s for surface coatings with protein-repelling effects, mimicking mussel glue proteins. From a set of methoxy-substituted phenyl, benzyl, and cinnamyl acids, 2-oxazoline monomers were synthesized. All synthesized 2-oxazolines were analyzed by FT-IR spectroscopy, NMR spectroscopy, and EI mass spectrometry. With those newly synthesized 2-oxazoline monomers and 2-ethyl-2-oxazoline, kinetic studies of homo- and copolymerization in a microwave reactor were conducted. The success of the polymerization reactions was demonstrated by FT-IR spectroscopy, NMR spectroscopy, MALDI-TOF mass spectrometry, and size exclusion chromatography (SEC). The copolymerization of 2-ethyl-2-oxazoline with a selection of methoxy-substituted 2-oxazolines resulted in water-soluble copolymers. To release the adsorbing catechol and cationic units, the copoly(2-oxazoline)s were modified. The catechol units were (partially) released by a methyl aryl ether cleavage reaction. A subsequent partial acidic hydrolysis of the ethyl unit resulted in mussel glue protein-inspired catechol- and cation-containing copolymers. The modified copolymers were analyzed by NMR spectroscopy, UV-VIS spectroscopy, and SEC. The catechol- and cation-containing copolymers and their precursors were examined by a Quartz Crystal Microbalance with Dissipation (QCM-D) to study their adsorption performance on gold, borosilicate, iron, and polystyrene surfaces. An exemplary study revealed that a catechol- and cation-containing copoly(2-oxazoline)-coated gold surface exhibits strong protein-repelling properties.
The motivation for this work was the question of reliability and robustness of seismic tomography. The problem is that many earth models exist which can describe the underlying ground motion records equally well. Most algorithms for reconstructing earth models provide a solution, but rarely quantify their variability. If there is no way to verify the imaged structures, an interpretation is hardly reliable. The initial idea was to explore the space of equivalent earth models using Bayesian inference. However, it quickly became apparent that the rigorous quantification of tomographic uncertainties could not be accomplished within the scope of a dissertation.
In order to maintain the fundamental concept of statistical inference, less complex problems from the geosciences are treated instead. This dissertation aims to anchor Bayesian inference more deeply in the geosciences and to transfer knowledge from applied mathematics. The underlying idea is to use well-known methods and techniques from statistics to quantify the uncertainties of inverse problems in the geosciences. This work is divided into three parts:
Part I introduces the necessary mathematics and should be understood as a kind of toolbox. With a physical application in mind, this section provides a compact summary of all methods and techniques used. It begins with an introduction to Bayesian inference. Then, as a special case, the focus is on regression with Gaussian processes under linear transformations. The derivation of covariance functions and the approximation of non-linearities are discussed in more detail in dedicated chapters.
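A minimal sketch of the kind of Gaussian-process regression such a toolbox builds on (this is the standard textbook formulation with a squared-exponential covariance and an illustrative test point, not the specific models of the thesis):

```python
import numpy as np

def sq_exp_kernel(x1, x2, length=1.0, variance=1.0):
    """Squared-exponential covariance k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2))."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-3):
    """Posterior mean of a zero-mean GP: K_*x (K_xx + sigma^2 I)^{-1} y."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = sq_exp_kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Interpolating noise-free samples of sin(x):
x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
x_new = np.array([np.pi / 2])
print(gp_posterior_mean(x, y, x_new))  # close to 1.0
```

Linear transformations of a GP (derivatives, integrals, convolutions) remain GPs, which is what makes the "regression under linear transformations" setting tractable: one only has to transform the covariance function accordingly.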
Part II presents two proof-of-concept studies in the field of seismology. The aim is to demonstrate the conceptual application of the introduced methods and techniques at moderate complexity. The traveltime-tomography example applies the approximation of non-linear relationships; the derivation of a covariance function from the wave equation is shown for a damped vibrating string. With these two synthetic applications, a consistent concept for the quantification of modeling uncertainties is developed.
Part III presents the reconstruction of the Earth's archeomagnetic field. This application uses the whole toolbox presented in Part I and is correspondingly complex. The modeling of the past 1000 years is based on real data and reliably quantifies the spatial modeling uncertainties. The statistical model presented is widely used and is under active development.
The three applications mentioned are intentionally kept flexible to allow transferability to similar problems. The entire work focuses on the non-uniqueness of inverse problems in the geosciences. It is intended to be of relevance to those interested in the concepts of Bayesian inference.
In this thesis, I present my contributions to the field of ultrafast molecular spectroscopy. Using the molecule 2-thiouracil as an example, I use ultrashort x-ray pulses from free-electron lasers to study the relaxation dynamics of gas-phase molecular samples. Taking advantage of the element- and site-selectivity typical of x-rays, I investigate the charge flow and geometrical changes in the excited states of 2-thiouracil.
In order to understand the photoinduced dynamics of molecules, knowledge about the ground-state structure and the relaxation after photoexcitation is crucial. Therefore, a part of this thesis covers the electronic ground-state spectroscopy of mainly 2-thiouracil to provide the basis for the time-resolved experiments. Many of the previously published studies on the gas-phase time-resolved dynamics of thionated uracils after UV excitation relied on information from solution-phase spectroscopy to determine the excitation energies. This is not an optimal strategy, as solvents alter the absorption spectrum and, hence, there is no guarantee that liquid-phase spectra resemble gas-phase spectra. Therefore, I measured the UV-absorption spectra of all three thionated uracils to provide a gas-phase reference and, in combination with calculations, we determined the excited states involved in the transitions.
In contrast to the UV absorption, the literature on the x-ray spectroscopy of thionated uracils is sparse. Thus, we measured static photoelectron, Auger-Meitner, and x-ray absorption spectra at the sulfur L edge before, or in parallel with, the time-resolved experiments we performed at FLASH (DESY, Hamburg). In addition, so far unpublished measurements were performed at the synchrotron SOLEIL (France); these are included in this thesis and show the spin-orbit splitting of the S 2p photoline and its satellite, which was not observed at the free-electron laser.
The relaxation of 2-thiouracil has been studied extensively in recent years with ultrafast visible and ultraviolet methods, showing the ultrafast nature of the molecular processes after photoexcitation. Ultrafast spectroscopy probing the core-level electrons provides a complementary approach to common optical ultrafast techniques. The method inherits its local sensitivity from the strongly localised core electrons: core energies and core-valence transitions are strongly affected by local valence charge and geometry changes, and past studies have utilised this sensitivity to investigate the molecular processes underlying the ultrafast dynamics. We have built an apparatus that meets the requirements for performing time-resolved x-ray spectroscopy on molecules in the gas phase. With this apparatus, we performed UV-pump x-ray-probe electron spectroscopy at the S 2p edge of 2-thiouracil using the free-electron laser FLASH2. While the UV triggers the relaxation dynamics, the x-ray probes the single sulfur atom inside the molecule. I implemented photoline self-referencing for the photoelectron spectral analysis. This minimises the spectral jitter of the FEL, which is due to the underlying self-amplified spontaneous emission (SASE) process. With this approach, we were not only able to study dynamical changes in the binding energy of the electrons but also to detect an oscillatory behaviour in the shift of the observed photoline, which we associate with non-adiabatic dynamics involving several electronic states. Moreover, we were able to link the UV-induced shift in binding energy to the local charge flow at the sulfur, which is directly connected to the electronic state. Furthermore, the analysis of the Auger-Meitner electrons shows that energy shifts observed at early stages of the photoinduced relaxation are related to the geometry change in the molecule.
More specifically, the observed increase in kinetic energy of the Auger-Meitner electrons correlates with a previously predicted C=S bond stretch.
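The idea behind photoline self-referencing can be illustrated with a toy simulation: if each FEL shot's spectrum contains both a pumped line and an unpumped reference line, and the SASE photon-energy jitter shifts both by the same amount, then referencing the one to the other cancels the jitter shot by shot. All numbers below (energies, widths, jitter magnitude) are invented for illustration; the actual analysis in the thesis is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(160.0, 170.0, 500)  # binding-energy axis (eV), illustrative

def line(center, width=0.3):
    """A Gaussian photoline on the energy grid."""
    return np.exp(-0.5 * ((E - center) / width) ** 2)

def centroid(counts):
    """Intensity-weighted line position."""
    return np.sum(E * counts) / np.sum(counts)

true_shift = 0.4  # eV, hypothetical UV-induced binding-energy shift
shifts = []
for _ in range(100):
    jitter = rng.normal(0.0, 0.5)                 # shot-to-shot SASE jitter
    unpumped = line(165.0 + jitter)               # reference photoline
    pumped = line(165.0 + true_shift + jitter)    # UV-shifted photoline
    # referencing each shot to its own unpumped line cancels the jitter:
    shifts.append(centroid(pumped) - centroid(unpumped))

print(np.mean(shifts))  # recovers ~0.4 eV despite 0.5 eV jitter
```

The averaged self-referenced shift recovers the pump-induced displacement even when the per-shot jitter exceeds the effect being measured.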
The key to reducing, in a selective manner, the energy required for specific transformations is the employment of a catalyst: a very small molecular platform that decides which type of energy to use. The field of photocatalysis exploits light energy to shape one type of molecule into other, more valuable and useful ones.
However, many challenges arise in this field: for example, the catalysts employed are usually based on metal derivatives, whose abundance is limited and which are expensive and cannot be recycled. Therefore, carbon nitride materials are used in this work to expand the horizons of photocatalysis.
Carbon nitrides are organic materials that can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods and shaped to enable new types of processes.
Indeed, they enabled a new photocatalytic synthetic strategy, the dichloromethylation of enones by a dichloromethyl radical generated in situ from chloroform: a novel route to building blocks for the production of active pharmaceutical compounds.
The ductility of these materials then allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, showing the great potential of such a flexible technology in photocatalysis.
Afterwards, their ability to store charges was exploited in the reduction of organic substrates under dark conditions, yielding new insights into multisite proton-coupled electron transfer processes.
Furthermore, combining carbon nitrides with flavins allowed the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials that may help unveil further scientific discoveries and contribute to a more sustainable future.
Complex networks like the Internet or social networks are fundamental parts of our everyday lives. It is essential to understand their structural properties and how these networks are formed. A game-theoretic approach to network design problems has attracted great interest in the last decades, because many real-world networks are the outcomes of decentralized strategic behavior of independent agents without central coordination. Fabrikant, Luthra, Maneva, Papadimitriou, and Shenker proposed a game-theoretic model aiming to explain the formation of Internet-like networks. In this model, called the Network Creation Game, agents are associated with the nodes of a network. Each agent seeks to maximize her centrality by establishing costly connections to other agents. The model is relatively simple but shows high potential for modeling complex real-world networks. In this thesis, we contribute to the line of research on variants of the Network Creation Game. Inspired by real-world networks, we propose and analyze several novel network creation models. We aim to understand the impact of certain realistic modeling assumptions on the structure of the created networks and on the involved agents' behavior.
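In the sum version of the Network Creation Game, an agent pays a fixed price α per edge she buys, plus the sum of her shortest-path distances to all other agents. A minimal sketch of this cost function, with an illustrative α and a three-node example graph (not an instance analyzed in the thesis):

```python
from collections import deque

def distances(adj, source):
    """BFS shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def agent_cost(adj, bought, agent, alpha):
    """Cost in the sum Network Creation Game:
    alpha * (#edges the agent bought) + sum of distances to all other nodes."""
    dist = distances(adj, agent)
    return alpha * len(bought[agent]) + sum(d for v, d in dist.items() if v != agent)

# Path 0-1-2: agent 0 bought the edge to 1, agent 1 bought the edge to 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
bought = {0: [1], 1: [2], 2: []}
alpha = 2.0
print(agent_cost(adj, bought, 0, alpha))  # 2.0*1 + (1 + 2) = 5.0
```

An equilibrium is a network in which no agent can lower this cost by unilaterally buying, dropping, or swapping her edges; the variants studied in the thesis modify the cost term (robustness, geometry, popularity-dependent prices) while keeping this basic structure.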
The first natural additional objective that we consider is the network’s robustness. We consider a game where the agents seek to maximize their centrality and, at the same time, the stability of the created network against random edge failure.
Our second point of interest is a model that incorporates an underlying geometry. We consider a network creation model where the agents correspond to points in some underlying space and where edge lengths are equal to the distances between the endpoints in that space. The geometric setting captures many physical real-world networks like transport networks and fiber-optic communication networks.
We focus on the formation of social networks and consider two models that incorporate particular realistic behavior observed in real-world networks. In the first model, we embed anti-preferential-attachment link formation: we assume that the cost of a connection is proportional to the popularity of the targeted agent. Our second model is based on the observation that the probability that two people connect is inversely proportional to the length of their shortest chain of mutual acquaintances.
For each of the four models above, we provide a complete game-theoretic analysis. In particular, we focus on distinctive structural properties of the equilibria, the hardness of computing a best response, and the quality of equilibria in comparison to centrally designed, socially optimal networks. We also analyze the game dynamics, i.e., the process of sequential strategic improvements by the agents, and study the convergence to an equilibrium state and its properties.
The aim of this dissertation was to conduct a larger-scale cross-linguistic empirical investigation of similarity-based interference effects in sentence comprehension.
Interference studies can offer valuable insights into the mechanisms that are involved in long-distance dependency completion.
Many studies have investigated similarity-based interference effects, showing that syntactic and semantic information are employed during long-distance dependency formation (e.g., Arnett & Wagers, 2017; Cunnings & Sturt, 2018; Van Dyke, 2007, Van Dyke & Lewis, 2003; Van Dyke & McElree, 2011). Nevertheless, there are some important open questions in the interference literature that are critical to our understanding of the constraints involved in dependency resolution.
The first research question concerns the relative timing of syntactic and semantic interference in online sentence comprehension. Only a few interference studies have investigated this question, and, to date, there is not enough data to draw conclusions regarding their time course (Van Dyke, 2007; Van Dyke & McElree, 2011).
Our first cross-linguistic study explores the relative timing of syntactic and semantic interference in two eye-tracking reading experiments that implement the study design used in Van Dyke (2007). The first experiment tests English sentences. The second, larger-sample experiment investigates the two interference types in German.
Overall, the data suggest that syntactic and semantic interference can arise simultaneously during retrieval.
The second research question concerns a special case of semantic interference: We investigate whether cue-based retrieval interference can be caused by semantically similar items which are not embedded in a syntactic structure.
This second interference study builds on a landmark study by Van Dyke & McElree (2006). The study design used in their study is unique in that it is able to pin down the source of interference as a consequence of cue overload during retrieval, when semantic retrieval cues do not uniquely match the retrieval target. Unlike most other interference studies, this design is able to rule out encoding interference as an alternative explanation. Encoding accounts postulate that it is not cue overload at the retrieval site but the erroneous encoding of similar linguistic items in memory that leads to interference (Lewandowsky et al., 2008; Oberauer & Kliegl, 2006). While Van Dyke & McElree (2006) reported cue-based retrieval interference from sentence-external distractors, the evidence for this effect was weak. A subsequent study did not show interference of this type (Van Dyke et al., 2014). Given these inconclusive findings, further research is necessary to investigate semantic cue-based retrieval interference.
The second study in this dissertation provides a larger-scale cross-linguistic investigation of cue-based retrieval interference from sentence-external items. Three larger-sample eye-tracking studies in English, German, and Russian tested cue-based interference in the online processing of filler-gap dependencies. This study further extends the previous research by investigating interference in each language under varying task demands (Logačev & Vasishth, 2016; Swets et al., 2008).
Overall, we see some very modest support for proactive cue-based retrieval interference in English. Unexpectedly, this was observed only under a low task demand. In German and Russian, there is some evidence against the interference effect. It is possible that interference is attenuated in languages with richer case marking.
In sum, the cross-linguistic experiments on the time course of syntactic and semantic interference from sentence-internal distractors support existing evidence of syntactic and semantic interference during sentence comprehension. Our data further show that both types of interference effects can arise simultaneously. Our cross-linguistic experiments investigating semantic cue-based retrieval interference from sentence-external distractors suggest that this type of interference may arise only in specific linguistic contexts.
Proteins play an essential role in a multitude of processes. Understanding how they function requires elucidating their structure and their binding behavior with other molecules such as proteins, peptides, carbohydrates, or small molecules. In the first part of this work, the wild type and the point mutant N126W of a carbohydrate-binding protein from the thermostable bacterium C. thermocellum were investigated; this protein is part of a complex that can recognize, bind, and degrade carbohydrates such as cellulose. The protein was produced in E. coli and purified by metal-chelate and size-exclusion chromatography. The isotopically labeled proteins were studied by nuclear magnetic resonance (NMR) spectroscopy. H/D exchange experiments revealed easily and poorly accessible sites in the protein for possible ligand interactions. Subsequently, an interaction of both protein variants with cellulose fragments was detected: the fragments interact via intermolecular forces with the side chains of aromatic amino acids and via hydrogen bonds with other residues. Furthermore, the calcium-binding site was analyzed, showing that it is occupied by a calcium ion after protein production; the ion can be removed with the chelating agent EDTA and reversibly re-bound. Finally, two approaches (grafting from and grafting to) were used to couple the protein to a temperature-responsive polymer (poly(N-isopropylacrylamide)) in order to influence properties such as solubility or stability. While the grafting-from approach (the polymer grows directly from the protein) led to partial unfolding and destabilization of the protein, in the grafting-to approach (the polymer is synthesized separately and then coupled to the protein) the protein retained its stability, although only a few polymer chains were attached.
The second part of this work addressed the interaction of two LIM domains of the protein paxillin with the cytoplasmic domains of the peptides integrin-β1 and integrin-β3. These play an important role in cell migration, interacting with numerous other proteins to form focal adhesions (multiprotein complexes). During production of the integrin-β3 peptide, size-exclusion chromatography and mass spectrometry revealed degradation in which various amino-acid groups were cleaved off; this could be prevented by adding the serine-protease inhibitor AEBSF. The direct interaction of the proteins was then investigated by NMR. Integrin-β1 and integrin-β3 were found to bind at the same position, namely the flexible loop of the LIM3 domain of paxillin. The dissociation constants showed that integrin-β1 binds paxillin with roughly tenfold higher affinity than integrin-β3. Whereas paxillin's binding site on integrin-β1 lies in the middle of the peptide, for integrin-β3 the C-terminus is essential. The three C-terminal amino acids were therefore removed and binding studies repeated, which showed that the affinity was almost completely abolished. Finally, the flexible loop of the LIM3 domain was mutated to two other amino-acid sequences in order to abolish binding on the paxillin side. However, both circular-dichroism and NMR spectroscopy showed that the mutations led to partial unfolding of the domain, so these mutants could not be identified as suitable candidates for these studies.
One aspect of achieving a more sustainable chemical industry is minimizing the use of solvents and chemicals. Optimization and development of chemical processes for large-scale production is therefore preferably performed in small batches. The critical step in this approach is upscaling from small reaction systems to the large reactors mandatory for cost-efficient production in an industrial environment. Scaling up the bulk volume always goes along with increasing the surface over which the reaction medium is in contact with the confining vessel. Since volume scales with the cube of the linear dimension while surface scales with its square, their ratio is size-dependent, and the influence of the reaction-vessel walls can change reaction performance. A number of phenomena occurring at the surface-liquid interface can affect reaction rates and yields, making it difficult to predict and extrapolate from small production scales to large industrial processes. The application of levitated droplets as containerless reaction vessels provides a promising way to avoid these issues.
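The scaling argument can be made concrete for a spherical vessel, where the surface-to-volume ratio is 3/r. A minimal sketch (the droplet and reactor radii are illustrative, not values from the thesis):

```python
import math

def surface_to_volume(r):
    """Surface-to-volume ratio of a sphere of radius r; analytically 3 / r."""
    surface = 4 * math.pi * r ** 2
    volume = (4 / 3) * math.pi * r ** 3
    return surface / volume

# Scaling a 1 mm droplet up to a 1 m reactor shrinks S/V a thousandfold:
print(surface_to_volume(0.001))  # ~3000 per metre
print(surface_to_volume(1.0))    # ~3 per metre
```

Because S/V shrinks as the vessel grows, wall-mediated effects that dominate a small batch can become negligible, or vice versa, at production scale, which is exactly why extrapolation between scales is unreliable.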
In the presented work, an efficient coupling of acoustically levitated droplets to an ion mobility (IM) spectrometer operating at ambient conditions was designed for real-time monitoring of chemical reactions. The design comprises noncontact sampling and ionization of the droplet, realised by laser desorption/ionization at 2.94 µm. The scope of the work includes fundamental studies of the laser irradiation of droplets confined in an acoustic field. Understanding this phenomenon is crucial to comprehending the temporal and spatial confinement of the generated ion plume, which influences the resolution of the system.
The set-up includes an acoustic trap, laser irradiation, and electrostatic ion-manipulation lenses operating at high voltage at ambient pressure. The complexity of the design needs to be fully considered for effective ion transfer in the interface region between the levitated droplet and the IM spectrometer. For sampling and ionization, two distinct laser pulse lengths, ns and µs, were evaluated. Irradiation with µs laser pulses provides several advantages: i) the droplet volume is not extensively impinged, as in the case of ns laser pulses, allowing only a small volume of the droplet to be sampled; ii) the lower fluence results in less pronounced oscillations of the droplet confined in the acoustic field, so the droplet is not dissipated out of the acoustic field, which would lead to loss of the sample; iii) the mild laser irradiation results in better spatial and temporal confinement of the ion plume, leading to better resolution of the detected ion packets. Finally, this knowledge allows the application of the ion optics necessary to induce ion flow between the droplet suspended in the acoustic field and the IM spectrometer. The ion optics, composed of two electrostatic lenses placed in the near vicinity of the droplet, allow effective focusing of the ion plume and its redirection directly to the IM spectrometer entrance. This novel coupling proved successful for the detection of some simple molecules ionizable at the 2.94 µm wavelength. To further demonstrate the applicability of the system, a proof-of-principle reaction fulfilling the requirements of the system was selected and subjected to a comprehensive investigation of its performance. Herein, the reaction between N-Boc cysteine methyl ester and allyl alcohol was performed in a batch reactor and monitored on-line via 1H NMR to establish the reaction propagation.
With this additional assessment, it was confirmed that the thiol-ene coupling proceeds within the first 20 minutes of irradiation with a reaction yield above 50%, proving that the reaction can serve as a study case to assess the capabilities of the developed system.
Variation in traits permeates and affects all levels of biological organisation, from within individuals to between species. Yet, intraspecific trait variation (ITV) is not sufficiently represented in many ecological theories. Instead, species averages are often assumed. Especially ITV in behaviour has only recently attracted more attention as its pervasiveness and magnitude became evident. The surge in interest in ITV in behaviour was accompanied by a methodological and technological leap in the field of movement ecology. Many aspects of behaviour become visible via movement, allowing us to observe inter-individual differences in fundamental processes such as foraging, mate searching, predation or migration. ITV in movement behaviour may result from within-individual variability and consistent, repeatable among-individual differences. Yet, questions on why such among-individual differences occur in the first place and how they are integrated with life-history have remained open. Furthermore, consequences of ITV, especially of among-individual differences in movement behaviour, on populations and species communities are not sufficiently understood. In my thesis, I approach timely questions on the sources and consequences of ITV, particularly, in movement behaviour. After outlining fundamental concepts and the current state of knowledge, I approach these questions by using agent-based models to integrate concepts from behavioural and movement ecology and to develop novel perspectives.
Modern coexistence theory is a central pillar of community ecology, yet it insufficiently considers ITV in behaviour. In chapter 2, I model a competitive two-species system of ground-dwelling, central-place foragers to investigate the consequences of among-individual differences in movement behaviour for species coexistence. I show that the simulated among-individual differences, which matched empirical data, reduce fitness differences between species, i.e. provide an equalising coexistence mechanism. Furthermore, I explain this result mechanistically and thus resolve an apparent ambiguity in the consequences of ITV for species coexistence described in previous studies.
In chapter 3, I turn the focus to sources of among-individual differences in movement behaviour and their potential integration with life-history. The pace-of-life syndrome (POLS) theory predicts that the covariation between among-individual differences in behaviour and life-history is mediated by a trade-off between early and late reproduction. This theory has generated attention but is also currently scrutinised. In chapter 3, I present a model which supports a recent conceptual development that suggests fluctuating density-dependent selection as a cause of the POLS. Yet, I also identified processes that may alter the association between movement behaviour and life-history across levels of biological organization.
ITV can buffer populations, i.e. reduce their extinction risk. For instance, among-individual differences can mediate portfolio effects or increase evolvability and, thereby, facilitate rapid evolution which can alleviate extinction risk. In chapter 4, I review ITV, environmental heterogeneity, and density-dependent processes which constitute local buffer mechanisms. In the light of habitat isolation, which reduces connectivity between populations, local buffer mechanisms may become more relevant compared to dispersal-related regional buffer mechanisms. In this chapter, I argue that capacities, latencies, and interactions of local buffer mechanisms should motivate more process-based and holistic integration of local buffer mechanisms in theoretical and empirical studies.
Recent perspectives propose to apply principles from movement and community ecology to study filamentous fungi. It is an open question whether and how the arrangement and geometry of microstructures select for certain movement traits, and, thus, facilitate coexistence-stabilising niche partitioning. As a coauthor of chapter 5, I developed an agent-based model of hyphal tips navigating in soil-like microstructures along a gradient of soil porosity. By measuring network properties, we identified changes in the optimal movement behaviours along the gradient. Our findings suggest that the soil architecture facilitates niche partitioning.
The core chapters are framed by a general introduction and discussion. In the general introduction, I outline fundamental concepts of movement ecology and describe theory and open questions on sources and consequences of ITV in movement behaviour. In the general discussion, I consolidate the findings of the core chapters and critically discuss their respective value and, if applicable, their impact. Furthermore, I emphasise promising avenues for further research.
Why do exercises in collaborative governance often witness more impasse than advantage? This cumulative dissertation undertakes a micro-level analysis of collaborative governance to tackle this research puzzle. It situates micropolitics at the very center of analysis: a wide range of activities, interventions, and tactics used by actors – be they conveners, facilitators, or participants – to shape the collaborative exercise. It is by focusing on these daily minutiae, and on the consequences that they bring along, the study argues, that we can better understand why and how collaboration can become stuck or unproductive. To do so, the foundational part of this dissertation (Article 1) uses power as a sensitizing concept to investigate the micro-dynamics that shape collaboration. It develops an analytical approach to advance the study of collaborative governance at the empirical level under a power-sensitive and process-oriented perspective. The subsequent articles follow the dissertation's red thread of investigating the micropolitics of collaborative governance by showing facilitation artefacts' interrelatedness and contribution to the potential success or failure of collaborative arrangements (Article 2); and by examining the specialized knowledge, skills and practices mobilized when designing a collaborative process (Article 3). The work is based on an abductive research approach, tacking back and forth between empirical data and theory, and offers a repertoire of concepts – from analytical terms (designed and emerging interaction orders, flows of power, arenas for power), to facilitation practices (scripting, situating, and supervising) and types of knowledge (process expertise) – to illustrate and study the detailed and constant work (and rework) that surrounds collaborative arrangements. These concepts sharpen the way researchers can look at, observe, and understand collaborative processes at a micro level. 
The thesis thereby elucidates the subtleties of power, which may be overlooked if we focus only on outcomes rather than the processes that engender them, and supports efforts to identify potential sources of impasse.
Macrophages play an integral role in the innate immune system. Finding approaches to modulate their function as the first line of defense is critically important for basic research and therapeutic applications. Transient genetic engineering via delivery of synthetic mRNA can serve such purposes as a robust, reliable and safe technology for modulating macrophage functions. However, a major drawback, particularly in the transfection of sensitive immune cells such as macrophages, is the immunogenicity of exogenous IVT-mRNAs. Consequently, the direct modulation of human macrophage activity by mRNA-mediated genetic engineering was the aim of this work. Synthetic mRNA can instruct macrophages to synthesize specific target proteins, which can steer macrophage activity in a tailored fashion. Thus, the focus of this dissertation was to identify parameters that trigger unwanted immune activation of macrophages and to find approaches to minimize such effects. When comparing different carrier types as well as mRNA chemistries, the latter unequivocally had a more pronounced impact on the activation of human macrophages and monocytes. Exploratory investigations revealed that the choice of nucleoside chemistry, particularly of modified uridine, plays a crucial, dose-dependent role in IVT-mRNA-induced immune activation, whereas the contribution of the various 5’ cap structures tested was only minor. Moreover, to address the technical aspects of delivering multiple genes, as is often mandatory in advanced gene delivery studies, two different payload design strategies were investigated, namely “bicistronic” delivery and “monocistronic” co-delivery.
The side-by-side comparison of mRNA co-delivery via a bicistronic design (two genes, one mRNA) with a monocistronic design (two genes, two mRNAs) unexpectedly revealed that, despite the intrinsic equimolar nature of the bicistronic approach, it was outperformed by the monocistronic approach in terms of reliable co-expression when quantified at the single-cell level. Overall, the incorporation of chemical modifications into IVT-mRNA using the respective building blocks, primarily with the aim of minimizing immune activation as exemplified in this thesis, has the potential to facilitate the selection of the proper mRNA chemistry to address specific biological and clinical challenges. The technological aspects of gene delivery, evaluated and validated by quantitative methods, shed light on crucial process parameters and mRNA design criteria required for reliable co-expression schemes in IVT-mRNA delivery.
Different lake systems may record different elements of climate change, and their responses are diverse and not yet completely understood. Therefore, a comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition provides an exceptional opportunity to investigate distinct natural lake-system responses to abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution in two different lake systems during the Last Glacial-Interglacial transition (~17 to 11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 “Extreme events across temporal and spatial scales”, which investigates extreme events using climate data, paleo-records and model-based simulations.
The three main aims were to (1) establish robust chronologies for both lakes, (2) investigate how major and abrupt climate changes affected the lake systems, and (3) compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions, as well as for comparisons between archives. Addressing the first aim, a new chronology of Lake Gościąż was established by microscopic varve counting and Bayesian age-depth modelling in Bacon for a non-varved section, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating and pollen analysis. The varve chronology extends from the late Allerød to AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż suggested. Varve formation throughout the entire Younger Dryas (YD) even allowed the identification of annually- to decadally-resolved leads and lags in proxy responses at the YD transitions.
The lateglacial chronology of the Dead Sea (DS) was thus far based mainly on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in the lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed; this shifts the AGU into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
The second aim was addressed at Lake Gościąż using microfacies analyses, stable isotopes and temperature reconstructions. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and reduced lake sheltering. Cooling at the YD onset took ~100 years longer than the final warming, and environmental proxies lagged the onset of cooling by ~90 years but responded contemporaneously at the termination of the YD. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually-resolved proxy responses at both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This time interval is marked by a large but fluctuating lake-level drop, and the complete transition into the Holocene is therefore recorded only in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake-level drops, recorded by deposition of the UGU and AGU, were interrupted by a millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Further, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-scale rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally greater instability during deposition of the gypsum units. These results indicate a complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished on the basis of the individual studies above, which jointly provide an integrated picture of how different lakes respond to different climate elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically driven facies changes are more dramatic at the DS than at Lake Gościąż. Further, Lake Gościąż is characterized by continuous varve formation through nearly the complete profile, whereas the DS record is largely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. Lateglacial sedimentation in Lake Gościąż is influenced mainly by westerly winds and to a lesser extent by changes in catchment vegetation, whereas the DS is influenced primarily by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes differ.
In summary, this doctoral thesis presents seasonally resolved records from two lake archives during the Lateglacial (ca. 17-11 ka) to investigate the impact of abrupt climate changes in different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison with Lake Gościąż. This highlights the importance of constructing a robust chronology and provides a first step towards synchronizing the DS with other eastern Mediterranean archives. Further, climate reconstructions from the lake sediments showed variability on different time scales in the different archives, i.e. decadal to millennial fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions in Lake Gościąż. This demonstrates the importance of comparing different lake archives to better understand the regional and local impacts of hemispheric-scale climate variability, and provides an unprecedented example of how different lake systems respond differently, and to different climate elements, during abrupt climate changes. This further highlights the importance of understanding the respective lake system for climate reconstructions.
Stellar interferometry is the only method in observational astronomy for obtaining the highest-resolution images of astronomical targets. It is based on combining light from two or more separate telescopes to obtain the complex visibility, which contains information about the brightness distribution of an astronomical source. Stellar interferometry has made significant contributions to astronomy and astrophysics, including precise measurements of stellar diameters, imaging of stellar surfaces, observations of circumstellar disks around young stellar objects, tests of the predictions of Einstein's general relativity at the Galactic Centre, and the direct search for exoplanets, to name a few. One important related technique is aperture masking interferometry, pioneered in the 1960s, which uses a mask with holes at the re-imaged pupil of the telescope; the light from the holes is combined using the principle of stellar interferometry. While this can increase the resolution, it comes with a disadvantage: due to the finite size of the holes, the majority of the starlight (typically > 80 %) is lost at the mask, limiting the signal-to-noise ratio (SNR) of the output images. This restriction of aperture masking to bright targets can be avoided using pupil remapping interferometry, a technique combining aperture masking interferometry with advances in photonic technologies using single-mode fibers. Owing to their inherent spatial filtering properties, the single-mode fibers can be placed at the focal plane of the re-imaged pupil, allowing the whole pupil of the telescope to be utilized to produce high-dynamic-range as well as high-resolution images. Pupil remapping interferometry is thus one of the most promising applications in the emerging field of astrophotonics.
At the heart of an interferometric facility is a beam combiner, whose primary function is to combine light to obtain high-contrast fringes. A beam combiner can be as simple as a beam splitter or an anamorphic lens combining light from two apertures (or telescopes), or as complex as a cascade of beam splitters and lenses combining light from more than two apertures. With the rise of astrophotonics, however, interferometric facilities across the globe are increasingly employing photonic technologies, using single-mode fibers or integrated optics (IO) chips as an efficient way to combine light from several apertures. The state-of-the-art GRAVITY instrument at the Very Large Telescope Interferometer (VLTI) uses an IO-based beam combiner reaching visibility accuracies better than 0.25 %, roughly 50 times more precise than a few decades ago.
Therefore, in the context of IO-based components for stellar interferometry, this Thesis describes work towards the development of a three-dimensional (3-D) IO device: a monolithic astrophotonic component containing both pupil remappers and a discrete beam combiner (DBC). In this work, the pupil remappers are 3-D single-mode waveguides in a glass substrate that collect light from the re-imaged pupil of the telescope and feed it to a DBC, where the combination takes place. The DBC is a lattice of 3-D single-mode waveguides that interact through evanescent coupling. From the output powers of the single-mode waveguides of the DBC, the visibilities are retrieved using a calibrated transfer matrix ({U}) of the device.
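The principle behind retrieving visibilities from waveguide output powers can be sketched numerically. The following is a minimal illustration, not the thesis's calibrated pipeline: the 4×2 transfer matrix, the two-beam geometry, and all numerical values are assumptions. The idea is that each output power depends linearly on the two beam intensities and on the real and imaginary parts of the mutual coherence, so inverting this "visibility-to-power" relation by least squares recovers the complex visibility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibrated 4x2 complex transfer matrix (in practice, {U} is
# measured for the fabricated device).
U = (rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))) / np.sqrt(2)

def v2pm(U):
    """Visibility-to-power matrix mapping [I1, I2, Re(G), Im(G)] -> powers.

    Follows from P_k = |U_k1 a1 + U_k2 a2|^2 with G = <a1 conj(a2)>.
    """
    W = U[:, 0] * np.conj(U[:, 1])
    return np.column_stack([np.abs(U[:, 0])**2,   # weight of I1
                            np.abs(U[:, 1])**2,   # weight of I2
                            2 * W.real,           # weight of Re(G)
                            -2 * W.imag])         # weight of Im(G)

# Simulate noiseless output powers for two unit-intensity beams with a
# known complex coherence to be recovered.
gamma = 0.6 * np.exp(1j * 0.8)
x_true = np.array([1.0, 1.0, gamma.real, gamma.imag])
P = v2pm(U) @ x_true

# Retrieve intensities and coherence by least squares, then normalize.
x_hat = np.linalg.lstsq(v2pm(U), P, rcond=None)[0]
vis = (x_hat[2] + 1j * x_hat[3]) / np.sqrt(x_hat[0] * x_hat[1])
print(abs(vis), np.angle(vis))  # recovers modulus 0.6 and phase 0.8
```

With noisy measurements the same least-squares inversion applies; more outputs than unknowns then improve the conditioning, which is one motivation for combiners with many output waveguides.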
The feasibility of retrieving visibilities with a DBC had already been studied theoretically and experimentally in the literature, but only in laboratory tests with monochromatic light sources. A part of this work therefore extends these studies by investigating the response of a 4-input DBC to a broad-band light source. Hence, the objectives of this Thesis are the following: 1) to design an IO device for broad-band operation such that accurate and precise visibilities can be retrieved experimentally in the astronomical H-band (1.5-1.65 μm), and 2) to validate the DBC as a possible beam combination scheme for future interferometric facilities through on-sky testing at the William Herschel Telescope (WHT).
This work comprised the design of three different 3-D IO devices. One popular method for fabricating 3-D photonic components in a glass substrate is ultra-fast laser inscription (ULI); manufacturing of the designed devices was therefore outsourced to Politecnico di Milano as part of an iterative fabrication process using their state-of-the-art ULI facility. The devices were then characterized using a 2-beam Michelson interferometric setup, obtaining both monochromatic and polychromatic visibilities. The retrieved visibilities for all devices were in good agreement with the predictions of DBC simulations, confirming both the repeatability of the ULI process and the stability of the Michelson setup, thus fulfilling the first objective.
The best-performing device was then selected for pupil remapping at the WHT using a different optical setup consisting of a deformable mirror and a microlens array. The device successfully collected stellar photons from Vega and Altair. The visibilities were retrieved using a previously calibrated {U} but showed significant deviations from the expected results. Analysis of comparable simulations showed that these deviations were primarily caused by the limited SNR of the stellar observations, thus constituting a first step towards the fulfillment of the second objective.
This cumulative doctoral thesis deals with high-achieving students, who since 2015 have again received more attention in German education policy, for example in the form of support programmes, after the focus had initially shifted towards at-risk groups in the wake of the “PISA shock” of 2000. While higher-achieving students are often identified with the “(highly) gifted” in public perception, this thesis goes beyond traditional giftedness research, which conceives of and studies general intelligence as the basis of student achievement. Instead, it is better located within talent research, which shifts the focus away from general giftedness towards specific predictors and outcomes in individual developmental trajectories. The thesis therefore focuses not on intelligence as potential, but on current school achievement, which takes on a dual role as both the outcome and the starting point of developmental processes in an achievement domain.
The thesis acknowledges the multifaceted nature of the concept of achievement and seeks to create new occasions for discussing this concept and its operationalisation in research. To this end, the first part presents a systematic review of the operationalisation of high achievement (Article I). Factors are identified along which operationalisations can differ, and an overview is given of how studies of high achievers since 2000 can be located along these dimensions. It turns out that clear conventions for defining high academic achievement do not yet exist, with the consequence that results from studies of high-achieving students are only comparable to a limited extent. Building on this, the second part of the thesis, comprising two further articles on the achievement development (Article II) and social integration (Article III) of high-achieving students, pursues the approach of making explicit the variability of results across different operationalisations of high achievement. Among other things, this also facilitates future comparability with other studies. For this purpose, the concept of multiverse analysis (Steegen et al., 2016) is used, in which many parallel specifications, each representing a reasonable operationalisation alternative, are juxtaposed and compared in their effects (Jansen et al., 2021). Multiverse analysis is conceptually linked to the research programme of critical multiplism developed some time ago (Patry, 2013; Shadish, 1986, 1993), but as a specific method it has gained particular importance in the context of the replication crisis in psychology.
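The logic of a multiverse analysis can be illustrated with a short sketch. Everything below is hypothetical: the data are synthetic stand-ins (the articles use large-scale school-achievement studies), and the four operationalisations of "high achievement" are merely plausible examples. The point is the structure: each reasonable operationalisation defines one "universe", the same effect is computed in every universe, and the resulting spread of estimates is reported rather than a single arbitrary specification.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: a test score and a social-integration score
# with a weak positive association built in.
n = 2000
score = rng.normal(size=n)
integration = 0.05 * score + rng.normal(size=n)

# One "universe" per reasonable operationalisation of high achievement.
operationalisations = {
    "top 5% by score":  score >= np.quantile(score, 0.95),
    "top 10% by score": score >= np.quantile(score, 0.90),
    "top 25% by score": score >= np.quantile(score, 0.75),
    "score > 1 SD":     score > 1.0,
}

# Compute the same effect (mean integration of high achievers minus the
# rest) under every operationalisation and collect the specification table.
results = {}
for label, is_high in operationalisations.items():
    effect = integration[is_high].mean() - integration[~is_high].mean()
    results[label] = round(float(effect), 3)

for label, effect in results.items():
    print(f"{label:18s} effect = {effect:+.3f}")
```

Reporting the whole table of effects, instead of one cherry-picked definition, is what makes results comparable across studies that operationalise high achievement differently.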
The thesis draws on secondary analyses of large-scale school achievement studies, which have the advantage that a large number of data points (variables and persons) are available for comparing the effects of different operationalisations.
In terms of content, Articles II and III take up topics that repeatedly surface in the scientific and public debate about high achievers and their public perception. Article II first asks whether high achievers already enjoy a cumulative advantage over their lower-achieving classmates in current regular instruction (Matthew effect). The results show that at academic-track schools (Gymnasien) there is no evidence of widening gaps. On the contrary, the gap between the groups narrowed over the course of secondary school, as learning rates were higher among lower-achieving students. Article III, in turn, concerns the social perception of high-achieving students. Here too, the public debate holds the assumption that higher achievement might come with disadvantages in social integration, which is also reflected in studies on adolescents' gender stereotypes regarding school achievement. Article III again exploits the potential of multiverse analysis, among other things, to describe the variation of this association across operationalisations of high achievement. Across different operationalisations of high achievement and different facets of social integration, the associations between achievement and social integration turn out to be slightly positive overall. Assumptions of differential effects for boys and girls, or for different subjects, are not supported by these analyses.
The dissertation shows that comparing different approaches to operationalising high achievement, applied within a framework of critical multiplism, can deepen the understanding of phenomena and also has the potential to advance theory development.
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight on two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system’s erosional efficiency, which represents many factors like lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and their hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock at a depth of 410 m in the Äspö Hard Rock Laboratory (Sweden).
I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1).
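The b-value comparison above is commonly made with a maximum-likelihood estimator. The following sketch uses synthetic Gutenberg-Richter catalogs, not the Äspö AE data; the completeness magnitude and the b-values of 1.0 and 1.5 are assumptions chosen only to illustrate the contrast between a conventional and a cyclic injection (a larger b-value means relatively fewer large events).

```python
import numpy as np

def aki_b_value(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's binning
    correction: b = log10(e) / (mean(M) - (mc - dm/2)) for M >= mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(1)
mc = -4.0                     # assumed completeness magnitude for AEs

def sample_gr(b, n):
    # Gutenberg-Richter magnitudes above mc are exponentially distributed
    # with rate beta = b * ln(10).
    return mc + rng.exponential(scale=1.0 / (b * np.log(10)), size=n)

cat_conventional = sample_gr(1.0, 5000)   # assumed b = 1.0
cat_cyclic = sample_gr(1.5, 5000)         # assumed b = 1.5

# dm=0 because the synthetic magnitudes are continuous (unbinned).
b_hat_conv = aki_b_value(cat_conventional, mc, dm=0.0)
b_hat_cyc = aki_b_value(cat_cyclic, mc, dm=0.0)
print(b_hat_conv, b_hat_cyc)  # estimates close to 1.0 and 1.5
```

The estimator's standard error scales as b/sqrt(n), so with thousands of AEs per injection the difference between schemes is resolved well beyond the estimation uncertainty.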
An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded at the horizontal components of two broad-band seismometers a few tens of meters apart from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding.
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the Äspö mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as depicted by smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).
Polyglot programming allows developers to use multiple programming languages within the same software project. While it is common to use more than one language in certain programming domains, developers also apply polyglot programming for other purposes such as to re-use software written in other languages. Although established approaches to polyglot programming come with significant limitations, for example, in terms of performance and tool support, developers still use them to be able to combine languages.
Polyglot virtual machines (VMs) such as GraalVM provide a new level of polyglot programming, allowing languages to directly interact with each other. This reduces the amount of glue code needed to combine languages, results in better performance, and enables tools such as debuggers to work across languages. However, little research has focused on novel tools designed to support developers in building software with polyglot VMs. One reason is that tool-building is often an expensive activity; another is that polyglot VMs are still a moving target, as their use cases and requirements are not yet well understood.
In this thesis, we present an approach that builds on existing self-sustaining programming systems such as Squeak/Smalltalk to enable exploratory programming, a practice for exploring and gathering software requirements, and re-use their extensive tool-building capabilities in the context of polyglot VMs. Based on TruffleSqueak, our implementation for the GraalVM, we further present five case studies that demonstrate how our approach helps tool developers to design and build tools for polyglot programming. We further show that TruffleSqueak can also be used by application developers to build and evolve polyglot applications at run-time and by language and runtime developers to understand the dynamic behavior of GraalVM languages and internals. Since our platform allows all these developers to apply polyglot programming, it can further help to better understand the advantages, use cases, requirements, and challenges of polyglot VMs. Moreover, we demonstrate that our approach can also be applied to other polyglot VMs and that insights gained through it are transferable to other programming systems.
We conclude that our research on tools for polyglot programming is an important step toward making polyglot VMs more approachable for developers in practice. With good tool support, we believe polyglot VMs can make it much more common for developers to take advantage of multiple languages and their ecosystems when building software.
Climate change and human-driven eutrophication promote the spread of harmful cyanobacterial blooms in lakes worldwide, which affects water quality and impairs the aquatic food chain. Recently, studies based on sedimentary ancient DNA (sedaDNA) have been used to probe how centuries of climate and environmental change have affected cyanobacterial assemblages in temperate lakes. However, information is lacking on the consistency between sediment-deposited cyanobacteria communities and those of the water column, and on the individual roles of natural climatic change versus human pressure in cyanobacteria community dynamics over multi-millennial time scales.
Therefore, this thesis uses sedimentary ancient DNA of Lake Tiefer See in northeastern Germany to trace the deposition of cyanobacteria along the water column into the sediment, and to reconstruct cyanobacteria communities spanning the last 11,000 years using a set of molecular techniques including quantitative PCR, biomarkers, metabarcoding, and metagenome sequence analyses.
The results of this thesis show that cyanobacterial composition and species richness did not differ significantly among water depths, sediment traps, and surface sediments, meaning that the cyanobacterial community composition in the sediments reflects the water-column communities. However, the sediment deposition of different cyanobacteria groups is skewed because of DNA alteration and/or deterioration during transport through the water column to the sediment. Specifically, single-filament taxa such as Planktothrix are poorly represented in sediments despite being abundant in the water column, as shown by an additional study in this thesis on cyanobacteria seasonality. In contrast, aggregate-forming taxa such as Aphanizomenon are relatively overrepresented in the sediment although they are not abundant in the water column. These differing deposition patterns of cyanobacteria taxa should be considered in future DNA-based paleolimnological investigations. The thesis also reveals a substantial increase in total cyanobacteria abundance during the Bronze Age, which is not apparent in earlier phases of the early to middle Holocene and is suggested to be caused by human farming, deforestation, and excessive nutrient input to the lake. Not only cyanobacterial abundance was influenced by human activity; cyanobacteria community composition also differed significantly between phases of no, moderate, and intense human impact.
The data presented in this thesis are the first on sedimentary cyanobacteria DNA since the early Holocene in a temperate lake. The results bring together archaeological, historical climatic, and limnological data with deep DNA-sequencing and paleoecology to reveal a legacy impact of human pressure on lake cyanobacteria populations dating back to approximately 4000 years.