Additive manufacturing (AM) processes enable the production of metal structures with exceptional design freedom, of which laser powder bed fusion (PBF-LB) is one of the most common. In this process, a laser melts a bed of loose feedstock powder particles layer by layer to build a structure with the desired geometry. During fabrication, the repeated melting and rapid, directional solidification create steep temperature gradients that generate large thermal stresses. These thermal stresses can themselves lead to cracking or delamination during fabrication. More often, large residual stresses remain in the final part as a footprint of the thermal stress. This residual stress can cause premature distortion or even failure of the part in service. Hence, knowledge of the residual stress field is critical for both process optimization and structural integrity.
Diffraction-based techniques allow the non-destructive characterization of residual stress fields. However, such methods require good knowledge of the material of interest, as certain assumptions must be made to determine residual stress accurately. First, the measured lattice plane spacings must be converted to lattice strains using knowledge of a strain-free material state. Second, the measured lattice strains must be related to the macroscopic stress using Hooke's law, which requires knowledge of the stiffness of the material. Since most crystal structures exhibit anisotropic material behavior, the elastic behavior is specific to each lattice plane of the single crystal. Thus, the use of individual lattice planes in monochromatic diffraction residual stress analysis requires knowledge of the lattice plane-specific elastic properties. In addition, knowledge of the microstructure of the material is required for a reliable assessment of residual stress.
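The two steps described above (lattice spacing to lattice strain, then strain to stress via plane-specific elastic constants) can be sketched with the classical sin²ψ evaluation. All numbers below, the strain-free spacing d0, the diffraction elastic constants s1 and ½s2, and the measured spacings, are illustrative placeholders, not values from this work.

```python
import numpy as np

# Hypothetical sin^2(psi) evaluation for one lattice plane (hkl).
# d0, s1 and half_s2 are assumed example values, not thesis data.
d0 = 2.0340          # strain-free lattice spacing, angstrom (assumed)
s1 = -1.5e-6         # s1(hkl), 1/MPa (assumed)
half_s2 = 6.5e-6     # 1/2 s2(hkl), 1/MPa (assumed)

psi_deg = np.array([0.0, 18.4, 26.6, 33.2, 39.2, 45.0])   # tilt angles
d_meas = np.array([2.0338, 2.0341, 2.0344, 2.0347, 2.0350, 2.0353])

# Step 1: convert measured spacings to lattice strains.
eps = (d_meas - d0) / d0

# Step 2: for an equi-biaxial surface stress state,
# eps(psi) = (1/2)s2 * sigma * sin^2(psi) + 2 * s1 * sigma,
# so the slope of eps vs. sin^2(psi) yields the stress.
slope, intercept = np.polyfit(np.sin(np.radians(psi_deg)) ** 2, eps, 1)
sigma = slope / half_s2                   # in-plane residual stress, MPa
print(f"residual stress ~ {sigma:.0f} MPa")
```

The linear fit over several tilt angles is what makes the method robust against errors in a single measurement; the choice of reflection (e.g. 311, as discussed below) determines which values of s1 and ½s2 apply.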
This work presents a toolbox for reliable diffraction-based residual stress analysis, demonstrated on a nickel-based superalloy produced by PBF-LB. First, this work reviews the existing literature in the field of residual stress analysis of laser-based AM using diffraction-based techniques. Second, the elastic and plastic anisotropy of the nickel-based superalloy Inconel 718 produced by PBF-LB is studied using in situ energy dispersive synchrotron X-ray and neutron diffraction techniques. These experiments are complemented by ex situ material characterization techniques. These methods establish the relationship between the microstructure and texture of the material and its elastic and plastic anisotropy. Finally, surface, sub-surface, and bulk residual stress are determined using a texture-based approach. Uncertainties of different methods for obtaining stress-free reference values are discussed.
The tensile behavior in the as-built condition is shown to be controlled by texture and cellular sub-grain structure, while in the heat-treated condition the precipitation of strengthening phases and grain morphology dictate the behavior. In fact, the results of this thesis show that the diffraction elastic constants depend on the underlying microstructure, including texture and grain morphology. For columnar microstructures in both as-built and heat-treated conditions, the diffraction elastic constants are best described by the Reuss iso-stress model. Furthermore, the low accumulation of intergranular strains during deformation demonstrates the robustness of using the 311 reflection for diffraction-based residual stress analysis with columnar textured microstructures. The differences between texture-based and quasi-isotropic approaches for the residual stress analysis are shown to be insignificant in the observed case. However, the analysis of the sub-surface residual stress distributions shows that different scanning strategies result in a change in the orientation of the residual stress tensor. Furthermore, the location of the critical sub-surface tensile residual stress is related to the surface roughness and the microstructure. Finally, recommendations are given for the diffraction-based determination and evaluation of residual stress in textured additively manufactured alloys.
Pricing algorithm cartels (Preisalgorithmenkartelle)
(2024)
Pricing algorithms enable companies to make automatic and reciprocal price adjustments. As a result, classic cartel constellations may recede into the background, since no conspiratorial meetings are required. This thesis shows under which conditions the use of pricing algorithms can constitute an infringement of the European cartel prohibition. To this end, it examines case constellations in which algorithmic coordination arises both directly between competitors and indirectly via a third party. It also addresses algorithm-specific compliance measures. Finally, it outlines the practical challenges in detecting and proving such cartels.
The European Water Framework Directive (WFD) has identified river morphological alteration and diffuse pollution as the two main pressures affecting water bodies in Europe at the catchment scale. Consequently, river restoration has become a priority to achieve the WFD's objective of good ecological status. However, little is known about the effects of stream morphological changes, such as re-meandering, on in-stream nitrate retention at the river network scale. Therefore, catchment nitrate modeling is necessary to guide the implementation of spatially targeted and cost-effective mitigation measures. Meanwhile, Germany, like many other regions in central Europe, experienced consecutive summer droughts from 2015 to 2018, resulting in significant changes in river nitrate concentrations in various catchments. However, a mechanistic exploration of catchment nitrate responses to changing weather conditions is still lacking.
Firstly, a fully distributed, process-based catchment nitrate model (mHM-Nitrate) was used, which was properly calibrated and comprehensively evaluated at numerous spatially distributed nitrate sampling locations. Three calibration schemes were designed, taking into account land use, stream order, and mean nitrate concentrations; they varied in spatial coverage but used data from the same period (2011–2019). The model performance for discharge was similar among the three schemes, with Nash-Sutcliffe Efficiency (NSE) scores ranging from 0.88 to 0.92. However, for nitrate concentrations, scheme 2 outperformed schemes 1 and 3 when compared to observed data from eight gauging stations. This was likely because scheme 2 incorporated a diverse range of data, including low discharge values and nitrate concentrations, and thus provided a better representation of within-catchment heterogeneity. Therefore, the study suggests that strategically selecting gauging stations that reflect the full range of within-catchment heterogeneity is more important for calibration than simply increasing the number of stations.
Secondly, the mHM-Nitrate model was used to reveal the causal relations between sequential droughts and nitrate concentration in the Bode catchment (3200 km²) in central Germany, where stream nitrate concentrations exhibited contrasting trends from upstream to downstream reaches. The model was evaluated using data from six gauging stations, reflecting different levels of runoff components and their associated nitrate mixing from upstream to downstream. Results indicated that the mHM-Nitrate model reproduced the dynamics of daily discharge and nitrate concentration well, with Nash-Sutcliffe Efficiency ≥ 0.73 for discharge and Kling-Gupta Efficiency ≥ 0.50 for nitrate concentration at most stations. In particular, the spatially contrasting trends of nitrate concentration were successfully captured by the model. The decrease of nitrate concentration in the lowland area in drought years (2015-2018) was presumably due to (1) limited terrestrial export loading (ca. 40% lower than that of the normal years 2004-2014), and (2) increased in-stream retention efficiency (20% higher in summer within the whole river network). From a mechanistic modelling perspective, this study provided insights into spatially heterogeneous flow and nitrate dynamics and the effects of sequential droughts, which shed light on water-quality responses to future climate change, as droughts are projected to become more frequent.
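The two skill scores used above can be computed as follows. The implementations follow the standard Nash-Sutcliffe and (2009-form) Kling-Gupta definitions; the observation and simulation series below are invented for illustration, not catchment data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 is no better
    than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency: combines correlation, variability ratio
    and bias ratio; 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]      # linear correlation
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([3.1, 2.8, 4.0, 5.2, 4.4, 3.6])   # e.g. observed NO3, mg/l
sim = np.array([3.0, 2.9, 3.8, 5.0, 4.6, 3.5])   # e.g. simulated NO3, mg/l
print(f"NSE = {nse(obs, sim):.2f}, KGE = {kge(obs, sim):.2f}")
```

Note that NSE penalizes only squared errors, while KGE separates correlation, bias, and variability, which is why the two scores can rank model runs differently.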
Thirdly, this study investigated the effects of stream restoration via re-meandering on in-stream nitrate retention at the network scale in the well-monitored Bode catchment. The mHM-Nitrate model showed good performance in reproducing daily discharge and nitrate concentrations, with median Kling-Gupta Efficiency values of 0.78 and 0.74, respectively. The mean and standard deviation of gross nitrate retention efficiency, which accounted for both denitrification and assimilatory uptake, were 5.1 ± 0.61% in winter and 74.7 ± 23.2% in summer within the stream network. The study found that in summer, denitrification rates were about two times higher in lowland sub-catchments dominated by agricultural land than in mountainous sub-catchments dominated by forested areas, with median ± SD of 204 ± 22.6 and 102 ± 22.1 mg N m⁻² d⁻¹, respectively. Similarly, assimilatory uptake rates were approximately five times higher in streams surrounded by lowland agricultural areas than in those in higher-elevation, forested areas, with median ± SD of 200 ± 27.1 and 39.1 ± 8.7 mg N m⁻² d⁻¹, respectively. Therefore, restoration strategies targeting lowland agricultural areas may have greater potential for increasing nitrate retention. The study also found that restoring stream sinuosity could increase net nitrate retention efficiency by up to 25.4 ± 5.3%, with greater effects seen in small streams. These results suggest that restoration efforts should consider augmenting stream sinuosity to increase nitrate retention and decrease nitrate concentrations at the catchment scale.
In an exploratory development, this study created and subsequently evaluated a science communication concept for a research training group investigating photochemical processes. The motivation is the steadily growing demand for science communication from policymakers, along with the expectation that communicating one's own research will become an integral part of scientific work in the future. To prepare young scientists for this task at an early stage, science communication is also being implemented within research consortia.
For this reason, a preliminary study investigated the requirements for a science communication concept within a research consortium by evaluating the doctoral researchers' attitudes toward science communication and their communication skills using a closed questionnaire. In addition, science communication types were derived from the data. Based on the results, different science communication measures were developed that differ in their conception, their recipients, the form of communication, and the content.
As part of this development, a learning unit related to the content of the research training group was designed, consisting of a teaching-learning experiment and accompanying materials. The learning unit was then integrated into one of the science communication measures. Depending on the demands placed on the doctoral researchers, the measures were supplemented by preparatory workshops.
A semi-open pre-post questionnaire was used to evaluate the influence of the science communication measures and the accompanying workshops on the doctoral researchers' self-efficacy, in order to draw conclusions about how the interventions change the perception of one's own communication skills. The results indicate that the individual science communication measures affect the different types in different ways. It can be assumed that, depending on one's own assessment of communication ability, there are different support needs, which can be addressed by dedicated science communication measures.
On this basis, first approaches toward a generally applicable strategy are proposed that fosters individual science communication skills in a scientific research consortium.
Climate change fundamentally transforms glaciated high-alpine regions, with well-known cryospheric and hydrological implications, such as accelerating glacier retreat, transiently increased runoff, longer snow-free periods and more frequent and intense summer rainstorms. These changes affect the availability and transport of sediments in high alpine areas by altering the interaction and intensity of different erosion processes and catchment properties.
Gaining insight into future alterations in suspended sediment transport by high alpine streams is crucial, given its wide-ranging implications, e.g. for flood damage potential, flood hazard in downstream river reaches, hydropower production, riverine ecology and water quality. However, the current understanding of how climate change will impact suspended sediment dynamics in these high alpine regions is limited. On the one hand, this is due to the scarcity of measurement time series long enough to, for example, infer trends. On the other hand, it is difficult – if not impossible – to develop process-based models, due to the complexity and multitude of processes involved in high alpine sediment dynamics. Knowledge has therefore so far been confined to conceptual models (which do not allow deriving concrete timings or magnitudes for individual catchments) or qualitative estimates (‘higher export in warmer years’) that may not be able to capture decreases in sediment export. Recently, machine-learning approaches have gained popularity for modeling sediment dynamics, since their black-box nature suits the problem at hand: relatively well-understood input and output data, linked by very complex processes.
Therefore, the overarching aim of this thesis is to estimate sediment export from the high alpine Ötztal valley in Tyrol, Austria, over decadal timescales in the past and future – i.e. timescales relevant to anthropogenic climate change. This is achieved by informing, extending, evaluating and applying a quantile regression forest (QRF) approach, i.e. a nonparametric, multivariate machine-learning technique based on random forest.
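The QRF idea can be sketched in a few lines in the spirit of Meinshausen's quantile regression forests: pool the training responses found in the leaves a query point falls into, then read off empirical quantiles. The predictors and target below are synthetic stand-ins, not the Ötztal data, and leaf pooling is a simplified version of the exact QRF weighting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic heteroscedastic data: spread grows with the first predictor,
# loosely mimicking discharge-dependent sediment concentrations.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))            # e.g. discharge, precip, temperature
y = 10 * X[:, 0] + rng.normal(scale=1 + 2 * X[:, 0])

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               random_state=0).fit(X, y)

def predict_quantiles(forest, X_train, y_train, x_query, qs=(0.1, 0.5, 0.9)):
    """Approximate QRF prediction: gather the training responses sharing a
    leaf with the query point in each tree, then take empirical quantiles."""
    leaves_train = forest.apply(X_train)               # (n_train, n_trees)
    leaves_query = forest.apply(x_query.reshape(1, -1))[0]
    pooled = np.concatenate([y_train[leaves_train[:, t] == leaf]
                             for t, leaf in enumerate(leaves_query)])
    return np.quantile(pooled, qs)

lo, med, hi = predict_quantiles(forest, X, y, np.array([0.8, 0.5, 0.5]))
print(f"10/50/90% quantiles: {lo:.1f} / {med:.1f} / {hi:.1f}")
```

The point of predicting quantiles rather than a single value is exactly what the thesis exploits: the spread between the lower and upper quantiles quantifies the predictive uncertainty of the sediment estimate.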
The first study included in this thesis aimed to understand present sediment dynamics, i.e. in the period with available measurements (up to 15 years). To inform the modeling setup for the two subsequent studies, this study identified the most important predictors, areas within the catchments and time periods. To that end, water and sediment yields from three nested gauges in the upper Ötztal, Vent, Sölden and Tumpen (98 to almost 800 km² catchment area, 930 to 3772 m a.s.l.) were analyzed for their distribution in space, their seasonality and spatial differences therein, and the relative importance of short-term events. The findings suggest that the areas situated above 2500 m a.s.l., containing glacier tongues and recently deglaciated areas, play a pivotal role in sediment generation across all sub-catchments. In contrast, precipitation events were relatively unimportant (on average, 21 % of annual sediment yield was associated with precipitation events). Thus, the second and third study focused on the Vent catchment and its sub-catchment above gauge Vernagt (11.4 and 98 km², 1891 to 3772 m a.s.l.), due to their higher share of areas above 2500 m. Additionally, they included discharge, precipitation and air temperature (as well as their antecedent conditions) as predictors.
The second study aimed to estimate sediment export since the 1960s/70s at gauges Vent and Vernagt. This was facilitated by the availability of long records of the predictors, discharge, precipitation and air temperature, and shorter records (four and 15 years) of turbidity-derived sediment concentrations at the two gauges. The third study aimed to estimate future sediment export until 2100, by applying the QRF models developed in the second study to pre-existing precipitation and temperature projections (EURO-CORDEX) and discharge projections (physically-based hydroclimatological and snow model AMUNDSEN) for the three representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.
The combined results of the second and third study show overall increasing sediment export in the past and decreasing export in the future. This suggests that peak sediment is underway or has already passed – unless precipitation changes unfold differently than represented in the projections or changes in the catchment erodibility prevail and override these trends. Despite the overall future decrease, very high sediment export is possible in response to precipitation events. This two-fold development has important implications for managing sediment, flood hazard and riverine ecology.
This thesis shows that QRF can be a very useful tool to model sediment export in high-alpine areas. Several validations in the second study showed good performance of QRF and its superiority to traditional sediment rating curves – especially in periods that contained high sediment export events, which points to its ability to deal with threshold effects. A technical limitation of QRF is its inability to extrapolate beyond the range of values represented in the training data. We assessed the number and severity of such out-of-observation-range (OOOR) days in both studies, which showed that there were few OOOR days in the second study and that uncertainties associated with OOOR days were small before 2070 in the third study. As the pre-processed data and model code have been made publicly available, future studies can easily test further approaches or apply QRF to further catchments.
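The OOOR check described above amounts to a per-predictor range comparison between training and application periods. A minimal sketch with placeholder arrays (the values are illustrative, not thesis data):

```python
import numpy as np

# Rows are days, columns are predictors (e.g. discharge Q, precipitation P,
# air temperature T). "train" spans the observation period, "future" the
# projection period to be screened for out-of-observation-range days.
train = np.array([[1.2, 0.0, -5.0],
                  [3.4, 12.0, 2.0],
                  [2.8, 4.0, 8.0]])
future = np.array([[2.0, 3.0, 1.0],
                   [4.1, 2.0, 6.0],       # discharge above training maximum
                   [1.5, 0.0, 9.5]])      # temperature above training maximum

lo, hi = train.min(axis=0), train.max(axis=0)
ooor = (future < lo) | (future > hi)      # per-day, per-predictor flags
ooor_days = ooor.any(axis=1)              # a day is OOOR if any predictor is
print(f"{ooor_days.sum()} of {len(future)} days are OOOR")
```

Flagged days mark where the forest is forced to clamp its prediction to the training range, so their number and the distance outside the range together indicate how trustworthy the projected sediment export is.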
Large parts of the Earth’s interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth’s physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focuses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), which form a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both the high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding and the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe-Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase widens significantly, up to ~4.7 eV in the mixed-spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed-spin region. From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend toward a more ionic nature of the Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.
The reliance on fossil fuels has resulted in an abnormal increase in the concentration of greenhouse gases, contributing to the global climate crisis. In response, a rapid transition to renewable energy sources has begun, with lithium-ion batteries playing a crucial role in the green energy transformation. However, concerns regarding the availability and geopolitical implications of lithium have prompted the exploration of alternative rechargeable battery systems, such as sodium-ion batteries. Sodium is significantly more abundant and more homogeneously distributed in the crust and seawater, making it easier and less expensive to extract than lithium. However, because the behavior of its component materials is still poorly understood, sodium-ion batteries are not yet sufficiently advanced to take the place of lithium-ion batteries. Specifically, sodium exhibits a more metallic character and a larger ionic radius, resulting in an ion storage mechanism different from that utilized in lithium-ion batteries. Innovations in synthetic methods, post-treatments, and interface engineering clearly demonstrate the significance of developing high-performance carbonaceous anode materials for sodium-ion batteries. The objective of this dissertation is to present a systematic approach for fabricating efficient, high-performance, and sustainable carbonaceous anode materials for sodium-ion batteries. This involves a comprehensive investigation of different chemical environments and post-modification techniques.
This dissertation focuses on three main objectives. Firstly, it explores the significance of post-synthetic methods in designing interfaces. A conformal carbon nitride coating is deposited through chemical vapor deposition on a carbon electrode as an artificial solid-electrolyte interface layer, resulting in improved electrochemical performance. The interaction between the carbon nitride artificial interface and the carbon electrode enhances initial Coulombic efficiency, rate performance, and total capacity. Secondly, a novel process for preparing sulfur-rich carbon as a high-performing anode material for sodium-ion batteries is presented. The method uses an oligo-3,4-ethylenedioxythiophene precursor to obtain a high-sulfur-content hard carbon anode and to investigate the effect of the sulfur heteroatom on the electrochemical sodium storage mechanism. By optimizing the condensation temperature, a significant transformation in the materials’ nanostructure is achieved, leading to improved electrochemical performance. The use of in-operando small-angle X-ray scattering provides valuable insights into the interaction between micropores and sodium ions during the electrochemical processes. Lastly, the development of high-capacity hard carbon, derived from 5-hydroxymethyl furfural, is examined. This carbon material exhibits exceptional performance at both low and high current densities. Extensive electrochemical and physicochemical characterizations shed light on the sodium storage mechanism concerning the chemical environment, establishing the material’s stability and potential applications in sodium-ion batteries.
The evaluation of process-oriented cognitive theories through time-ordered observations is crucial for the advancement of cognitive science. The findings presented herein integrate insights from research on eye-movement control and sentence comprehension during reading, addressing challenges in modeling time-ordered data, statistical inference, and interindividual variability. Using kernel density estimation and a pseudo-marginal likelihood for fixation durations and locations, a likelihood implementation of the SWIFT model of eye-movement control during reading (Engbert et al., Psychological Review, 112, 2005, pp. 777–813) is proposed. Within the broader framework of data assimilation, Bayesian parameter inference with adaptive Markov Chain Monte Carlo techniques is facilitated for reliable model fitting. Across the different studies, this framework has been shown to enable reliable parameter recovery from simulated data and prediction of experimental summary statistics. Despite its complexity, SWIFT can be fitted within a principled Bayesian workflow, capturing interindividual differences and modeling experimental effects on reading across different geometrical alterations of text. Based on these advancements, the integrated dynamical model SEAM is proposed, which combines eye-movement control, a traditionally psychological research area, and post-lexical language processing in the form of cue-based memory retrieval (Lewis & Vasishth, Cognitive Science, 29, 2005, pp. 375–419), typically the purview of psycholinguistics. This proof-of-concept integration marks a significant step forward in modeling natural language comprehension during reading and suggests that the presented methodology can be useful for developing complex cognitive dynamical models that integrate processes at the levels of perception, higher cognition, and (oculo-)motor control.
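The simulation-based likelihood idea mentioned above can be illustrated with a toy example: when a model's likelihood is intractable, one can simulate data at a candidate parameter value, fit a kernel density estimate to the simulations, and evaluate the observed data under that density. The log-normal "simulator" below is a hypothetical stand-in for illustration, not the SWIFT model itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simulate(mu, n=2000):
    """Stand-in simulator: fixation durations (ms scale) for parameter mu."""
    return rng.lognormal(mean=mu, sigma=0.4, size=n)

def log_pseudo_likelihood(mu, observed):
    """KDE-based likelihood estimate: density fitted to simulated data,
    evaluated at the observed fixation durations."""
    kde = gaussian_kde(simulate(mu))
    return np.sum(np.log(kde(observed)))

# "Observed" data generated at mu = 5.3 (median duration ~200 ms).
observed = rng.lognormal(mean=5.3, sigma=0.4, size=100)
lls = {mu: log_pseudo_likelihood(mu, observed) for mu in (4.9, 5.3, 5.7)}
print(lls)
```

In an MCMC sampler this estimate replaces the exact likelihood in the acceptance ratio; the noise of the estimate is why adaptive, pseudo-marginal-aware samplers are needed for reliable fitting.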
These findings collectively advance process-oriented cognitive modeling and highlight the importance of Bayesian inference, individual differences, and interdisciplinary integration for a holistic understanding of reading processes. Implications for theory and methodology, including proposals for model comparison and hierarchical parameter inference, are briefly discussed.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes, including low cost, light weight, aesthetically pleasing appearance, and flexibility. Unlike traditional silicon solar cells, the photon-to-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (a donor and an acceptor) with different energy levels.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulations to quantify the energetic offsets between CT states and excitons, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. There is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT state.
The second focus of the thesis is technical. The influence of optical artifacts in differential absorption spectroscopy upon changes of sample configuration and active layer thickness was studied. Using optical simulations and experiments, it is demonstrated thoroughly and systematically how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. Finally, a generalized methodology based on an inverse optical transfer matrix formalism is provided to correct spectra and decay dynamics distorted by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in more facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrases vs. pronouns as the embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity in neither the number feature nor the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as a cue to sentence interpretation compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
The biosecurity individual
(2024)
Discoveries in biomedicine and biotechnology, especially in diagnostics, have made prevention and (self)surveillance increasingly important in the context of health practices. Frederike Offizier offers a cultural critique of the intersection between health, security and identity, and explores how the focus on risk and security changes our understanding of health and transforms our relationship to our bodies. Analyzing a wide variety of texts, from life writing to fiction, she offers a critical intervention on how this shift in the medical gaze produces new paradigms of difference and new biomedically facilitated identities: biosecurity individuals.
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, which are often exploratory or ad hoc to specific datasets; the large repertoire of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for the suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the later stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., for correctly parsing tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars; an implicit representation based on file-wise similarity; and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3,000 real-world CSV files and identify multiple structural issues that make files deviate from the CSV standard, e.g., by having inconsistent delimiters or containing multiple tables. Building on these findings, we propose Pollock, a benchmark to test how well systems parse CSV files that have a non-standard structure, without any previous preparation. We report on our experiments using Pollock to evaluate the performance of 16 real-world data management systems.
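Pollock benchmarks entire systems, but the underlying dialect-detection problem can be illustrated with Python's standard library. The sample file below is invented, and csv.Sniffer is only a heuristic baseline, not the grammar-based approach of the thesis:

```python
import csv
import io

# Hypothetical sample: a file using ';' as its delimiter, one of the
# non-standard dialects commonly found in real-world CSV files.
raw = "name;age;city\nAlice;30;Berlin\nBob;25;Potsdam\n"

# csv.Sniffer inspects a text sample and guesses the dialect
# (delimiter, quoting); the guessed dialect is then passed to the reader.
dialect = csv.Sniffer().sniff(raw)
rows = list(csv.reader(io.StringIO(raw), dialect))

print(dialect.delimiter)  # ';'
print(rows[0])            # ['name', 'age', 'city']
```

A parser hard-wired to comma delimiters would return each line as a single field, which is exactly the kind of silent failure a benchmark of non-standard files is designed to expose.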
Next, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system to assist users in identifying layout templates in a dataset, i.e., classes of files that have the same structure and can therefore be prepared with the same preparation pipeline.
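As a toy illustration of file-wise structural similarity (not Mondrian's graph-based algorithm), one can abstract every row into a pattern of character classes and compare the pattern sequences; files with the same layout then score 1.0 regardless of their concrete values:

```python
from difflib import SequenceMatcher

def row_pattern(line: str) -> str:
    # Abstract each character into a class: digit (D), letter (A),
    # anything else (delimiters, quotes) kept literally; runs of the
    # same class are collapsed, so "1,Alice" and "2,Bob" both map to "D,A".
    out = []
    for ch in line:
        cls = "D" if ch.isdigit() else "A" if ch.isalpha() else ch
        if not out or out[-1] != cls:
            out.append(cls)
    return "".join(out)

def structural_similarity(file_a: str, file_b: str) -> float:
    # Compare the sequences of row patterns, ignoring concrete values.
    pa = [row_pattern(l) for l in file_a.splitlines()]
    pb = [row_pattern(l) for l in file_b.splitlines()]
    return SequenceMatcher(None, pa, pb).ratio()

a = "id,name\n1,Alice\n2,Bob\n"
b = "id,name\n7,Carol\n8,Dave\n9,Erin\n"
print(structural_similarity(a, b))
```

Files sharing a template differ only in the number of data rows, so their similarity stays high; a file with, say, a preamble or a second table would diverge sharply, which is the signal a template-detection system exploits.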
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data preparation effort estimation.
Our experimental results show that structural metadata, whether identified explicitly via parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, is fundamental to automating several tasks, scaling preparation up to large quantities of files, and providing repeatable preparation pipelines.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operating conditions. Optimization-based solutions perform an exhaustive search in the adaptation space and may thus provide quality guarantees. However, these solutions render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and expressivity of individual rules, which supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines a rule- and an optimization-based formalism in its design, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures. The architecture is evaluated by assigning utility values to fragments; the pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluating the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized based on meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime to balance the quality-cost trade-off.
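The incremental idea can be sketched in a few lines, assuming (purely for illustration) that the architecture's overall utility is the sum of per-fragment utilities; a rule execution that changes one fragment then updates the total in constant time instead of re-evaluating the whole architecture. Fragment names, values, and the class interface below are invented, not Venus's API:

```python
class IncrementalUtility:
    """Toy sketch: total utility = sum of per-fragment utilities,
    so a local change only touches the affected fragment."""

    def __init__(self):
        self.fragment_utility = {}
        self.total = 0.0

    def apply_change(self, fragment_id, new_utility):
        # O(1) incremental update: subtract the fragment's old
        # contribution, add the new one, leave all others untouched.
        old = self.fragment_utility.get(fragment_id, 0.0)
        self.total += new_utility - old
        self.fragment_utility[fragment_id] = new_utility
        return self.total

engine = IncrementalUtility()
engine.apply_change("server-1", 0.8)
engine.apply_change("db-1", 0.6)
total = engine.apply_change("server-1", 0.3)
print(total)
```

The point is the asymptotics: with thousands of fragments, re-scoring only the fragments matched by an executed rule keeps decision-making tractable for large, dynamic architectures.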
The twofold solution of this thesis is integrated in an adaptation engine that leverages state- and event-based principles for incremental execution and therefore scales to large and dynamic software architectures of growing size and complexity. The utility elicitation challenge is addressed by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space for software self-adaptation.
Sigmund Freud, the founder of psychoanalysis, began his intellectual life with the Jewish Bible and ended it with it as well. He began by reading the Philippson Bible together with his father, Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant reference for Freud and shaped his Jewish identity. This is demonstrated by analyzing family documents, religious instruction, and references to the Bible in Freud's writings and correspondence.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
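For orientation, the classic 2D slice-and-dice layout below is the flat baseline that 3D-embedded treemap variants build on: each node receives a rectangle proportional to its weight, with the split direction alternating per hierarchy level. The node format and weights are illustrative:

```python
def slice_and_dice(node, x, y, w, h, depth=0):
    """Toy slice-and-dice treemap layout (2D baseline, not the thesis's
    3D technique). Returns a dict mapping node names to rectangles
    (x, y, width, height) inside the parent rectangle."""
    rects = {node["name"]: (x, y, w, h)}
    children = node.get("children", [])
    if not children:
        return rects
    total = sum(c["weight"] for c in children)
    offset = 0.0
    for c in children:
        frac = c["weight"] / total
        if depth % 2 == 0:  # even depth: split the rectangle horizontally
            rects.update(slice_and_dice(c, x + offset * w, y, w * frac, h, depth + 1))
        else:               # odd depth: split the rectangle vertically
            rects.update(slice_and_dice(c, x, y + offset * h, w, h * frac, depth + 1))
        offset += frac
    return rects

# Invented two-node hierarchy, e.g. modules weighted by lines of code.
tree = {"name": "root", "weight": 4, "children": [
    {"name": "a", "weight": 3},
    {"name": "b", "weight": 1},
]}
layout = slice_and_dice(tree, 0.0, 0.0, 1.0, 1.0)
```

Embedding in 3D, as the thesis explores, keeps this space-filling footprint but gains height, materials, and reference surfaces as additional visual variables.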
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
This work traces how an inequality between Black and white people in Germany has grown historically and legally, and examines which requirements constitutional law, legal practice, and politics must fulfill in order to redress it.
First, the development of the prohibition of racial discrimination in international and national law is outlined. The author then traces the history of discrimination against Black people. To overcome the structural discrimination that persists to this day, she proposes a positive law that builds on human rights standards and on solutions drawn from comparative law, and that is intended to bring about the equal rights of Black people.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand, and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 to 10,000 times higher than for the present-day Sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Furthermore, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap are not significantly affected by the spread in stellar activity tracks. However, it can account for some of the scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
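For orientation, photoevaporative mass loss is often estimated with the energy-limited approximation, shown here in a common textbook form (the exact parameterization used in PLATYPOS may differ):

```latex
\dot{M} \;\simeq\; \epsilon \,\frac{\pi\, F_{\mathrm{XUV}}\, R_{\mathrm{pl}}^{3}}{G\, M_{\mathrm{pl}}\, K}
```

where \(\epsilon\) is the heating efficiency, \(F_{\mathrm{XUV}}\) the incident X-ray and extreme-UV flux, \(R_{\mathrm{pl}}\) and \(M_{\mathrm{pl}}\) the planetary radius and mass, and \(K \le 1\) a Roche-lobe correction factor. Since \(F_{\mathrm{XUV}}\) is set by the host star's activity track, the cumulative mass loss directly inherits the spread in stellar activity histories discussed above.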
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study of the synthesis, structure, and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in silicon pores on electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by the conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon and power factor values comparable to or exceeding other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism relates to a Meyer-Neldel compensation rule. The analysis of the hybrids' data using the power law in the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a 68% percolation threshold. The thermal conductivities of the hybrids increase compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi, P3HT, and bulk Si.
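For context, the thermoelectric figure of merit mentioned above combines the measured quantities as

```latex
ZT = \frac{S^{2}\,\sigma\, T}{\kappa}
```

with \(S\) the Seebeck coefficient, \(\sigma\) the electrical conductivity, \(\kappa\) the thermal conductivity, and \(T\) the absolute temperature; the numerator \(S^{2}\sigma\) is the power factor. This makes clear why boosting electrical transport while keeping thermal transport suppressed can let a hybrid outperform both of its constituents.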
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences in Study 1 and German passives in Study 2. Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions, language-internal and -external factors were investigated as potential predictors for children’s sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, then this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. The performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used for the passive study, with respect to subject-(finite)verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors for children’s sentence comprehension were considered. The parents’ education, the number of older siblings and language experience variables were derived from a language background questionnaire completed by parents. Scores of receptive vocabulary and grammar, visual and short-term memory and reasoning ability were measured by means of standardized tests. It was shown that higher German language experience by bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, a role was played by executive function abilities in the comprehension of OVS sentences and not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference material) were characterized outside of a polymer matrix for the first time. All of them contained additional fatty acids beyond the main fatty acid specified by the manufacturer. Furthermore, they were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affects carotenoid levels in lettuce, but not flavonoids, caffeic acid derivatives, or chlorophylls. Specifically, carotenoid levels were higher in lettuce grown under polytunnels without antifogging additives than with them. This has been linked to the covers' effect on the light regime and was suggested to be related to the function of carotenoids in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, which are closely related. The carotenoid and flavonoid contents of lettuce grown under polytunnels shifted in opposite directions, with higher carotenoid and lower flavonoid levels. At the individual level, the flavonoids detected in lettuce did not differ; lettuce carotenoids, however, adapted specifically depending on the time of cultivation. The flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme of carotenoid biosynthesis (PSY) and of a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding the regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). The EOD is produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism driving species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and ontogeny. I first produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared gene expression patterns between electric organs and skeletal muscles in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g., KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g., KCNJ2, KLF5, KCNK6, and KCNQ5, possibly contributes to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g., KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids by crossing the short-duration EOD species C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in the adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene in regulating EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities around the prospective RT—precisely, its target population and outcome, possibly applied covariates, the concrete design as well as the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
The overarching endeavor of the present doctoral thesis was to substantively expand the resources available for planning RTs that evaluate educational interventions. At the core of this thesis is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis for designing strong RTs. To this end, the thesis at hand bundles two complementary studies which capitalize on rich data from several national probability samples of major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires in six major dimensions: Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, (6) which were cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
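To illustrate how such design parameters enter a power analysis, the sketch below computes the minimum detectable effect size (MDES) for a balanced two-level cluster-randomized trial using the standard approximation popularized by Bloom and colleagues. All numeric inputs are invented; real analyses should take the intraclass correlation and R-squared values from compendia such as those this thesis provides (or tools like PowerUp!):

```python
from math import sqrt

def mdes_cluster_rt(J, n, rho, r2_l2=0.0, r2_l1=0.0, M=2.8):
    """MDES for a balanced two-level cluster-randomized trial with 50/50
    assignment. J = number of clusters, n = students per cluster,
    rho = intraclass correlation, r2_l2 / r2_l1 = variance explained by
    covariates at the cluster / student level; the multiplier M ~ 2.8
    approximates alpha = .05 (two-sided) with power .80."""
    var = (rho * (1 - r2_l2) + (1 - rho) * (1 - r2_l1) / n) / (J / 4)
    return M * sqrt(var)

# A cluster-level pretest explaining half the between-cluster
# variance (r2_l2 = 0.5) substantially shrinks the MDES:
unadjusted = mdes_cluster_rt(J=40, n=25, rho=0.20)
adjusted = mdes_cluster_rt(J=40, n=25, rho=0.20, r2_l2=0.5)
print(round(unadjusted, 3), round(adjusted, 3))
```

With these illustrative inputs the covariate shrinks the MDES from roughly 0.43 to roughly 0.32 standard deviations, which is exactly the leverage that the covariate design parameters cataloged in Studies I and II quantify.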
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a so far unique, nuanced, and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire of generating robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
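As a simple illustration of why cataloged design parameters matter for RT power analysis, the minimum detectable effect size (MDES) of a two-level cluster-randomized trial can be sketched as below. This is a generic textbook approximation, not the thesis's actual tooling; the function name and parameters are illustrative, `m` ≈ 2.8 stands in for the sum of the t-quantiles at alpha = .05 and power = .80, `icc` is the intraclass correlation, and `r2_l2`/`r2_l1` are the outcome-variance shares explained by covariates at the cluster and student levels.

```python
from math import sqrt

def mdes_two_level_crt(n_clusters, cluster_size, icc, r2_l2=0.0, r2_l1=0.0, m=2.8):
    """MDES (in SD units) for a balanced two-level cluster-randomized trial.

    Half of the clusters are assumed treated; m approximates the sum of the
    t-quantiles for alpha = .05 (two-sided) and power = .80.
    """
    j, n = n_clusters, cluster_size
    # Variance of the treatment-effect estimate, shrunk by covariate R-squares
    var = (4.0 / j) * (icc * (1.0 - r2_l2) + (1.0 - icc) * (1.0 - r2_l1) / n)
    return m * sqrt(var)

# 40 classrooms of 25 students, ICC = .20: no covariates vs. a strong
# cluster-level pretest explaining half the between-cluster variance.
no_cov = mdes_two_level_crt(40, 25, 0.20)
with_pretest = mdes_two_level_crt(40, 25, 0.20, r2_l2=0.5)
```

With these illustrative numbers, the pretest shrinks the MDES from roughly 0.43 to 0.32 standard deviations, which is precisely the kind of precision gain that covariate selection guidelines are meant to quantify.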
Development of a CRISPR/Cas gene editing technique for the coccolithophore Chrysotila carterae
(2024)
This study focuses on William Faulkner, whose works explore the demise of the slavery-based Old South during the Civil War in a highly experimental narrative style. Central to this investigation is the analysis of the temporal dimensions of both individual and collective guilt, thus offering a new approach to the often-discussed problem of Faulkner’s portrayal of social decay. The thesis examines how Faulkner re-narrates the legacy of the Old South as a guilt narrative and argues that Faulkner uses guilt in order to corroborate his concept of time and the idea of the continuity of the past. The focus of the analysis is on three of Faulkner’s arguably most important novels: The Sound and the Fury, Absalom, Absalom!, and Go Down, Moses. Each of these novels features a main character deeply overwhelmed by the crimes of the past, whether private, familial, or societal. As a result, guilt is explored both from a domestic as well as a social perspective. In order to show how Faulkner blends past and present by means of guilt, this work examines several methods and motifs borrowed from different fields and genres with which Faulkner narratively negotiates guilt. These include religious notions of original sin, the motif of the ancestral curse prevalent in the Southern Gothic genre, and the psychological concept of trauma. Each of these motifs emphasizes the temporal dimensions of guilt, which are the core of this study, and makes clear that guilt in Faulkner’s work is primarily to be understood as a temporal rather than a moral problem.
Sexualität in der Geschichte (Sexuality in History)
(2024)
In this volume, Jelena Tomović guides the reader through the development of our sexual language and practices. She shows that the way sexuality is talked about is not only a mirror of social change but also a driving factor behind it. The study questions conventional notions of sexuality and leads readers into a world of subtle nuances and cultural shifts. Drawing on communication-theoretical approaches, a praxeological approach, a social-constructivist premise, and a clear focus on actors, the author offers a fresh perspective on the history of sexuality. The book opens new avenues for researching and understanding intimacy and social communication.
The thesis "Die Bekämpfung transnationaler Kriminalität im Kontext fragiler Staatlichkeit" ("Combating Transnational Crime in the Context of Fragile Statehood") addresses the phenomenon of transnationally operating organized-crime actors who exploit the fact that some internationally recognized governments exercise only insufficient control over parts of their territory. It examines why the legal framework created by the international community to combat transnational crime in the context of these fragile states contributes little, or only deficiently, to combating such crime.
After first clarifying what the study understands by the term transnational crime, the international legal framework for combating it is described on the basis of five selected examples of transnational crime phenomena. The subsequent main part of the study examines why this framework, created by the international community, contributes hardly anything to effectively countering such crime, particularly in fragile states. It finds that the genesis of the international legal framework results in a legitimacy deficit. The insufficient consideration of the specific realities of life found in many fragile states also impairs the enforceability of the international legal framework. It is shown that differing levels of human rights protection lead to norm conflicts in international cooperation between states, especially in the context of international mutual legal assistance. Since fragile states in particular are often characterized by a deficient human rights situation, this frequently poses challenges for consolidated states cooperating with them. Finally, it is shown that extraterritorial jurisdiction, and thus the prosecution of transnational offences by third states, also entails legal and practical problems.
A final chapter examines whether an alternative prosecution mechanism should be created to prosecute transnational crimes committed out of fragile states, and how such a mechanism should be designed in concrete terms.
Digital Fashion
(2024)
The virtual dress, as a contemporary medial and sociocultural everyday phenomenon, is the subject of this interdisciplinary study. At the interface between people, media, and fashion, the virtual dress can be experienced exclusively on a screen, in unreal places and synthetic situations. Within this dispositif, concepts of the body, conventions of representation, patterns of social action, and communication strategies can be identified that, while based on a radical detachment from textile material, nevertheless cannot do without very concrete references to it. This leads to new approaches to engaging with dresses, which must now be regarded as visualizations of bundled data packages. The dynamic development of new forms of appearance and their seamless integration into traditional business models and existing fashion concepts make it necessary to determine their position, particularly with regard to current sustainability discourses around immaterial products. For this study, the processes behind the images (economic orientation, production, use, and reception) provide the methodological access for the analysis. Using a typologizing set of instruments, a set of research-guiding examples is compiled from the multitude and variety of representations, which then, in a multi-stage context analysis, leads to a conceptual definition of the virtual dress as well as to five contextual units. Using the example of the virtual dress, this study traces technological, societal, and social change and works out its significance for future fashion developments. The study thus contributes to contemporary fashion research in media studies and the social sciences.
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: first, legality, i.e., legal conformity of use; second, ethical legitimacy; and third, the use of data should add value from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives with a focus on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts and German es-clefts, as well as with the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties that are cross-linguistically typical for clefts: an existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, these exhaustivity effects are not as strong as those in structures with the exclusive only and require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open. There are arguments in favor of both biclausality and monoclausality. Moreover, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new-information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant, but a topic expression; it conveys familiarity, which triggers the existence presupposition, and refers to an instantiated event or a known/perceivable situation; finally, èto plays an important role in spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and shows evidence that the cleft pivot undergoes movement to the left peripheral position. Èto is proposed to be TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to those for clefts in German, French, and Akan. It is concluded that speakers use the different tools available in their languages to produce structures with similar interpretive properties.
The present dissertation investigates changes in lingual coarticulation across childhood in German-speaking children from three to nine years of age and in adults. Coarticulation refers to the mismatch between abstract phonological units and their seemingly commingled realization in continuous speech. As a process at the intersection of phonology and phonetics, its changes across childhood allow for insights into speech motor as well as phonological development. Because specific predictions for changes in coarticulation across childhood can be derived from existing speech production models, investigating children’s coarticulatory patterns can help us model human speech production.
While coarticulatory changes may shed light on some of the central questions of speech production development, previous studies on the topic were sparse and presented a puzzling picture of conflicting findings. One of the reasons for this lack is the difficulty of acquiring articulatory data from a young population. Within the research program this dissertation is embedded in, we accepted this challenge and successfully set up the hitherto largest corpus of articulatory data from children using ultrasound tongue imaging. In contrast to earlier studies, a high number of participants in tight age cohorts across a wide age range and a thoroughly controlled set of pseudowords allowed for statistically powerful investigations of a process known to be variable and complicated to track.
The specific focus of my studies is on lingual vocalic coarticulation as measured in the horizontal position of the highest point of the tongue dorsum. Based on three studies on a) anticipatory coarticulation towards the left side of the utterance, b) carryover coarticulation towards the right side, and c) anticipatory coarticulatory extent in repeated versus read-aloud speech, I deduce the following main theses:
1. Maturing speech motor control is responsible for some developmental changes in coarticulation.
2. Coarticulation can be modeled as the coproduction of articulatory gestures.
3. The developmental change in coarticulation results from a decrease of vocalic activation width.
Beerdigen oder verbrennen? (Bury or Cremate?)
(2024)
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization, and trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases; therefore, plant-based diets rich in health-promoting plant secondary metabolites are desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated, and the influence of cultivation conditions was assessed with a view to improving the nutritional quality of halophytes for human consumption. To this end, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The species were selected according to their salt tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid content. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for the saline indoor farming of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and light regime in indoor farming and greenhouse cultivation was studied. It was shown that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls; glucosinolate and nitrate content were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. UVB light was shown to affect the growth, phenotype, and metabolite profile of halophytes, and the UVB response is species-specific. Furthermore, a modulation of the carotenoid content in S. europaea could be achieved to enhance its health-promoting properties and thus improve its nutritional quality. This effect was shown to be dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated; carotenoid accumulation was revealed to be related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes as alternative vegetables produced in a saline indoor farming system for future diets, which could contribute to ensuring food security. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into the system. Since the nutritional properties have been shown to be influenced by salt, light regime, and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
The urban heat island (UHI) effect, describing the elevated temperature of urban areas compared with their natural surroundings, can expose urban dwellers to additional heat stress, especially during hot summer days. A comprehensive understanding of UHI dynamics along with urbanization is of great importance for efficient heat stress mitigation strategies towards sustainable urban development. This is, however, still challenging due to the difficulty of isolating the influences of various contributing factors that interact with each other. In this work, I present a systematic and quantitative analysis of how urban intrinsic properties (e.g., urban size, density, and morphology) influence UHI intensity.
To this end, we combine urban growth modelling and urban climate simulation in a novel way to separate the influence of urban intrinsic factors from that of the background climate, so as to focus on the impact of urbanization on the UHI effect. The urban climate model creates a laboratory environment that makes it possible to conduct controlled experiments separating the influences of different driving factors, while the urban growth model provides detailed 3D structures that can then be parameterized into different urban development scenarios tailored for these experiments. The novelty of the methodology and experimental design leads to the following achievements of our work.
First, we develop a stochastic gravitational urban growth model that can generate 3D structures varying in size, morphology, compactness, and density gradient. We compare various characteristics, such as fractal dimensions (box-counting, area-perimeter scaling, area-population scaling, etc.) and radial gradient profiles of land use share and population density, against those of real-world cities from empirical studies. The model shows the capability of creating 3D structures resembling real-world cities and can generate 3D structure samples for controlled experiments assessing the influence of the urban intrinsic properties in question. [Chapter 2]
With the generated 3D structures, we run several series of simulations with urban structures varying in properties like size, density, and morphology, under the same weather conditions. Analyzing how the canopy-layer urban heat island (CUHI) intensity, based on 2-m air temperature, varies in response to changes in the considered urban factors, we find that the CUHI intensity of a city is directly related to the built-up density and to an amplifying effect that urban sites have on each other. We propose a Gravitational Urban Morphology (GUM) indicator to capture this neighbourhood warming effect. We build a regression model to estimate the CUHI intensity based on urban size, urban gross building volume, and the GUM indicator. Taking the Berlin area as an example, we show the regression model capable of predicting the CUHI intensity under various urban development scenarios. [Chapter 3]
Based on the multi-annual average summer surface urban heat island (SUHI) intensity derived from land surface temperature, we further study how urban intrinsic factors influence the SUHI effect of the 5,000 largest urban clusters in Europe. We find a similar 3D GUM indicator to be an effective predictor of the SUHI intensity of these European cities. Together with other urban factors (vegetation condition, elevation, water coverage), we build different multivariate linear regression models and a climate-space-based Geographically Weighted Regression (GWR) model that can better predict SUHI intensity. By investigating the roles background climate factors play in modulating the coefficients of the GWR model, we extend the multivariate linear model to a nonlinear one by integrating climate parameters such as the average daily maximum temperature and latitude. This makes it applicable across a range of background climates. The nonlinear model outperforms the linear models in SUHI assessment, as it captures the interaction of urban factors and the background climate. [Chapter 4]
Our work reiterates the essential roles of urban density and morphology in shaping the urban thermal environment. In contrast to many previous studies that link bigger cities with higher UHI intensity, we show that cities larger in area do not necessarily experience a stronger UHI effect. In addition, the results extend our knowledge by demonstrating the influence of urban 3D morphology on the UHI effect. This underlines the importance of inspecting cities as a whole from the 3D perspective. While urban 3D morphology is an aggregated feature of small-scale urban elements, its influence on city-scale UHI intensity cannot simply be scaled up from that of its neighbourhood-scale components. The spatial composition and configuration of urban elements both need to be captured when quantifying urban 3D morphology, as nearby neighbourhoods also influence each other. Our model serves as a useful UHI assessment tool for the quantitative comparison of urban intervention and development scenarios. It can support harnessing the capacity for UHI mitigation through optimizing urban morphology, with the potential of integrating climate change into heat mitigation strategies.
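The thesis's exact GUM formula is not reproduced in this abstract, but the generic gravity-style aggregation it alludes to, in which every pair of built-up cells contributes in proportion to their volumes and inversely to their distance, can be sketched as follows (the function name and the exact functional form are illustrative assumptions, not the published indicator):

```python
import numpy as np

def gravitational_morphology(volume, cell_size=1.0):
    """Illustrative gravity-style morphology indicator on a 2D grid of
    building volumes: accumulate V_i * V_j / d_ij over all cell pairs.
    (The thesis's actual GUM definition may differ.)"""
    ys, xs = np.nonzero(volume)
    v = volume[ys, xs].astype(float)
    total = 0.0
    for i in range(len(v)):
        dx = (xs - xs[i]) * cell_size
        dy = (ys - ys[i]) * cell_size
        d = np.hypot(dx, dy)
        mask = d > 0  # skip the cell's pairing with itself
        total += v[i] * np.sum(v[mask] / d[mask])
    return total / 2.0  # each pair was counted twice
```

For a fixed total building volume, such an indicator is larger for a compact layout than for a dispersed one, mirroring the neighbourhood warming effect described above in which nearby dense sites amplify each other.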
Starting from the observation that current digitalization research recognizes the ambivalence of digitalization but does not make it the object of its analyses, this cumulative dissertation focuses on the ambivalent dichotomy of potentials and problems that accompanies the digital transformation of organizations. Across six publications, and through a systems-theoretical view of organizations, this tension-laden dichotomy is demonstrated with respect to three ambivalent relationships. First, regarding the relationship between digitalization and post-bureaucracy, it becomes clear that digital transformations have the potential to facilitate post-bureaucratic ways of working. At the same time, the problem arises that consensus-based post-bureaucratic structures impede digitalization initiatives, since these depend on a multitude of decisions. Second, with regard to the ambivalent relationship between digitalization and networking, organization-wide cooperation is enabled on the one hand, while on the other hand the danger of digital communication of dissent emerges. In the third relationship, between digitalization and gender, new digital technologies suggest a potential for gender inclusion, while at the same time the problem of programmed-in gender biases arises, which often exacerbate discrimination. Juxtaposing the potentials and problems not only makes the ambivalence of organizational digitalization analyzable and comprehensible; it also turns out that digital transformations entail a double formalization: organizations are confronted not only with the adjustments to formal structures usual for reforms, but must additionally make formal decisions on the introduction and retention of technology and establish formal solutions in order to react to unforeseen potentials and problems.
The aim of the dissertation is to provide an analytically generalized heuristic with whose help the achievements and opportunities of digital transformations can be identified, while at the same time their relationship to the simultaneously emerging challenges and follow-up problems can be explained.
Nils-Hendrik Grohmann examines the still ongoing process of strengthening the UN human rights treaty bodies. He analyzes which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far aligned their working methods with one another. A further focus lies on the cooperation between the different committees and the question of what role the meeting of chairpersons can play in the strengthening process.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments, and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators seeking to increase user engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets, and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and the graph structure to predict user engagement. The proposed approach achieves an accuracy of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of a node's own features and an aggregation of the features of its neighborhood to classify social bot accounts. Our results indicate a 6% improvement in the area-under-the-curve score of the final predictions through the use of GNNs.
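The core idea, combining a node's own features with an aggregation of its neighbours' features, can be sketched as a single GraphSAGE-style layer. This is a minimal NumPy illustration with hypothetical weight matrices, not the thesis's actual architecture:

```python
import numpy as np

def sage_layer(x, adj, w_self, w_neigh):
    """One GraphSAGE-style layer (illustrative): each node's new embedding
    combines its own features with the mean of its neighbours' features,
    followed by a ReLU non-linearity."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # isolated nodes keep a zero neighbour mean
    neigh_mean = adj @ x / deg
    return np.maximum(x @ w_self + neigh_mean @ w_neigh, 0.0)
```

Stacking such layers and feeding the final embeddings to a classifier yields the node-level bot/human prediction; the neighbourhood aggregation is what lets the model exploit graph structure beyond per-account features.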
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulations, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study includes both a logistical-organizational effort and data analysis. As for the former, three GNSS ground stations were installed in remote locations in northwestern Argentina, where no third-party data were available, to acquire observations.
The methodological developments for the observation of the climate variables soil moisture and water vapour are independent of each other and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be employed operationally. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. Water-vapour monitoring with GNSS observations, on the other hand, is a well-established technique that is utilized operationally. Hence, the scope of this study is a meteorological analysis examining along-the-zenith air-moisture levels and introducing indices related to the azimuthal gradient.
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
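GNSS reflectometry exploits the interference between the direct and the ground-reflected signal: the detrended signal-to-noise ratio (SNR) of a satellite arc oscillates approximately as cos(4πh/λ · sin e + φ), where h is the antenna height above the reflecting surface, e the elevation angle, and λ the carrier wavelength. A minimal sketch (illustrative only, not the thesis's algorithm) recovers h from a synthetic noise-free SNR arc by least-squares projection onto the cosine/sine basis:

```python
import numpy as np

GPS_L1_WAVELENGTH = 0.1903  # metres

def estimate_reflector_height(sin_e, snr_detrended, h_grid):
    """Pick the reflector height h whose oscillation frequency
    4*pi*h/lambda (in the sin(e) domain) best fits the detrended SNR,
    by maximizing the power of the cosine/sine least-squares projection."""
    best_h, best_power = h_grid[0], -np.inf
    for h in h_grid:
        arg = 4.0 * np.pi * h / GPS_L1_WAVELENGTH * sin_e
        c, s = np.cos(arg), np.sin(arg)
        power = (snr_detrended @ c) ** 2 + (snr_detrended @ s) ** 2
        if power > best_power:
            best_h, best_power = h, power
    return best_h

# Synthetic check: simulate an arc for a 2.0 m reflector height
sin_e = np.linspace(0.1, 0.5, 500)
snr = np.cos(4.0 * np.pi * 2.0 / GPS_L1_WAVELENGTH * sin_e)
h_hat = estimate_reflector_height(sin_e, snr, np.arange(0.5, 5.0, 0.01))
```

In the GNSS-IR literature, once h is fixed, it is the fitted phase φ of the same oscillation that is commonly tracked as a near-surface soil-moisture proxy; this sketch only recovers the dominant reflector height.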
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their delicate mechanical stability. In order to improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was thus explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the copolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results of the amphiphilic terpolymers, and to further test the impact of hydrophobicity on both the antifouling properties and the mechanical stability of zwitterionic hydrogels, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yields via a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubility in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinylamides, is introduced within the scope of the present work. The monomers are synthesized in good yields via a multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers are obtained by free-radical polymerization and thoroughly characterized. Remarkably, the solubility tests showed that the homopolymers are fully soluble in water, evidence of their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced into the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker, 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6), was synthesized. However, despite its a priori promising suitability for copolymerization, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore its use in such systems. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. Using stream processing engines as an example, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations. This allows our prototype to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups, using Compute Express Link (CXL) as an example.
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
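The hybrid split between a fast volatile tier and a slower durable tier can be illustrated with a toy key-value store: a volatile in-memory index points into a durable append-only log, and the index is rebuilt by scanning the log after a restart. This is an illustrative sketch under stated assumptions, not Viper's actual design; it uses an ordinary file with `fsync` as a stand-in for PMem, and the class name `HybridKVStore` is invented for the example.

```python
import os
import tempfile

class HybridKVStore:
    """Toy hybrid store: a volatile in-memory index (the DRAM part)
    points into a durable append-only log (the PMem stand-in)."""

    def __init__(self, path):
        self.index = {}               # volatile: key -> (offset, length)
        self.log = open(path, "ab+")  # durable, append-only record log

    def put(self, key, value):
        record = f"{key}\t{value}\n".encode()
        offset = self.log.seek(0, os.SEEK_END)
        self.log.write(record)
        self.log.flush()
        os.fsync(self.log.fileno())   # durability point; real PMem code would
                                      # use cache-line flushes instead of fsync
        self.index[key] = (offset, len(record))

    def get(self, key):
        offset, length = self.index[key]  # fast lookup via the volatile index
        self.log.seek(offset)
        _, _, value = self.log.read(length).decode().rstrip("\n").partition("\t")
        return value

    def recover(self):
        """Rebuild the volatile index by scanning the durable log, as after a crash."""
        self.index.clear()
        self.log.seek(0)
        offset = 0
        for line in self.log:
            key, _, _ = line.decode().rstrip("\n").partition("\t")
            self.index[key] = (offset, len(line))  # later records win
            offset += len(line)

# Simulate a crash: only the volatile index is lost, the log survives.
path = os.path.join(tempfile.mkdtemp(), "kv.log")
kv = HybridKVStore(path)
kv.put("a", "1")
kv.put("a", "2")
kv.put("b", "3")
kv.index.clear()   # volatile state gone
kv.recover()
print(kv.get("a"), kv.get("b"))  # → 2 3, latest versions recovered
```

The recovery-by-scan step is what makes the volatile tier safe to lose: correctness depends only on the durable log, while the in-memory index exists purely for lookup speed.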
Körper – Karte – Text
(2024)
Rabelais' pentalogy about the giants Gargantua and Pantagruel reflects aspects of the changing world view of its time. Against the backdrop of the theory of the simulacrum, this study examines how the author employs writing, body modelling, and the cartographic imaginaire as strategies for veiling hidden messages. Using selected examples from the Quart Livre, it shows the softening of the boundaries between body, map, and text and their mutual interpenetration. The metaphoricity of the text reveals its self-reflexivity and produces an almost holistic reading experience. Finally, fiction in its illusory character, as a grotesquely sensual body and polysemantic map, advances to a model for explaining the world, one that must, however, first be deciphered.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to shape changes that allow the cell to adapt to its environment and to the movement and distribution of nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined: two genetic knockouts, an RNAi knockdown, and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance, and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, Aip1 was exchanged between plant and amoeba. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality both in the plant and in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that, in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem. DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
This extensive legal study deals with the Prussian church-state treaties of the Weimar Republic era. These treaties marked the culmination of a development toward greater freedom and independence of the churches from the state, a development that partly corresponded to and partly ran counter to events in the Reich and in other German states. The development followed no immutable ideal of the relationship between state and church but was always a pragmatic response to practical political problems. The church-state treaties themselves shaped subsequent developments in East and West up to the present day.
The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary Observatory Study of the Arctic (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid-scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited area mode (ICON-LAM) with a horizontal grid-scale of 3 km.
The aim of this thesis was the investigation of the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. The default settings in ICON-LEM prevent the model from representing the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The sea-ice scheme implemented in ICON does not include a snow layer on sea ice, which causes the sea-ice surface temperature to respond too slowly to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the sea-ice parameterization implemented in ICON was extended with an adapted heat-capacity term.
The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed due to biases in the downwelling long-wave radiation and the lack of complex surface structures, like leads. The large eddy resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux for different weather conditions.
The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent interactions at small temporal and spatial scales and help to further develop parameterizations, also for application in regional and global models.
This dissertation examines the lack of clarity in the scientific literature regarding gender and negotiation performance. It is often claimed that men negotiate better than women, yet it is simultaneously emphasized that results strongly depend on context. Using qualitative methods such as content analysis and a critical mixed-methods review, the research question "Are women truly inferior negotiators compared to men?" is addressed. The study comprises a descriptive and an interpretive part. The descriptive section illuminates various interpretations of gender-specific negotiation theory among citing authors, with 67% arguing for a general superiority of men. However, given the high variance in gender-specific differences, the focus should instead be on the context-dependency of negotiation performance. Generalized statements can be made within contexts, but not across them. In the interpretive section, several factors contributing to this misinterpretation are highlighted, including discrepancies in the definition of negotiation performance and distortions in research communication. From a scientific perspective, this study underscores the need for a nuanced sociological analysis and warns against the one-sided acceptance of inaccurate scientific interpretations. From a practical standpoint, it amplifies the voices of women affected by biased research paradigms. Overall, the dissertation clarifies the theory of gender-specific negotiation performance and advocates for the elimination of biases in scientific discourse.
With Arctic ground as a huge and temperature-sensitive carbon reservoir, maintaining low ground temperatures and frozen conditions to prevent further carbon emissions that contribute to global climate warming is a key element in humankind's fight to maintain habitable conditions on earth. Former studies showed that during the late Pleistocene, Arctic ground conditions were generally colder and more stable as the result of an ecosystem dominated by large herbivorous mammals and vast extents of graminoid vegetation – the mammoth steppe. Characterised by high plant productivity (grassland) and low ground insulation due to animal-caused compression and removal of snow, this ecosystem enabled deep permafrost aggradation. Now, with tundra and shrub vegetation common in the terrestrial Arctic, these effects are not in place anymore. However, it appears to be possible to recreate this ecosystem locally by artificially increasing animal numbers, and hence keep Arctic ground cold to reduce organic matter decomposition and carbon release into the atmosphere.
By measuring thaw depth, total organic carbon (TOC) and total nitrogen content, stable carbon isotope ratio, radiocarbon age, and n-alkane and alcohol characteristics, and by assessing dominant vegetation types along grazing-intensity transects in two contrasting Arctic areas, it was found that locally recreating conditions similar to the mammoth steppe seems to be possible. For permafrost-affected soil, it was shown that intensive grazing, in direct comparison to non-grazed areas, reduces active layer depth and leads to higher TOC contents in the active layer soil. For soil only frozen on top in winter, an increase of TOC with grazing intensity could not be found, most likely because of confounding factors such as vertical water and carbon movement, which is not possible above an impermeable permafrost layer. In both areas, high animal activity led to a vegetation transformation towards species-poor graminoid-dominated landscapes with fewer shrubs. Lipid biomarker analysis revealed that, even though the available organic material differs between the study areas, in both permafrost-affected and seasonally frozen soils the organic material at sites with high animal activity was less decomposed than under less intensive grazing pressure. In conclusion, high animal activity affects decomposition processes in Arctic soils and the ground thermal regime, visible from reduced active layer depth in permafrost areas. Therefore, grazing management might be utilised to locally stabilise permafrost and reduce Arctic carbon emissions in the future, but is likely not scalable to the entire permafrost region.
Arachidonic acid lipoxygenases (ALOX isoforms) are lipid-peroxidizing enzymes that are important for cell differentiation and the pathogenesis of various diseases. The human genome contains six functional ALOX genes, each present as a single-copy gene. For each human ALOX gene there is an orthologous mouse gene. Although the six human ALOX isoforms are structurally very similar, their functional properties differ markedly. In the present work, four different questions concerning the occurrence, the biological role, and the evolutionary dependence of the enzymatic properties of mammalian ALOX isoforms were investigated:
1) Tree shrews (Tupaiidae) are evolutionarily more closely related to humans than rodents and have therefore been proposed as alternative models for studying human diseases. In this work, the arachidonic acid metabolism of tree shrews was investigated for the first time. It was found that the genome of Tupaia belangeri contains four different ALOX15 genes and that the corresponding enzymes are similar in their catalytic properties. This genomic diversity, which exists neither in humans nor in mice, complicates functional studies of the biological role of the ALOX15 pathway. Tupaia belangeri therefore does not appear to be a more suitable animal model for studying the human ALOX15 pathway.
2) According to the evolutionary hypothesis, mammalian ALOX15 orthologs can be divided into arachidonic acid 12-lipoxygenating and arachidonic acid 15-lipoxygenating enzymes. Mammalian species more highly evolved than gibbons express arachidonic acid 15-lipoxygenating ALOX15 orthologs, whereas evolutionarily less developed mammals possess arachidonic acid 12-lipoxygenating enzymes. In this work, eleven new ALOX15 orthologs were expressed as recombinant proteins and functionally characterized. The results obtained fit the evolutionary hypothesis without contradiction and broaden its experimental basis. The experimental data also confirm the triad concept.
3) Since human and murine ALOX15B orthologs exhibit different functional properties, results from murine disease models on the biological role of ALOX15B cannot be transferred directly to humans. To functionally align the mouse and human ALOX15B orthologs, knock-in mice were generated in this work by in vivo mutagenesis using CRISPR/Cas9 technology. These mice express a humanized mutant (double mutation Tyr603Asp+His604Val) of murine Alox15b. The mice were viable and fertile but showed sex-specific differences from outcrossed wild-type control animals during their individual development.
4) Previous studies on the role of ALOX15B in the inflammatory response postulated an anti-inflammatory effect of the enzyme. The present work investigated whether humanization of murine Alox15b influences the inflammatory response in two different murine inflammation models. Humanization of murine Alox15b led to more pronounced inflammatory symptoms in the induced dextran sodium sulfate colitis model. In contrast, humanization of Alox15b attenuated the inflammatory symptoms in the Freund's adjuvant paw edema model. These data suggest that the role of ALOX15B differs between inflammation models.
Volcanic hydrothermal systems are an integral part of most volcanoes and typically involve a heat source, adequate fluid supply, and fracture or pore systems through which the fluids can circulate within the volcanic edifice. Associated with this are subtle but powerful processes that can significantly influence the evolution of volcanic activity or the stability of the near-surface volcanic system through mechanical weakening, permeability reduction, and sealing of the affected volcanic rock. These processes are well constrained for rock samples by laboratory analyses but are still difficult to extrapolate and evaluate at the scale of an entire volcano. Advances in unmanned aircraft systems (UAS), sensor technology, and photogrammetric processing routines now allow us to image volcanic surfaces at the centimeter scale and thus study volcanic hydrothermal systems in great detail. This thesis aims to explore the potential of UAS approaches for studying the structures, processes, and dynamics of volcanic hydrothermal systems but also to develop methodological approaches to uncover secondary information hidden in the data, capable of indicating spatiotemporal dynamics or potentially critical developments associated with hydrothermal alteration. To accomplish this, the thesis describes the investigation of two near-surface volcanic hydrothermal systems, the El Tatio geyser field in Chile and the fumarole field of La Fossa di Vulcano (Italy), both of which are among the best-studied sites of their kind. Through image analysis, statistical, and spatial analyses we have been able to provide the most detailed structural images of both study sites to date, with new insights into the driving forces of such systems but also revealing new potential controls, which are summarized in conceptual site-specific models. 
Furthermore, the thesis explores methodological remote-sensing approaches to detect, classify, and constrain hydrothermal alteration and surface degassing from UAS-derived data, evaluates them by mineralogical and chemical ground-truthing, and compares the alteration pattern with the present-day degassing activity. A significant contribution of the often-neglected diffuse degassing to the total amount of degassing is revealed, constraining secondary processes and dynamics associated with hydrothermal alteration that lead to potentially critical developments such as surface sealing. The results and methods used provide new approaches for alteration research, for the monitoring of degassing and alteration effects, and for thermal monitoring of fumarole fields, with the potential to be incorporated into volcano-monitoring routines.
Electricity production contributes to a significant share of greenhouse gas emissions in Europe and is thus an important driver of climate change. To fulfil the Paris Agreement, the European Union (EU) needs a rapid transition to a fully decarbonised power production system. Presumably, such a system will be largely based on renewables. So far, many EU countries have supported a shift towards renewables such as solar and wind power using support schemes, but the economic and political context is changing. Renewables are now cheaper than ever before and have become cost-competitive with conventional technologies. Therefore, European policymakers are striving to better integrate renewables into a competitive market and to increase the cost-effectiveness of the expansion of renewables. The first step was to replace previous fixed-price schemes with competitive auctions. In a second step, these auctions have become more technology-open. Finally, some governments may phase out any support for renewables and fully expose them to the competitive power market.
However, such policy changes may be at odds with the need to rapidly expand renewables and meet national targets, due to market characteristics and investors' risk perception. Without support, price risks are higher, and it may be difficult to meet an investor's income expectations. Furthermore, policy changes across different countries could have unexpected effects if power markets are interconnected and investors are able to shift their investments. Finally, in multi-technology auctions, individual technologies may dominate, which can be a risk for long-term power system reliability. Therefore, in my thesis, I explore the effects of phasing out support policies for renewables, of coordinating these phase-outs across countries, and of using multi-technology designs. I expand the public policy literature on investment behaviour and policy design as well as on policy change and coordination, and I further develop an agent-based model.
The main questions of my thesis are what the cost and deployment effects of gradually exposing renewables to market forces would be, and how coordination between countries affects investors' decisions and market prices. In my three contributions to the academic literature, I use different methods and come to the following results. In the first contribution, I use a conjoint analysis and market simulation to evaluate the effects of phasing out support or reintroducing feed-in tariffs from the perspective of investors. I find that a phase-out leads to investment shifts, either to other still-supported technologies or to other countries that continue to offer support. I conclude that the coordination of policy changes avoids such shifts. In the second contribution, I integrate the empirically derived preferences from the first contribution into an agent-based power system model of two countries to simulate the effects of ending auctions for renewables. I find that this slows the energy transition and that cross-border effects are relevant. Consequently, continued support is necessary to meet the national renewables targets. In the third contribution, I analyse the outcomes of past multi-technology auctions using descriptive statistics, regression analysis, and case-study comparisons. I find that the outcomes are skewed towards single technologies. This cannot be explained by individual design elements of the auctions, but rather results from context-specific and country-specific characteristics. Based on this, I discuss potential implications for long-term power system reliability.
The main conclusions of my thesis are that a complete phase-out of renewables support would slow down the energy transition and thus jeopardize climate targets, and that multi-technology auctions may pose a risk for some countries, especially those that cannot regulate an unbalanced power plant portfolio in the long term. If policymakers decide to continue supporting renewables, they may consider adopting technology-specific auctions to better steer their portfolio. In contrast, if policymakers still want to phase out support, they should coordinate these policy changes with other countries. Otherwise, overall transition costs can be higher, because investment decisions shift to still-supported but more expensive technologies.
Eskalation des Commitments in Wirtschaftsinformatik Projekten: eine kognitiv-affektive Perspektive
(2024)
Information systems (IS) projects are of central importance for steering corporate strategies and maintaining competitive advantages, yet they frequently exceed their budgets, overrun their schedules, and show a high failure rate. This dissertation addresses the psychological foundations of human behavior, in particular cognition and emotion, in the context of a widespread problem in IS project management: the tendency to persist with failing courses of action, known as escalation of commitment (EoC).
Using a combined research approach (a mix of qualitative and quantitative methods), this dissertation investigates the emotional and cognitive foundations of decision-making behind escalating commitment to failing IS projects and its development over time. The results of a psychophysiological laboratory experiment provide evidence on the predictions of cognitive dissonance theory versus coping theory regarding the role of negative and complex situational emotions, and contribute to a better understanding of how escalation tendencies change during sequential decision-making as a result of cognitive learning effects. Using psychophysiological measurements, including data triangulation between electrodermal and cardiovascular activity as well as AI-based analysis of facial micro-expressions, this research reveals physiological markers of escalating commitment. Complementing the experiment, a qualitative analysis of text-based reflections during escalation situations shows that decision-makers use different cognitive reasoning patterns to justify escalating behaviors, suggesting a sequence of four distinct cognitive phases.
By integrating qualitative and quantitative findings, this dissertation develops a comprehensive theoretical model of how cognition and emotion influence escalating commitment over time. I propose that escalating commitment is a cyclical adaptation of mental models, characterized by changes in cognitive reasoning patterns, variations in the temporal mode of cognition, and interactions with situational emotions and their anticipation. The main contribution of this work lies in disentangling the emotional and cognitive mechanisms that drive escalating commitment in the context of IS projects. The findings help to improve the quality of decisions under uncertainty and provide a basis for developing de-escalation strategies. Stakeholders in troubled IS projects should be aware of the tendency to persist with failing courses of action and of the importance of the underlying emotional and cognitive dynamics.
In the GDR, the judiciary was meant to serve the goals of politics and the construction and safeguarding of socialism. To realize these goals, the SED regime attempted in particular to influence the training of young lawyers. Drawing on original sources held in the Federal Archives, this study examines the requirements placed on legal studies in the GDR and the circumstances under which legal training took place. With particular attention to the selection, training, and continuing education of public prosecutors, the study illuminates the so-called »Kaderarbeit« (cadre work) of the GDR judiciary as well as the essential conditions of admission, examination, study, and continuing education. The evaluation of the surviving archival material leads to the conclusion that the education and training of GDR lawyers was shaped by planned and systematic political-ideological indoctrination and control in order to secure the goals of the socialist party.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. A common scale-out option is then a database cluster with a single primary node for transaction processing and a set of read-only replicas. With (naive) full replication, queries are distributed among nodes independently of the data they access. This approach is relatively expensive because every node must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP does not scale well, existing approaches (including those for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can forgo optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we are flexible to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements across various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
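To make the allocation problem concrete, the snippet below is a minimal, hypothetical illustration of the kind of greedy baseline the thesis compares its ILP-based models against: each query is assigned to the currently least-loaded replica, and each replica then stores only the fragments its queries touch. The fragment names, query costs, and two-node setup are invented for illustration and do not come from the thesis.

```python
# Hypothetical greedy baseline for the fragment-allocation problem (NOT the
# thesis' ILP model). Queries go to the least-loaded replica; replicas store
# only the fragments their assigned queries access.

def greedy_allocation(queries, n_nodes):
    """queries: list of (cost, frozenset_of_fragment_names)."""
    load = [0.0] * n_nodes
    frags = [set() for _ in range(n_nodes)]
    # Place expensive queries first to keep the load roughly balanced.
    for cost, accessed in sorted(queries, reverse=True):
        node = min(range(n_nodes), key=lambda n: load[n])
        load[node] += cost
        frags[node] |= accessed
    return load, frags

queries = [
    (8.0, frozenset({"lineitem", "orders"})),
    (5.0, frozenset({"customer", "orders"})),
    (4.0, frozenset({"lineitem"})),
    (3.0, frozenset({"part"})),
]
load, frags = greedy_allocation(queries, n_nodes=2)
total_stored = sum(len(f) for f in frags)
full_replication = 2 * len({f for _, accessed in queries for f in accessed})
print(total_stored, full_replication)  # 6 fragment copies vs. 8 under full replication
```

Even this naive partial allocation stores fewer fragment copies than full replication; the ILP-based models described above optimize this trade-off jointly with load balance, modification costs, and robustness, which a one-pass greedy pass cannot do.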
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. They must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources and must therefore manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain consistent production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job-shop scenarios and often overlooks potential synergies from combined resources. Furthermore, the analysis highlights the rare implementation of multi-agent systems and semi-heterarchical systems in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimisation complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research efforts and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
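As a rough, hypothetical sketch of the hyper-heuristic idea (not the thesis' actual agent, rules, or benchmark), the snippet below shows a minimal learning loop in which the agent does not make dispatch decisions itself but learns which low-level dispatching rule to apply. The two rules, the random job data, and the bandit-style average-reward learner are invented stand-ins.

```python
import random

# Toy hyper-heuristic sketch: an agent learns which dispatching rule
# (FIFO vs. shortest-processing-time) minimizes total flow time on a
# single machine. All data and the learner are illustrative inventions.

RULES = {
    "fifo": lambda jobs: list(jobs),   # first in, first out
    "spt": lambda jobs: sorted(jobs),  # shortest processing time first
}

def total_flow_time(order):
    """Sum of job completion times on a single machine for a given order."""
    t, total = 0, 0
    for p in order:
        t += p
        total += t
    return total

def run(episodes=2000, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {r: 0.0 for r in RULES}  # running average reward per rule
    n = {r: 0 for r in RULES}
    for _ in range(episodes):
        jobs = [rng.randint(1, 9) for _ in range(5)]
        explore = rng.random() < eps
        rule = rng.choice(list(RULES)) if explore else max(q, key=q.get)
        reward = -total_flow_time(RULES[rule](jobs))  # lower flow time is better
        n[rule] += 1
        q[rule] += (reward - q[rule]) / n[rule]  # incremental mean update
    return q

q = run()
print(max(q, key=q.get))  # the learner settles on the stronger rule
```

In a full deep-reinforcement-learning hyper-heuristic of the kind the thesis describes, the average-reward table would be replaced by a neural network conditioned on the production state, and the rule set would cover resource-specific heuristics rather than two single-machine rules.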
Diglossic translanguaging
(2024)
This book examines how German-speaking Jews living in Berlin make sense and make use of their multilingual repertoire. With a focus on lexical variation, the book demonstrates how speakers integrate Yiddish and Hebrew elements into German to index belonging and to position themselves within the Jewish community. Linguistic choices are shaped by language ideologies (e.g., authenticity, prescriptivism, nostalgia). Speakers translanguage when using their multilingual repertoire, but do so in a diglossic way, using elements from different languages for specific domains.
This work analyzed functional and regulatory aspects of the so far little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP was specifically located in non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, it was clearly shown through analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full deletion mutant, that loss of EPSINOID2 leads to an increase in root hair density. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p. Interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming it as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation: treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif associated with degradation through a ubiquitin/proteasome-dependent pathway was identified in the EPSINOID2 sequence. In line with tight dose regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
The Arctic is a hot spot of ongoing global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than the global average. This amplified warming of the Arctic and the associated rapid changes of its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions within the Arctic climate. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget in association with air mass advection, which impact the seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Even though the mechanisms described above, by which storms/cyclones impact Arctic sea ice, are known in principle, a statistical quantification of these effects is lacking. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. In order to further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled utilizing various data sets, including atmospheric and oceanic reanalysis data, a coupled model simulation, and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones significantly impact SIC in the Atlantic Arctic Ocean from autumn to spring, while there are mostly no significant impacts in summer. The strength and the sign (SIC-decreasing or SIC-increasing) of the cyclone impacts strongly depend on the considered daily time scale and the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter. A typical pattern consisting of an initial dominance of dynamic sea-ice changes followed by enhanced thermodynamic ice growth after the cyclone passage was found. This enhanced ice growth after the cyclone passage most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes are strongly varying from region to region and month to month. The strongest trends in cyclone impacts on SIC are found in autumn in the Barents and Kara Seas. Here, the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following the cyclone passage have particularly weakened in the Barents Sea in autumn. As a consequence, previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decrease in mean sea-ice concentration and thickness) and near-surface air temperature are most important for changed cyclone impacts on SIC, while changes in cyclone properties (i.e. intensity) do not play a significant role.
Aging is associated with bone loss, which can lead to osteoporosis and a high fracture risk. This coincides with the enhanced formation of bone marrow adipose tissue (BMAT), suggesting a negative effect of bone marrow adipocytes on skeletal health. Increased BMAT formation is also observed in pathologies such as obesity, type 2 diabetes, and osteoporosis. However, a subset of bone marrow adipocytes forming the constitutive BMAT (cBMAT) arises early in life in the distal skeleton, contains high levels of unsaturated fatty acids, and is thought to provide a physiological function. Regulated BMAT (rBMAT) forms during aging and obesity in proximal regions of the bone and contains a large proportion of saturated fatty acids. Paradoxically, BMAT accumulation is also enhanced during caloric restriction (CR), a life-span-extending dietary intervention. This indicates that different types of BMAT can form in response to opposing nutritional stimuli, with potentially different functions.
To investigate this, two types of nutritional interventions, CR and high-fat diet (HFD), both described to induce BMAT accumulation, were carried out. CR markedly increased BMAT formation in the proximal tibia and led to a higher proportion of unsaturated fatty acids, making it similar to the physiological cBMAT. Additionally, proximal and diaphyseal tibia regions displayed higher adiponectin expression. In aged mice, CR was associated with an improved trabecular bone structure. Taken together, these findings demonstrate that the type of BMAT that forms during CR might provide beneficial effects for local bone stem/progenitor cells and metabolic health. The HFD intervention performed in this thesis showed no effect on BMAT accumulation and bone microstructure. RNA-Seq analysis revealed alterations in the composition of the collagen-containing extracellular matrix (ECM).
In order to investigate the effects of glucose homeostasis on osteogenesis, the differentiation capacity of immortalized multipotent mesenchymal stromal cells (MSCs) and osteochondrogenic progenitor cells (OPCs) was analyzed. Insulin improved differentiation in both cell types; however, its combination with a high glucose concentration led to impaired mineralization of the ECM. In the MSCs, this was accompanied by the formation of adipocytes, indicating negative effects of the adipocytes formed under hyperglycemic conditions on mineralization processes. However, the altered mineralization pattern and structure of the ECM were also observed in OPCs, which did not form any adipocytes, suggesting further negative effects of a hyperglycemic environment on osteogenic differentiation.
In summary, the work provided in this thesis demonstrated that differentiation commitment of bone-resident stem cells can be altered through nutrient availability, specifically glucose. Surprisingly, both high nutrient supply, e.g. the hyperglycemic cell culture conditions, and low nutrient supply, e.g. CR, can induce adipogenic differentiation. However, while CR-induced adipocyte formation was associated with improved trabecular bone structure, adipocyte formation in a hyperglycemic cell-culture environment hampered mineralization. This thesis provides further evidence for the existence of different types of BMAT with specific functions.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce emissions of CO2, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) Cost: still relatively high, owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode, which hinders widespread availability, especially in low-income countries. 2) Efficiency: theoretically around 29% should be attainable; however, the efficiency of most commercially available silicon-based solar cells ranges from 18 to 22%. 3) Temperature sensitivity: efficiency decreases as temperature increases, affecting output. 4) Resource constraints: silicon as a raw material is not available in all countries, creating supply-chain challenges.
Perovskite solar cells emerged in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, they attracted the attention of the solar cell community and represented a hope for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity might be the stumbling block hindering perovskite solar cells' market reach. Lead is a heavy, bioavailable element that makes perovskite solar cells an environmentally unfriendly technology. As a result, scientists are trying to replace lead with a more environmentally friendly element. Among several possible alternatives, tin has proven the most suitable element due to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, making them candidates to deliver the highest possible efficiency of a single-junction solar cell, around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is irreproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) Oxidation of tin(II) to tin(IV), which occurs due to oxygen, water, or even the solvent itself, as was discovered recently. 2) Fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and increases the crystallization pace. 3) Energy band misalignment: the energy bands at the interfaces between the perovskite absorber material and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which devastates the photovoltaic performance. To solve these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new, chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskites. The search for a stable solvent might prove to be the decisive factor for the stability of tin-based perovskites. We started with a database of over 2,000 solvents and narrowed it down to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by investigating 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that it is possible to manufacture solar cells using a novel solvent system that outperform those produced using DMSO. The results of our research offer guidance for the search for novel solvents or solvent mixtures for the manufacture of stable tin-based perovskites.
The precise control of perovskite precursor crystallization inside a thin film is of utmost importance for optimizing the efficiency and manufacturability of solar cells. Depositing tin-based perovskite films from solution is difficult owing to the quick crystallization of tin compared to the more commonly employed lead perovskites. The established approach for attaining high efficiencies uses dimethyl sulfoxide (DMSO) as the medium for depositing the perovskite: this solvent impedes the fast aggregation of the tin-iodine network that governs perovskite formation. Nevertheless, the methodology is limited, since DMSO oxidizes tin during the processing stage. In this thesis, we present a potentially advantageous alternative wherein 4-(tert-butyl)pyridine substitutes for DMSO in regulating crystallization while avoiding the undesired consequence of tin oxidation. Perovskite films formed using pyridine have a notably reduced defect density, resulting in higher charge mobility and improved photovoltaic performance. Consequently, the utilization of pyridine for the deposition of tin perovskite films is considered advantageous.
Tin perovskites suffer from an apparent energy band misalignment. However, the band diagrams published in the current body of research display contradictions, resulting in a lack of consensus. Moreover, comprehensive information about the dynamics of charge extraction is lacking. This thesis aims to ascertain the energy band positions of tin perovskites by employing Kelvin probe (KP) and photoelectron yield spectroscopy methods, and to construct a precise band diagram for the often-utilized device stack. Moreover, a comprehensive analysis is performed to assess the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was utilized to investigate the charge extraction kinetics of frequently studied charge transport layers, such as NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, KP, and TRPL approaches were used to accurately ascertain the p-doping concentration in FASnI3; the results consistently yielded a value of 1.5 × 10^17 cm^-3. Our research findings highlight the need to design the charge extraction layers for tin halide perovskites independently of those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on the utilization of two solvents. The first dissolves the perovskite powder to form the precursor solution and is usually called the solvent. The second precipitates the perovskite precursor, forming the wet film: a supersaturated solution of the perovskite precursor in the remains of the solvent and the antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our research context, we proposed new solvents to dissolve FASnI3, but when we tried to form a film, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which we screened around 73 selected antisolvents against 15 solvents that can form a 1 M FASnI3 solution. For the first time for tin perovskites, we used a machine learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent. We relied on film darkness as the primary criterion to judge the efficacy of a solvent-antisolvent pair and found that the relative polarity between solvent and antisolvent is the primary factor affecting the solvent-antisolvent interaction. Based on our findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest for a DMSO-free tin perovskite device so far.
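To illustrate the screening idea, the toy snippet below learns a single polarity-difference cutoff that separates "dark" (well-crystallized) films from failed ones. All polarity values and film labels are invented, and the one-feature threshold model is a deliberately minimal stand-in for the machine learning approach used in the thesis.

```python
# Toy illustration with INVENTED data (not the thesis' measurements): learn a
# polarity-difference cutoff that predicts whether a solvent/antisolvent pair
# yields a dark, well-crystallized film, mirroring the role of relative
# polarity described above.

# (solvent_polarity, antisolvent_polarity, film_is_dark)
pairs = [
    (0.44, 0.10, True), (0.44, 0.40, False),
    (0.36, 0.01, True), (0.36, 0.35, False),
    (0.65, 0.20, True), (0.65, 0.60, False),
]

def best_threshold(data):
    """Pick the polarity-difference cutoff that best separates the labels."""
    candidates = sorted(abs(s - a) for s, a, _ in data)
    best_t, best_acc = candidates[0], 0.0
    for t in candidates:
        acc = sum((abs(s - a) >= t) == dark for s, a, dark in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = best_threshold(pairs)
print(round(t, 2), acc)  # a large polarity gap predicts good crystallization
```

With this invented data, the learned cutoff separates the two classes perfectly, echoing the finding that the relative polarity between solvent and antisolvent is the dominant factor in the solvent-antisolvent interaction.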
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga C. reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, it is still inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other expression hosts. Other microalgal species might be better suited for high-level protein expression but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are episomally maintained as autonomously replicating plasmids in the nucleus at high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Having established efficient generation of transgenic lines, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of recombinant proteins into the culture medium, simplifying their purification and harvest. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum but found only a minor increase in YFP accumulation.
We applied these findings to express complex viral antigens from the hepatitis B and hepatitis C viruses in P. purpureum, demonstrating its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and reached their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully scaled up the biomass production of transgenic lines and thereby provided enough material for immunization trials in mice, which were performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens; moreover, the algal-produced antigens elicited a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a promising new production organism for biopharmaceuticals in the microalgal field.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of a few, but highly abundant, prokaryotes, many of which appear to be endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured in both brown mosses and Sphagnum mosses, supporting the idea that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities; in contrast, the structures of the archaeal communities were similar among the investigated bryophytes. It was shown for the first time that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes, independent of geography and peatland type.
The present thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
The origin and structure of magnetic fields in the Galaxy are largely unknown. What is known is that they are essential for several astrophysical processes, in particular the propagation of cosmic rays. Our ability to describe the propagation of cosmic rays through the Galaxy is severely limited by the lack of observational data needed to probe the structure of the Galactic magnetic field on many different length scales. This is particularly true for modelling the propagation of cosmic rays into the Galactic halo, where our knowledge of the magnetic field is particularly poor.
In the last decade, observations of the Galactic halo in different frequency regimes have revealed the existence of out-of-plane bubble emission. In gamma rays, these bubbles have been termed the Fermi bubbles, with a radial extent of ≈ 3 kpc and an azimuthal height of ≈ 6 kpc. The radio counterparts of the Fermi bubbles were seen by both the S-PASS survey and the Planck satellite and showed a clear spatial overlap. The X-ray counterparts were named the eROSITA bubbles after the eROSITA satellite, with a radial width of ≈ 7 kpc and an azimuthal height of ≈ 14 kpc. Taken together, these observations suggest the presence of large, extended Galactic halo bubbles (GHBs) and have stimulated interest in the comparatively unexplored Galactic halo.
In this thesis, a new toy model (the GHB model) for the magnetic field and the non-thermal electron distribution in the Galactic halo is proposed. The toy model was used to produce polarised synchrotron emission sky maps. A chi-square analysis was used to compare the synthetic sky maps with the Planck 30 GHz polarised sky maps. The constraints obtained on the field strength and azimuthal height were found to be in agreement with the S-PASS radio observations.
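The parameter fit described above can be illustrated with a minimal sketch: a pixel-wise chi-square misfit minimized over a one-parameter grid. Everything below is invented for illustration; the random "observed" map and the single field-strength parameter stand in for the actual Planck maps and GHB model parameters:

```python
import numpy as np

def chi_square(synthetic, observed, sigma):
    """Pixel-wise chi-square misfit between a synthetic and an observed sky map."""
    return np.sum(((synthetic - observed) / sigma) ** 2)

rng = np.random.default_rng(0)
observed = rng.normal(1.0, 0.1, size=(32, 64))  # stand-in for a Planck 30 GHz map
sigma = np.full_like(observed, 0.1)             # per-pixel uncertainty

# Grid scan over a single hypothetical model parameter B0 (field strength).
best = None
for B0 in np.linspace(0.5, 1.5, 11):
    synthetic = np.full_like(observed, B0)      # stand-in for a simulated sky map
    c2 = chi_square(synthetic, observed, sigma)
    if best is None or c2 < best[0]:
        best = (c2, B0)

print(round(best[1], 1))  # the grid value closest to the map mean, here 1.0
```

In the thesis itself the scan would run over several parameters (e.g. strength and azimuthal height), and the upper/lower bounds quoted below correspond to the chi-square confidence region around this minimum.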
The upper, lower, and best-fit values obtained from this chi-square analysis were used to generate three separate toy models, through which ultra-high-energy cosmic rays were propagated. This study was carried out for two potential sources, Centaurus A and NGC 253, to produce magnification maps and arrival-direction sky maps. The simulated arrival-direction sky maps were found to be consistent with the hotspots of Centaurus A and NGC 253 seen in the observed arrival-direction sky maps provided by the Pierre Auger Observatory (PAO).
The turbulent magnetic field component of the GHB model was also used to investigate the extragalactic dipole suppression seen by the PAO. UHECRs with an extragalactic dipole were forward-tracked through the turbulent GHB model at different field strengths. The suppression of the dipole caused by the varying diffusion coefficient was quantified from the simulations, and the results could be compared with an analytical analogy from electrostatics. The simulations of the extragalactic dipole suppression were in agreement with similar studies carried out for galactic cosmic rays.
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to generate detectable GeV-scale cascade emission, which is missing in observations. The suppression of this secondary cascade implies either deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam through the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to deflect the pair beams significantly, reducing the flux of the secondary cascade below the observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam losing energy to the instability before inverse-Compton cooling sets in. This dissertation comprises two studies on the role of the instability in the evolution of blazar-induced beams.
Firstly, we investigated the effect of a sub-fG IGMF on the beam energy loss caused by the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For an intermediate-strength IGMF, we know of no viable process that could explain the observed absence of GeV-scale cascade emission, and hence this range of field strengths can be excluded.
Secondly, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angle significantly without any significant energy loss, confirming a recent feedback study based on a simplified one-dimensional beam distribution. However, the narrowing diffusion feedback on beam particles with Lorentz factors below 10^6 might become relevant, even though it is initially negligible. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This new intrinsic scattering of the cascade can result in time delays of around ten years, thus potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse-Compton cooling and simulating the beam-plasma system at different points in the IGM.
Mindful Eating
(2024)
Maladaptive eating behaviors such as emotional eating, external eating, and loss-of-control eating are widespread in the general population. Moreover, they are associated with adverse health outcomes and well known for their role in the development and maintenance of eating disorders and obesity (i.e., eating and weight disorders). Eating and weight disorders impose a considerable burden on individuals as well as high costs on society in general. At the same time, corresponding treatments yield poor outcomes. Thus, innovative concepts are needed to improve the prevention and treatment of these conditions.
The Buddhist concept of mindfulness (i.e., paying attention to the present moment without judgement) and its delivery via mindfulness-based intervention programs (MBPs) have gained wide popularity in the area of maladaptive eating behaviors and associated eating and weight disorders over the last two decades. Though previous findings on their effects seem promising, the current assessment of mindfulness and its application solely via multi-component MBPs make it difficult to draw conclusions on the extent to which mindfulness-immanent qualities actually account for the effects (e.g., the modification of maladaptive eating behaviors). However, this knowledge is pivotal for interpreting previous effects correctly and for avoiding harm in particularly vulnerable groups such as those with eating and weight disorders.
To address these shortcomings, recent research has focused on the context-specific approach of mindful eating (ME) to investigate underlying mechanisms of action. ME can be considered a subdomain of generic mindfulness, describing it specifically in relation to the process of eating and associated feelings, thoughts, and motives, and thus including a variety of different attitudes and behaviors. However, there is no universal operationalization, and the current assessment of ME suffers from several limitations. Specifically, current measurement instruments are not suited for a comprehensive assessment of the multiple facets of the construct currently discussed as important in the literature. This in turn hampers comparisons of different ME facets that would allow their particular effects on maladaptive eating behaviors to be evaluated. This knowledge is needed to tailor the prevention and treatment of associated eating and weight disorders properly and to explore potential underlying mechanisms of action, which have so far been proposed mainly on theoretical grounds.
The dissertation at hand aims to provide evidence-based fundamental research that contributes to our understanding of how mindfulness, more specifically its context-specific form of ME, impacts maladaptive eating behaviors and, consequently, how it could be used appropriately to enrich the current prevention and treatment approaches for eating and weight disorders in the future.
Specifically, in this thesis, three scientific manuscripts applying several qualitative and quantitative techniques in four sequential studies are presented. These manuscripts were published in or submitted to three scientific peer-reviewed journals to shed light on the following questions:
I. How can ME be measured comprehensively and in a reliable and valid way to advance the understanding of how mindfulness works in the context of eating?
II. Does the context-specific construct of ME have an advantage over the generic concept in advancing the understanding of how mindfulness is related to maladaptive eating behaviors?
III. Which ME facets are particularly useful in explaining maladaptive eating behaviors?
IV. Does training a particular ME facet result in changes in maladaptive eating behaviors?
To answer the first research question (Paper 1), a multi-method approach using three subsequent studies was applied to develop and validate a comprehensive self-report instrument to assess the multidimensional construct of ME: the Mindful Eating Inventory (MEI). Study 1 aimed to create an initial version of the MEI following a three-step approach: First, a comprehensive item pool was compiled by including selected and adapted items from the existing ME questionnaires and supplementing them with items derived from an extensive literature review. Second, the preliminary item pool was complemented and checked for content validity by experts in the field of eating behavior and/or mindfulness (N = 15). Third, the item pool was further refined through qualitative methods: Three focus groups comprising laypersons (N = 16) served as a check of applicability. Subsequently, think-aloud protocols (N = 10) served as a final check of comprehensibility and helped eliminate ambiguities.
The resulting initial MEI version was tested in Study 2 in an online convenience sample (N = 828) to explore its factor structure using exploratory factor analysis (EFA). The results were used to shorten the questionnaire in accordance with qualitative and quantitative criteria, yielding the final MEI version, which encompasses 30 items. These items were assigned to seven ME facets: (1) ‘Accepting and Non-attached Attitude towards one’s own eating experience’ (ANA), (2) ‘Awareness of Senses while Eating’ (ASE), (3) ‘Eating in Response to awareness of Fullness’ (ERF), (4) ‘Awareness of eating Triggers and Motives’ (ATM), (5) ‘Interconnectedness’ (CON), (6) ‘Non-Reactive Stance’ (NRS), and (7) ‘Focused Attention on Eating’ (FAE).
Study 3 sought to confirm the identified facets and the corresponding factor structure in an independent online convenience sample (N = 612) using confirmatory factor analysis (CFA). The study provided further evidence of the assumed multidimensionality of ME (the correlated seven-factor model was shown to be superior to a single-factor model). The psychometric properties of the MEI, regarding factorial validity, internal consistency, retest reliability, and observed criterion validity across a wide range of eating-specific and general health-related outcomes, showed the inventory to be suitable for a comprehensive, reliable, and valid assessment of ME. These findings were complemented by demonstrating measurement invariance of the MEI regarding gender. In accordance with the factor structure of the MEI, Paper 1 offers an empirically derived definition of ME, overcoming the ambiguities and problems of previous attempts at defining the construct.
To answer the second and third research questions (Paper 2), a subsample of Study 2 from the MEI validation studies (N = 292) was analyzed. Incremental validity of ME beyond generic mindfulness was demonstrated using hierarchical regression models with the outcome variables of maladaptive eating behaviors (emotional eating and uncontrolled eating) and nutrition behaviors (consumption of energy-dense food). Multiple regression analyses were applied to investigate the impact of the seven ME facets (identified in Paper 1) on the same outcome variables. The following ME facets contributed significantly to explaining variance in maladaptive eating and nutrition behaviors: Accepting and Non-attached Attitude towards one’s own eating experience (ANA), Eating in Response to awareness of Fullness (ERF), Awareness of eating Triggers and Motives (ATM), and a Non-Reactive Stance (NRS, i.e., an observing, non-impulsive attitude towards eating triggers). The results suggest that these ME facets are promising variables to consider when a) investigating potential underlying mechanisms of mindfulness and MBPs in the context of eating and b) addressing maladaptive eating behaviors in general as well as in the prevention and treatment of eating and weight disorders.
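The incremental-validity logic of such a hierarchical regression can be sketched in a few lines: fit a first block with the generic predictor only, then a second block adding the facet scores, and compare the variance explained. All data and coefficients below are simulated and purely illustrative of the method, not of the actual Paper 2 results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 292  # sample size matching the subsample described above

# Simulated predictors: one generic mindfulness score, two hypothetical ME facet scores.
generic = rng.normal(0, 1, n)
facets = rng.normal(0, 1, (n, 2))
# Simulated outcome (e.g., emotional eating) with invented coefficients,
# constructed so that the facets matter beyond generic mindfulness.
y = -0.2 * generic - 0.5 * facets[:, 0] - 0.3 * facets[:, 1] + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(generic.reshape(-1, 1), y)              # block 1: generic mindfulness
r2_step2 = r_squared(np.column_stack([generic, facets]), y)  # block 2: + ME facets
print(round(r2_step2 - r2_step1, 2))  # incremental variance explained by the facets
```

A positive, non-trivial increment in R^2 in block 2 is what "incremental validity of ME beyond generic mindfulness" means operationally; significance would then be tested with an F-test on the R^2 change.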
To answer the fourth research question (Paper 3), a training based on an isolated exercise (‘9 Hunger’) targeting the previously identified ME facet ATM was designed to explore its particular association with changes in maladaptive eating behaviors and thus to preliminarily explore one possible mechanism of action. The online study used a randomized controlled trial (RCT) design. Latent change scores (LCS) across three measurement points (before the training, directly after the training, and three months later) were compared between the intervention group (n = 211) and a waitlist control group (n = 188). Short- and longer-term effects of the training were shown on maladaptive eating behaviors (emotional eating, external eating, loss-of-control eating) and associated outcomes (intuitive eating, ME, self-compassion, well-being). The findings serve as preliminary empirical evidence that MBPs might influence maladaptive eating behaviors through an enhanced non-judgmental awareness of, and differentiation between, eating motives and triggers (i.e., ATM). This mechanism of action had previously only been hypothesized on theoretical grounds. Since maladaptive eating behaviors are associated with eating and weight disorders, the findings can enhance our understanding of the general effects of MBPs on these conditions.
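In simplified form, the group comparison behind such an RCT analysis can be illustrated as below. All numbers are invented, and observed pre-post difference scores stand in for the latent change scores, which would additionally model measurement error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre/post scores on a maladaptive-eating scale (values invented).
pre_int = rng.normal(3.0, 0.5, 211)                    # intervention group, n = 211
post_int = pre_int - rng.normal(0.4, 0.2, 211)         # simulated training effect
pre_ctrl = rng.normal(3.0, 0.5, 188)                   # waitlist control, n = 188
post_ctrl = pre_ctrl - rng.normal(0.0, 0.2, 188)       # no systematic change

# Observed change scores (a stand-in for latent change scores).
change_int = post_int - pre_int
change_ctrl = post_ctrl - pre_ctrl

# Welch's t-statistic for the group difference in change.
diff = change_int.mean() - change_ctrl.mean()
se = np.sqrt(change_int.var(ddof=1) / 211 + change_ctrl.var(ddof=1) / 188)
print(round(diff / se, 1))
```

A strongly negative statistic here indicates a larger reduction in the intervention group; the thesis's LCS models additionally separate true change from measurement error across the three measurement points.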
The integration of these findings leads to several suggestions as to how ME might enrich future interventions on maladaptive eating behaviors, whether to improve health in general or the prevention and treatment of eating and weight disorders in particular. Strengths of the thesis (e.g., the deliberately chosen methodology, the variety of designs and methods, the high number of participants) are emphasized. The main limitations, particularly regarding sample characteristics (e.g., higher level of formal education, fewer males, self-selection), are discussed to arrive at an outline for future studies (e.g., including multi-modal, multi-method approaches, clinical eating disorder samples, and youth samples) to improve upcoming research on ME and on the underlying mechanisms of action of MBPs for maladaptive eating behaviors and associated eating and weight disorders.
This thesis enriches current research on mindfulness in the context of eating by providing fundamental research on the core of the ME construct. It thereby delivers a reliable and valid instrument for comprehensively assessing ME in future studies, as well as an operational definition of the construct. Findings at the ME facet level might inform upcoming research and practice on how to address maladaptive eating behaviors appropriately in interventions. The ME skill ‘Awareness of eating Triggers and Motives’ (ATM), as one particular mechanism of action, should be further investigated in representative community and specific clinical samples to examine the validity of the results in these groups and to justify an application of the concept to the general population as well as to subgroups with eating and weight disorders in particular.
In conclusion, the findings of the current thesis can be used to set future research on mindfulness, more specifically ME, and its underlying mechanisms in the context of eating on a more evidence-based footing. This knowledge can inform upcoming prevention and treatment efforts to tailor MBPs to maladaptive eating behaviors and associated eating and weight disorders appropriately.
Cross-sectional associations of dietary biomarker patterns with health and nutritional status
(2024)
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths, over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to drive the coevolution more actively. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among the younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong-motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporating this phenomenon, which causes strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which an artificial neural network efficiently approximates the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy that targets generalizability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability of an earthquake to cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
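The probabilistic impact model described above maps statistics of citizen-reported pseudo-intensities to a probability of considerable societal impact. A minimal sketch of that idea is a logistic model fit by gradient descent; the features, coefficients, and data below are all invented for illustration and do not reproduce the actual model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-event features: mean pseudo-intensity and log report count.
n = 500
X = np.column_stack([rng.normal(4, 1.5, n), rng.uniform(0, 3, n)])

# Simulated ground truth: impact probability rises with both features (invented).
true_w, true_b = np.array([1.2, 0.8]), -6.0
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w + true_b)))).astype(float)

# Fit a logistic model P(impact | reports) by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.01 * X.T @ (p - y) / n
    b -= 0.01 * np.mean(p - y)

# Separation between predicted probabilities for impactful vs. non-impactful events.
p = 1 / (1 + np.exp(-(X @ w + b)))
print(round(p[y == 1].mean() - p[y == 0].mean(), 2))
```

In operation, such a model could be evaluated within minutes of an event, as soon as enough pseudo-intensity reports have accumulated to compute the input features.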
The application of machine learning methods to datasets that only partially exhibit the characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the approaches developed here to growing and increasingly complex datasets.
Heat stress (HS) is a major abiotic stress that negatively affects plant growth and productivity. However, plants have developed various adaptive mechanisms to cope with HS, including the acquisition and maintenance of thermotolerance, which allows them to respond more effectively to subsequent stress episodes. HS memory includes type II transcriptional memory which is characterized by enhanced re-induction of a subset of HS memory genes upon recurrent HS. In this study, new regulators of HS memory in A. thaliana were identified through the characterization of rein mutants.
The rein1 mutant carries a premature stop codon in CYCLIN-DEPENDENT KINASE 8 (CDK8), which is part of the cyclin kinase module of the Mediator complex. rein1 seedlings show impaired type II transcriptional memory of multiple heat-responsive genes upon re-exposure to HS. Additionally, the mutants exhibit a significant deficiency in HS memory at the physiological level. Interaction studies conducted in this work indicate that CDK8 associates with the memory HEAT SHOCK FACTORs HSFA2 and HSFA3. The results suggest that CDK8 plays a crucial role in HS memory in plants together with the memory HSFs, which may be potential targets of the CDK8 kinase function. Understanding the role and interaction network of the Mediator complex during HS-induced transcriptional memory will be an exciting aspect of future HS memory research.
The second characterized mutant, rein2, was selected based on its strongly impaired pAPX2::LUC re-induction phenotype. Gene expression analyses revealed additional defects in the initial induction of HS memory genes. In line with this observation, basal thermotolerance was impaired similarly to HS memory at the physiological level in rein2. Sequencing of backcrossed bulk segregants with subsequent fine mapping narrowed the location of REIN2 to a 1 Mb region on chromosome 1. This interval contains the At1g65440 gene, which encodes the histone chaperone SPT6L. SPT6L interacts with chromatin remodelers and bridges them to the transcription machinery to regulate nucleosome and Pol II occupancy around the transcriptional start site. The EMS-induced missense mutation in SPT6L may cause the altered HS-induced gene expression in rein2, possibly triggered by changes in the chromatin environment resulting from altered histone chaperone function.
Expanding research on screen-derived factors that modify type II transcriptional memory has the potential to enhance our understanding of HS memory in plants. Discovering connections between previously identified memory factors will help to elucidate the underlying network of HS memory. This knowledge can initiate new approaches to improve heat resilience in crops.
Das Eigene und das Fremde
(2023)
This thesis investigates teachers' understanding of others (Fremdverstehen) in mathematics classrooms. Following the sociologist Alfred Schütz, 'Fremdverstehen' denotes the process by which a teacher attempts to understand a student's behavior by tracing that behavior back to an experience that may have underlain it. As an essential feature of this process, Schütz's theory of Fremdverstehen highlights that a person's understanding of others is always also based on his or her own experiences. For this reason, the thesis proceeds in two methodological steps: first, the mathematics-related experiences of two teachers are traced, and then their Fremdverstehen in concrete situations in mathematics lessons is reconstructed. In the first sub-study (the reconstruction of the investigated teachers' own experiences), data are collected by means of biographical-narrative interviews in which the teachers are encouraged to tell their mathematics-related life stories. These interviews are analyzed following the method of reconstructive case analysis. Overall, the first sub-study yields textual accounts of the reconstructed mathematics-related life stories of the investigated mathematics teachers. In the second sub-study (the reconstruction of the teachers' Fremdverstehen), narrative interviews are conducted in which the teachers recount their Fremdverstehen in concrete situations in mathematics lessons. These interviews are analyzed using a three-step analytical procedure that the author developed specifically for the purpose of reconstructing Fremdverstehen.
Am Ende dieser zweiten Teiluntersuchung werden sowohl das rekonstruierte Fremdverstehen der Lehrkräfte in verschiedenen Unterrichtssituationen dargestellt als auch Strukturen, die sich in ihrem Fremdverstehen abzeichnen. Mit Hilfe einer theoretischen Verallgemeinerung werden schließlich – auf Basis der Ergebnisse der zweiten Teiluntersuchung – Aussagen über fünf Merkmale des Fremdverstehens von Lehrkräften im Mathematikunterricht im Allgemeinen gewonnen. Mit diesen Aussagen vermag die Arbeit eine erste Beschreibung davon hervorzubringen, wie sich das Phänomen des Fremdverstehens von Lehrkräften im Mathematikunterricht ausgestalten kann.
The African weakly electric fish genus Campylomormyrus includes 15 described species, mostly native to the Congo River and its tributaries. They are considered sympatric species because their distribution areas overlap. These species generate species-specific electric organ discharges (EODs) varying in waveform characteristics, including duration, polarity, and phase number. They also exhibit pronounced divergence in snout morphology, i.e., in length, thickness, and curvature. The diversification of these two phenotypic traits (EOD and snout) has been proposed as a key factor promoting adaptive radiation in Campylomormyrus. The role of EODs as a pre-zygotic isolation mechanism driving sympatric speciation by promoting assortative mating has been examined using behavioral, genetic, and histological approaches. However, the evolutionary effects of snout morphology and its link to species divergence have not been closely examined. Hence, the main objective of this study is to investigate the diversification of snout morphology and the correlated EOD in order to better understand sympatric speciation and its evolutionary drivers in this genus. Moreover, I aim to utilize intragenus and intergenus hybrids of Campylomormyrus to better understand trait divergence as well as the underlying molecular/genetic mechanisms involved in the radiation scenario. To this end, I utilized three different approaches: feeding behavior analysis, diet assessment, and geometric morphometrics. I performed feeding behavior experiments to evaluate the concept of phenotype-environment correlation by testing whether Campylomormyrus species show substrate preferences. The behavioral experiments showed that the short-snouted species prefers a sandy substrate, the long-snouted species prefers a stone substrate, and the species with intermediate snout size exhibits no substrate preference.
The experiments suggest that the diverse feeding apparatus in the genus Campylomormyrus may have evolved in adaptation to their microhabitats. I also performed diet assessments of sympatric Campylomormyrus species and a species of the sister genus (Gnathonemus petersii) with markedly different snout morphologies and EODs, using NGS-based DNA metabarcoding of their stomach contents. The diet of each species was documented, showing that aquatic insects such as dipterans, coleopterans, and trichopterans represent the major dietary component. The results also showed that all species are able to exploit diverse food niches in their habitats. However, comparison of diet overlap indices showed that the different snout morphologies and the associated divergence in EOD translate into different prey spectra. These results further support the idea that the EOD could be a 'magic trait' triggering both adaptation and reproductive isolation. Geometric morphometrics was also used to compare the phenotypic shape traits of F1 intragenus (Campylomormyrus) and intergenus (Campylomormyrus × Gnathonemus petersii) hybrids relative to their parents. The hybrids were well separated from the parental species on the basis of morphological traits; however, the hybrid phenotypes were closer to the short-snouted species. In addition, the likelihood that the short snout is expressed in the hybrids increases with the genetic distance between the parental species. The results confirmed that additive effects produce intermediate phenotypes in F1 hybrids. It therefore appears that, unlike physiological traits, morphological shape traits in hybrids are not expressed in a straightforward manner.
Ribosomes decode mRNA to synthesize proteins. Once considered static executing machines, ribosomes are now viewed as dynamic modulators of translation. Increasingly detailed analyses of structural ribosome heterogeneity have led to a paradigm shift toward ribosome specialization for selective translation. As sessile organisms, plants cannot escape harmful environments and have evolved strategies to withstand them. Plant cytosolic ribosomes are in some respects more diverse than those of metazoans. This diversity may contribute to plant stress acclimation. The goal of this thesis was to determine whether plants use ribosome heterogeneity to regulate protein synthesis through specialized translation. I focused on temperature acclimation, specifically on shifts to low temperatures. During cold acclimation, Arabidopsis ceases growth for seven days while establishing the responses required to resume growth. Earlier results indicate that ribosome biogenesis is essential for cold acclimation. REIL mutants (reil-dkos) lacking a 60S maturation factor do not acclimate successfully and do not resume growth. Using these genotypes, I ascribed cold-induced defects of ribosome biogenesis to the assembly of the polypeptide exit tunnel (PET) by performing spatial statistics of rProtein changes mapped onto the plant 80S structure. I discovered that growth cessation and PET remodeling also occur in barley, suggesting a general cold response in plants. Cold-triggered PET remodeling is consistent with the function of Rei-1, a REIL homolog in yeast, which performs PET quality control. Using seminal data on ribosome specialization, I show that yeast remodels the tRNA entry site of ribosomes upon a change of carbon source and demonstrate that spatially constrained remodeling of ribosomes in metazoans may modulate protein synthesis.
I argue that regional remodeling may be a form of ribosome specialization and show that heterogeneous cytosolic polysomes accumulate after cold acclimation, leading to shifts in translational output that differ between wild type and reil-dkos. I found that the heterogeneous complexes consist of newly synthesized and reused proteins. I propose that tailored ribosome complexes enable free 60S subunits to select specific 48S initiation complexes for translation. Through remodeling, cold-acclimated ribosomes synthesize a novel proteome consistent with known mechanisms of cold acclimation. The main hypothesis arising from my thesis is that heterogeneous/specialized ribosomes alter translation preferences, adjust the proteome, and thereby activate plant programs for successful cold acclimation.
The Media Privilege under Data Protection Law: Journalism and data protection are fundamentally at odds with each other. In view of current developments, the need for a functioning regulatory concept for both legal positions is probably more important than ever. This conceptual balance is provided by the journalistic exemption in data protection law. The thesis focuses on the scope of the exception. It also identifies existing coherence problems between European and national law.
Investment control has experienced a considerable increase in importance in M&A transactions, as it has been constantly adapted to real economic conditions and successively tightened. The result is a partially inconsistent review regime that contains value contradictions and creates legal uncertainties. In a comprehensive approach, this doctoral thesis analyzes the problems of the current investment control regime and sets out concrete reform proposals.
The role of biogenic carbonate producers in the evolution of the geometries of carbonate systems has been the subject of numerous research projects. Attempts to classify modern and ancient carbonate systems by their biotic components have led to the broad discrimination of biogenic carbonate producers into Photozoans, which are characterised by an affinity for warm tropical waters and high dependence on light penetration, and Heterozoans, which are generally associated with both cool-water environments and nutrient-rich settings with little to no light penetration. These broad categories of carbonate sediment producers have also been recognised to dominate in specific carbonate systems. Photozoans are commonly dominant in flat-topped platforms with steep margins, while Heterozoans generally dominate carbonate ramps. However, comparatively little is known about how these two main groups of carbonate producers interact in the same system and impact depositional geometries in response to changes in environmental conditions such as sea-level fluctuation, antecedent slope, and sediment transport processes. This thesis presents numerical models to investigate the evolution of Miocene carbonate systems in the Mediterranean from two shallow marine domains: 1) a Miocene flat-topped platform dominated by Photozoans, with a significant component of Heterozoans on the slope, and 2) a Heterozoan distally steepened ramp with a seagrass-influenced (Photozoan) inner ramp. The overarching aim of the three articles comprising this cumulative thesis is to provide a numerical study of the role of Photozoans and Heterozoans in the evolution of carbonate system geometries and of how these biotas respond to changes in environmental conditions. This aim was achieved using stratigraphic forward modelling, which provides an approach to quantitatively integrate multi-scale datasets to reconstruct sedimentary processes and products during the evolution of a sedimentary system.
In a Photozoan-dominated carbonate system, such as the Miocene Llucmajor platform in the Western Mediterranean, stratigraphic forward modelling, dovetailed with a robust set of sensitivity tests, reveals how the geometry of the carbonate system is determined by the complex interaction of Heterozoan and Photozoan biotas in response to variable conditions of sea-level fluctuation, substrate configuration, sediment transport processes, and the dominance of Photozoan over Heterozoan production. This study provides an enhanced understanding of the different carbonate systems that are possible under different ecological and hydrodynamic conditions. The research also gives insight into the roles of different biotic associations in the evolution of carbonate geometries through time and space. The results further show that the main driver of platform progradation in a Llucmajor-type system is the lowstand production of Heterozoan sediments, which form the necessary substratum for Photozoan production.
In Heterozoan systems, sediment production is mainly characterised by high-transport deposits that are prone to redistribution by waves and gravity, thereby precluding the development of steep margins. However, in the Menorca ramp, sediment trapping by seagrass led to the evolution of distal slope steepening. We investigated, through numerical modelling, how such a seagrass-influenced ramp responds to the frequency and amplitude of sea-level changes, to variable carbonate production between the euphotic and oligophotic zones, and to changes in the configuration of the paleoslope. The study reinforces some previous hypotheses and presents alternative scenarios to the established concepts of high-transport ramp evolution. The results of sensitivity experiments show that steep slopes are favoured in ramps that develop under high-frequency sea-level fluctuations with amplitudes between 20 m and 40 m. We also show that ramp profiles are significantly impacted by paleoslope inclination, such that an optimal antecedent slope of about 0.15 degrees is required for the Menorca distally steepened ramp to develop.
The third part presents an experimental case to argue for the existence of a Photozoan sediment threshold required for the development of steep margins in carbonate platforms. This was carried out by developing sensitivity tests on the forward models of the flat-topped (Llucmajor) platform and the distally steepened (Menorca) platform. The results show that models with Photozoan sediment proportion below a threshold of about 40% are incapable of forming steep slopes. The study also demonstrates that though it is possible to develop steep margins by seagrass sediment trapping, such slopes can only be stabilized by the appropriate sediment fabric and/or microbial binding. In the Photozoan-dominated system, the magnitude of slope steepness depends on the proportion of Photozoan sediments in the system. Therefore, this study presents a novel tool for characterizing carbonate systems based on their biogenic components.
Background: The role of fatty acid (FA) intake and metabolism in type 2 diabetes (T2D) incidence is controversial. Some FAs are not synthesised endogenously, and these circulating FAs therefore reflect dietary intake; examples are the trans fatty acids (TFAs), saturated odd-chain fatty acids (OCFAs), and linoleic acid, an n-6 polyunsaturated fatty acid (PUFA). It remains unclear whether TFA intake influences T2D risk and whether industrial TFAs (iTFAs) and ruminant TFAs (rTFAs) exert the same effect. Unlike even-chain saturated FAs, the OCFAs have been inversely associated with T2D risk, but this association is poorly understood. Furthermore, the associations of n-6 PUFA intake with T2D risk are still debated, while delta-5 desaturase (D5D), a key enzyme in the metabolism of PUFAs, has been consistently related to T2D risk. To better understand these relationships, the FA composition of circulating lipid fractions can be used as a biomarker of dietary intake and metabolism. The exploration of TFA subtypes in plasma phospholipids, and of OCFAs and n-6 PUFAs within a wide range of lipid classes, may give insights into the pathophysiology of T2D.
Aim: The main aim of this thesis was to analyse the associations of TFAs, OCFAs, and n-6 PUFAs with self-reported dietary intake and prospective T2D risk, using seven types of TFAs in plasma phospholipids and deep lipidomics profiling data from fifteen lipid classes.
Methods: A prospective case-cohort study was designed within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, including all participants who developed T2D (median follow-up 6.5 years) and a random subsample of the full cohort (subcohort: n=1248; T2D cases: n=820). The main analyses included two lipid profiles. The first was an assessment of seven TFAs in plasma phospholipids, with a modified method for the analysis of FAs with very low abundances. The second lipid profile was derived from a high-throughput lipid profiling technology, which identified 940 distinct molecular species and allowed quantification of OCFA and PUFA composition across 15 lipid classes. Delta-5 desaturase (D5D) activity was estimated as the 20:4/20:3 ratio. Using multivariable Cox regression models, we examined the associations of TFA subtypes with incident T2D and class-specific associations of OCFAs and n-6 PUFAs with T2D risk.
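The D5D estimate described above is a simple product-to-precursor ratio, computed within each lipid class. A minimal sketch of that calculation (function name and concentration values are illustrative, not taken from the study):

```python
# Illustrative sketch: estimating delta-5 desaturase (D5D) activity as the
# product-to-precursor fatty acid ratio 20:4(n-6) / 20:3(n-6).
# The function name and example values are hypothetical.

def estimate_d5d(fa_20_4: float, fa_20_3: float) -> float:
    """Return the 20:4/20:3 ratio used as a proxy for D5D activity."""
    if fa_20_3 <= 0:
        raise ValueError("precursor concentration must be positive")
    return fa_20_4 / fa_20_3

# Example: concentrations (arbitrary units) within one lipid class,
# e.g. phosphatidylcholines (PC)
pc = {"20:3": 2.0, "20:4": 18.0}
print(estimate_d5d(pc["20:4"], pc["20:3"]))  # → 9.0
```

Because the ratio is formed separately per lipid class, the estimated activity can differ between, say, PC and CE even in the same sample.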
Results: 16:1n-7t, 18:1n-7t, and c9t11-CLA were positively correlated with the intake of fat-rich dairy foods. iTFA 18:1 isomers were positively correlated with margarine. After adjustment for confounders and other TFAs, higher plasma phospholipid concentrations of two rTFAs were associated with a lower incidence of T2D: 18:1n-7t and t10c12-CLA. In contrast, the rTFA c9t11-CLA was associated with a higher incidence of T2D. rTFA 16:1n-7t and iTFAs (18:1n-6t, 18:1n-9t, 18:2n-6,9t) were not statistically significantly associated with T2D risk.
We observed heterogeneous integration of OCFAs into different lipid classes, and the contribution of 15:0 versus 17:0 to the total OCFA abundance differed across lipid classes. Consumption of fat-rich dairy and fiber-rich foods was positively, and red meat consumption inversely, correlated with OCFA abundance in plasma phospholipid classes. In women only, higher abundances of 15:0 in phosphatidylcholines (PC) and diacylglycerols (DG), and of 17:0 in PC, lysophosphatidylcholines (LPC), and cholesterol esters (CE), were inversely associated with T2D risk. In both men and women, a higher abundance of 15:0 in monoacylglycerols (MG) was also inversely associated with T2D. Conversely, a higher 15:0 concentration in LPC and triacylglycerols (TG) was associated with higher T2D risk in men. Women with a higher concentration of 17:0 as free fatty acids (FFA) also had a higher T2D incidence.
The integration of n-6 PUFAs into lipid classes was also heterogeneous. 18:2 was highly abundant in phospholipids (particularly PC), CE, and TG; 20:3 represented a small fraction of the FAs in most lipid classes; and 20:4 accounted for a large proportion of circulating phosphatidylinositols (PI) and phosphatidylethanolamines (PE). Higher concentrations of 18:2 were inversely associated with T2D risk, especially within DG, TG, and LPC. However, 18:2 as part of MG was positively associated with T2D risk. Higher concentrations of 20:3 in phospholipids (PC, PE, PI), FFA, CE, and MG were linked to higher T2D incidence. 20:4 was unrelated to risk in most lipid classes, except for positive associations observed for 20:4 enriched in FFA and PE. The estimated D5D activities in PC, PE, PI, LPC, and CE were inversely associated with T2D, and the variance in estimated D5D activity explained by genomic variation in the FADS locus was substantial only in those lipid classes.
Conclusion: The conformation of TFAs is essential to their relationship with diabetes risk, as indicated by plasma rTFA subtype concentrations showing opposite directions of association with diabetes risk. Plasma OCFA concentration is linked to T2D risk in a lipid class- and sex-specific manner. Plasma n-6 PUFA concentrations are associated differently with T2D incidence depending on the specific FA and the lipid class. Overall, these results highlight the complexity of circulating FAs and their heterogeneous associations with T2D risk depending on the specific FA structure, lipid class, and sex. My results extend the evidence on the relationship between diet, lipid metabolism, and subsequent T2D risk. In addition, my work generated several potential new biomarkers of dietary intake and prospective T2D risk.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, which powers an active galactic nucleus (AGN) jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters in which cooling is balanced by Alfvénic wave heating. To verify the modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent, magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background produce observational signatures that depend on the bubble content. Recent observations provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing these observations with our simulations, we confirm the validity of those modeling assumptions and, as such, the important finding of low-(momentum) density jets.
In addition, the velocity and magnetic field structure of the intracluster medium has profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link, and thereby constrain, their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling, magnetized cluster. While self-regulation is attained independently of the accretion model, jet density, and feedback efficiency, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
Mitochondria and plastids are organelles of endosymbiotic origin. During evolution, many genes were lost from the organellar genomes and became integrated into the nuclear genome, in a process known as intracellular/endosymbiotic gene transfer (IGT/EGT). IGT has been reproduced experimentally in Nicotiana tabacum at a gene transfer rate (GTR) of 1 event in 5 million cells, but, despite its centrality to eukaryotic evolution, no genetic factors are known to influence the frequency of IGT in higher eukaryotes. The focus of this work was to determine the role of different double-strand break repair (DSBR) pathways in the integration step of organellar DNA into the nuclear genome during IGT. Here, a CRISPR/Cas9 mutagenesis strategy was implemented in N. tabacum with the aim of generating mutants in nuclear genes without expected visible phenotypes. This strategy led to the generation of a collection of independent mutants in the LIG4 gene (necessary for non-homologous end joining, NHEJ) and the POLQ gene (necessary for microhomology-mediated end joining, MMEJ). Targeting of other DSBR genes (KU70, KU80, RPA1C) generated mutants with unexpectedly strong developmental phenotypes. These factors have telomeric roles, hinting at a possible relationship between telomere length and the severity of developmental disruption upon loss of telomere structure in plants. The mutants were made in a genetic background carrying a plastid-encoded IGT reporter that confers kanamycin resistance upon transfer to the nucleus. Through large-scale independent experiments, increased IGT from the chloroplast to the nucleus was observed in lig4 mutants, as well as in lines encoding a POLQ gene with a defective polymerase domain (polqΔPol).
This shows that NHEJ and MMEJ have a double-sided relationship with IGT: while transferred genes may integrate using either pathway, the presence of both pathways suppresses IGT in wild-type somatic cells, thus demonstrating for the first time the extent to which nuclear genes control IGT frequency in plants. The increases in IGT frequency in the mutants are likely mediated by an increased availability of double-strand breaks for integration. Additionally, kinetic analysis reveals that gene transfer (GT) events accumulate linearly as a function of time spent under antibiotic selection, demonstrating that, contrary to what was previously thought, there is no single GTR in somatic IGT experiments. Furthermore, IGT in tissue culture experiments appears to be the result of a "race against the clock" for integration into the nuclear genome, which starts when the organellar DNA arrives in the nucleus and grants transient antibiotic resistance. GT events and escapes from kanamycin selection may be two possible outcomes of this race: instances where the organellar DNA integrates are recovered as GT events, whereas cases in which timely integration fails cannot sustain antibiotic resistance and end up classified as escapes. In the mutants, increased opportunities for integration into the nuclear genome change the overall ratio between IGT and escape events. The resources generated here are promising starting points for future research: (1) the mutant collection, for the further study of processes that depend on DNA repair in plants; (2) the collection of GT lines obtained from these experiments, for the study of the effect of DSBR pathways on the integration patterns and stability of transferred genes; and (3) the developed CRISPR/Cas9 workflow for mutant generation, which can help N. tabacum meet its potential as an attractive model for answering complex biological questions.
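The kinetic claim above, that GT events accumulate linearly with time under selection, amounts to fitting a constant per-day transfer rate to cumulative counts. A minimal sketch of such a fit, using entirely hypothetical counts rather than the study's data:

```python
# Sketch: testing linear accumulation of gene transfer (GT) events over
# time under selection via an ordinary least-squares fit.
# The day/count values below are hypothetical, for illustration only.

def ols_slope_intercept(xs, ys):
    """Least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

days = [7, 14, 21, 28, 35]        # time under kanamycin selection (days)
gt_events = [3, 7, 10, 13, 18]    # cumulative GT events (hypothetical)
rate, offset = ols_slope_intercept(days, gt_events)
print(f"estimated GT rate: {rate:.2f} events/day")  # prints "estimated GT rate: 0.51 events/day"
```

A near-constant slope across experiments of different durations is what distinguishes a per-unit-time rate from a single fixed GTR.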
The G protein-coupled estrogen receptor (GPER1) is acknowledged as an important mediator of estrogen signaling. Given the ubiquitous expression of GPER1, it is likely that the receptor plays a role in a variety of malignancies, not only in the classic hormonally regulated tissues (e.g., breast, ovary, and prostate), but also in the colon. As colorectal cancer (CRC) is the third most common cancer in both men and women worldwide, and environmental factors and dietary habits are important risk factors, it is increasingly recognized that natural and synthetic hormones and their associated receptors might play a role in CRC. Through oral consumption, environmental contaminants with endocrine activity come into contact with the gastrointestinal mucosa, where they might exert their toxic effects. Although GPER1 has been shown to be engaged in physiological and pathophysiological processes, its role in CRC remains poorly understood; both pro- and anti-tumorigenic effects are described in the literature. This thesis has uncovered novel roles of GPER1 in mediating major CRC-associated phenotypes in transformed and non-transformed colon cell lines. Exposure to the estrogens 17β-estradiol (E2), bisphenol A (BPA), and diethylstilbestrol (DES), but also the androgen dihydrotestosterone (DHT), resulted in GPER1-dependent induction of supernumerary centrosomes, whole-chromosomal instability (w-CIN), and aneuploidy. Indeed, both knockdown and inhibition of GPER1 attenuated the generation of (xeno)hormone-driven supernumerary centrosomes and karyotype instability. Mechanistically, (xeno)hormone-induced centrosome amplification was associated with transient multipolar mitosis and the generation of so-called anaphase "lagging" chromosomes. The results of this thesis propose a GPER1/PKA/AKAP9 pathway in the regulation of centrosome numbers in colorectal cancer cells, with involvement of the centriolar protein centrin.
Remarkably, exposure to (xeno)hormones resulted in atypical enlargement and unexpected phosphorylation of the centriole marker centrin in interphase. These findings reveal a novel role for GPER1 in key CRC-prone lesions and shed light on the underlying mechanisms involving GPER1 function in the colon. Elucidating to what extent centrosomal proteins are involved in the GPER1-mediated aneugenic effect will be an important task for future studies. The present study was intended to lay a first foundation for understanding the molecular basis and potential risk factors of CRC, which might help to reduce the use of laboratory animals. Since numerous animal experiments are conducted in biomedical research, the development of alternative methods is indispensable. The Federal Institute for Risk Assessment (BfR), as the German Center for the Protection of Laboratory Animals (Bf3R), addresses this issue by uncovering the mechanisms leading to colorectal cancer as a necessary prerequisite for developing alternative methods.
For nearly 30 years, the "friendship of peoples" between Cuba and the GDR shone in the SED's public discourse as a model example of its proletarian internationalism. But the rhetoric is deceptive: especially in the early years of the bilateral relations, the German cadres perceived their Cuban "comrades" in the Caribbean as notorious mavericks whose revolt against the Kremlin's claim to ideological supremacy endangered the stability of the Eastern Bloc.
Drawing on previously unpublished source material from German and Cuban archives, Antonia Bihlmayer illustrates the efforts of Walter Ulbricht's government to align the recalcitrant socialists in the Caribbean with Moscow. On the political-ideological, economic, and cultural-political levels, her study analyzes, on the one hand, the characteristics of this socialist civilizing mission. On the other hand, it examines the factors that ultimately led these two socialist enclaves to develop into equal "junior partners" of the Soviet Union from the mid-1970s onward.
Photosynthesis converts light into metabolic energy, which fuels plant growth. In nature, many factors influence light availability for photosynthesis on different time scales, from shading by leaves within seconds up to seasonal changes over months. Variability of the light energy supply for photosynthesis can limit a plant's biomass accumulation. Plants have evolved multiple strategies to cope with strongly fluctuating light (FL). These range from long-term optimization of leaf morphology, physiology, and the levels of pigments and proteins, in a process called light acclimation, to rapid changes in protein activity within seconds. Therefore, uncovering how plants deal with FL on different time scales may provide key ideas for improving crop yield. Photosynthesis is not an isolated process but is tightly integrated with metabolism through mutual regulatory interactions. We thus require a mechanistic understanding of how long-term light acclimation shapes both dynamic photosynthesis and its interactions with downstream metabolism. To approach this, we analyzed the influence of growth light on i) the function of the known rapid photosynthesis regulators KEA3 and VCCN1 in dynamic photosynthesis (Chapters 2-3) and ii) the interconnection of photosynthesis with photorespiration (PR; Chapter 4).
We approached topic (i) by quantifying the effect of different growth light regimes on photosynthesis and photoprotection by using kea3 and vccn1 mutants. Firstly, we found that, besides photosynthetic capacity, the activities of VCCN1 and KEA3 during a sudden high light phase also correlated with growth light intensity. This finding suggests regulation of both proteins by the capacity of downstream metabolism. Secondly, we showed that KEA3 accelerated photoprotective non-photochemical quenching (NPQ) kinetics in two ways: Directly via downregulating the lumen proton concentration and thereby de-activating pH-dependent NPQ, and indirectly via suppressing accumulation of the photoprotective pigment zeaxanthin.
For topic (ii), we analyzed the role of PR, a process which recycles the toxic byproduct 2-phosphoglycolate (2PG) of the carbon fixation reactions, in metabolic flexibility in a dynamically changing light environment. For this we employed the mutants hpr1 and ggt1, which carry a partial block in PR. We characterized the function of PR during light acclimation by tracking molecular and physiological changes in the two mutants. Our data, in contrast to previous reports, disprove a generally stronger physiological relevance of PR under dynamic light conditions. Additionally, the two mutants showed pronounced and distinct metabolic changes during acclimation to a condition inducing higher photosynthetic activity. This underlines that PR cannot be regarded purely as a cyclic detoxification pathway for 2PG. Instead, PR is highly interconnected with plant metabolism, with GGT1 and HPR1 representing distinct metabolic modulators.
In summary, the presented work provides further insight into how energetic and metabolic flexibility is ensured by short-term regulators and PR during long-term light acclimation.
The subject of this thesis is the practice of the European Citizens' Initiative (ECI) under Art. 11(4) TEU, the world's first and only instrument of transnational, participatory, and digital democracy. The study centers on the question of what contribution the ECI can make to the further democratization of the EU and how further improvements can be achieved in this respect. After ten years of practical application from 2012 to 2022, sufficient empirical data are now available to examine the subject comprehensively and to evaluate the instrument with regard to the contribution to legitimacy and democratization promised by the EU institutions. In particular, this thesis examines the ECI procedure with respect to its empirically verifiable use, its procedural user-friendliness, and its political and legal effectiveness. For the purposes of correct categorization and evaluation, as well as of a user-friendly design of the ECI procedure, comparisons are drawn with citizens' and popular initiative procedures in the EU member states and with citizen participation procedures at the EU level. The empirical and comparative analyses are preceded by a historical analysis of the genesis of the ECI since the EU Constitutional Convention, as well as by theoretical-normative considerations and practical examinations of different participation-centered models of democracy, in order to situate the ECI and to identify ways of increasing its contribution to democratization. The latter ultimately addresses the question of the procedural combination and compatibility of the ECI with democratic innovations from the field of deliberative and direct democracy. The thesis concludes with an outlook and presents comprehensive options for ECI reform at the level of primary and secondary law as well as at the informal level.
Funken
(2023)
Children develop important competencies for later educational attainment as early as preschool, but above all in primary school. Yet the differences in competence between pupils are already considerable at the start of primary school. Teachers therefore face the crucial task of enabling every child to follow the educational path best suited to them. To meet this challenge, diagnostics and support must go hand in hand in the classroom. The diagnostic competence of teachers is therefore regarded as an important prerequisite for successful teaching. This dissertation is devoted to precisely this competence. It is treated as a multidimensional construct that comprises, in addition to the assessment of subject-specific competencies, the appraisal of pupils' performance levels and the drawing of conclusions about the support needed in class. On the basis of three articles and supplementary theoretical considerations, diagnostic competence was examined with regard to possible influencing factors, its significance for teaching, and its implications for teacher education.
De/lirios
(2023)
Based on the concept of "de/lirio", which links disturbances in first-person literary enunciation, on the one hand, with the psychopathological characterisation of this enunciating self, on the other, the study explores the "deviant lyrics" of Mario Levrero and Alberto Laiseca and shows how they respond productively to aesthetic, ethical and ontological problems characteristic of the turn of the millennium, in the Río de la Plata region and beyond.
Background: The characteristics of osteoporosis are decreased bone mass and deterioration of the microarchitecture of bone tissue, which raises the risk of fracture. Psychosocial stress and osteoporosis are linked by the sympathetic nervous system, the hypothalamic-pituitary-adrenal axis, and other endocrine factors. Psychosocial stress causes a series of effects on the organism, and this long-term depletion at the cellular level is considered mitochondrial allostatic load, which includes mitochondrial dysfunction and oxidative stress. Extracellular vesicles (EVs) are involved in the process of mitochondrial allostatic load and may serve as biomarkers in this setting. As critical participants in cell-to-cell communication, EVs serve as transport vehicles for nucleic acids and proteins, alter the phenotypic and functional characteristics of their target cells, and promote cell-to-cell contact. Hence, they play a significant role in the diagnosis and therapy of many diseases, such as osteoporosis.
Aim: This narrative review attempts to outline the features of EVs, investigate their involvement in both psychosocial stress and osteoporosis, and analyze if EVs can be potential mediators between both.
Methods: The online databases PubMed, Google Scholar, and Science Direct were searched for keywords related to the main topic of this study, and the availability of all selected studies was verified. Afterward, the findings from the articles were summarized and synthesized.
Results: Psychosocial stress affects bone remodeling through increased levels of mediators such as glucocorticoids and catecholamines, as well as increased glucose metabolism. Furthermore, psychosocial stress leads to mitochondrial allostatic load, including oxidative stress, which may affect bone remodeling. In vitro and in vivo data suggest that EVs might be involved in the link between psychosocial stress and bone remodeling through the transfer of bioactive substances and could thus be a potential mediator of psychosocial stress leading to osteoporosis.
Conclusions: According to the included studies, psychosocial stress affects bone remodeling, leading to osteoporosis. By summarizing the specific properties of EVs and their function in psychosocial stress and osteoporosis, respectively, it has been shown that EVs are possible mediators of both and hold promise for innovative research areas.
The light reactions of photosynthesis are carried out by a series of multiprotein complexes embedded in thylakoid membranes. Among them, photosystem I (PSI), acting as a plastocyanin-ferredoxin oxidoreductase, catalyzes the final reaction. Together with light-harvesting antenna I, PSI forms a high-molecular-weight supercomplex of ~600 kDa, consisting of eighteen subunits and nearly two hundred co-factors. Assembly of the various components into a functional thylakoid membrane complex requires precise coordination, which is provided by the assembly machinery. Although a small number of proteins (PSI assembly factors) have been shown to play a role in the formation of PSI, the process as a whole, as well as the interplay of its members, remains largely unexplored.
In the present work, two approaches were used to find candidate PSI assembly factors. First, EnsembleNet was used to select proteins thought to be functionally related to known PSI assembly factors in Arabidopsis thaliana (approach I), and second, co-immunoprecipitation (Co-IP) of tagged PSI assembly factors in Nicotiana tabacum was performed (approach II).
Here, the novel PSI assembly factors designated CO-EXPRESSED WITH PSI ASSEMBLY 1 (CEPA1) and Ycf4-INTERACTING PROTEIN 1 (Y4IP1) were identified. A. thaliana null mutants for CEPA1 and Y4IP1 showed a growth phenotype and pale leaves compared with the wild type. Biophysical experiments using pulse amplitude modulation (PAM) revealed insufficient electron transport on the PSII acceptor side. Biochemical analyses revealed that both CEPA1 and Y4IP1 are specifically involved in PSI accumulation in A. thaliana at the post-translational level but are not essential. Consistent with their roles as factors in the assembly of a thylakoid membrane protein complex, the two proteins localize to thylakoid membranes. Remarkably, cepa1 y4ip1 double mutants exhibited lethal phenotypes in early developmental stages under photoautotrophic growth. Finally, co-IP and native gel experiments supported a possible role for CEPA1 and Y4IP1 in mediating PSI assembly in conjunction with other PSI assembly factors (e.g., PPD1- and PSA3-CEPA1 and Ycf4-Y4IP1). The fact that CEPA1 and Y4IP1 are found exclusively in green algae and higher plants suggests eukaryote-specific functions. Although the specific mechanisms need further investigation, CEPA1 and Y4IP1 are two novel assembly factors that contribute to PSI formation.
Biogeochemical analyses of lacustrine environments are well-established methods that allow exploring and understanding complex systems in the lake ecosystem. However, most such studies were conducted in temperate lakes, which are controlled by entirely different physical conditions than lakes in tropical climates. The most important difference between temperate and tropical lakes is the lack of seasonal temperature fluctuations in the latter, which leads to a stable temperature gradient in the water column. Thus, the water column in tropical latitudes is generally free of the perturbations seen in its temperate counterparts. Permanent stratification in the water column provides optimal conditions for intact sedimentation. Geochemical processes in the water column and weathering of the distinct lithologies in the catchment lead to different biogeochemical characteristics in the sediment. A biogeochemical study of this lake sediment, especially at the sediment-water interface (SWI), helps reveal the records of sedimentation and diagenetic processes influenced by internal or external loading. Lake Sentani, the study area, is one of the thousands of lakes in Indonesia and is located in Papua province. This tropical lake has a unique feature, as it consists of four interconnected sub-basins with different water depths. More importantly, its catchment comprises various lithologies. Hence, its lithological characteristics are highly diverse, ranging from mafic and ultramafic rocks to clastic sediment and carbonates. Each sub-basin receives a distinct sediment input. Equally important, besides the natural loading, Lake Sentani is also influenced by anthropogenic input. Previous studies have shown an increase in population growth around the lake, which has direct consequences for eutrophication.
Considering these factors, the government of the Republic of Indonesia put Lake Sentani on the list of national priority lakes for restoration. This thesis aims to develop a fundamental understanding of Lake Sentani's sedimentary geochemistry and geomicrobiology, with a special focus on the effects of the different lithologies and anthropogenic pressures in the catchment area. To meet this objective, we conducted geochemical and geomicrobiological research on Lake Sentani, investigating the geochemical characteristics of the water column, porewater, and sediment cores of the four sub-basins. In addition to direct investigations of the lake itself, we also studied the sediments of the tributary rivers, some of which are ephemeral, as well as the river mouths, as connections between the riverine and the lacustrine habitat. The thesis is composed of three main publications about Lake Sentani and supported by several publications that focus on other tropical lakes in Indonesia. The first main publication investigates the geochemical characterization of the water column, porewater, and surface sediment (upper 40-50 cm) from the center of the four sub-basins. It reveals that, besides catchment lithology, the water column heavily influences the geochemical characteristics of the lake sediments and their porewater. The findings indicate that water column stratification has a strong influence on overall chemistry. The four sub-basins are very different with regard to their water column chemistry. Based on the physicochemical profiles, especially dissolved oxygen, one sub-basin is oxygenated, one is intermediate, i.e. oxygen depletion is reached only at the sediment-water interface, and two sub-basins are fully meromictic. However, all four sub-basins share the same surface water chemistry. The structure of the water column creates differences in the patterns of anions and cations in the porewater.
Likewise, the distinct differences in geochemical composition between the sub-basins show that the lithology in the catchment affects the geochemical characteristics of the sediment. Overall, water column stratification, and particularly bottom water oxygenation, strongly influences the elemental composition of the sediment and porewater. The second publication reveals differences in surface sediment composition between habitats, influenced by lithological variations in the catchment area. The macro-element distribution shows that the geochemical characteristics differ between habitats. Furthermore, the geochemical composition also indicates a distinct distribution between the sub-basins. The geochemical composition of the eastern sub-basin suggests that lithogenic elements are more dominant than authigenic elements. This is also supported by sulfide speciation, particle distribution, and smear slide data. The third publication is a geomicrobiological study of the surface sediment, in which we compare the geochemical composition of the surface sediment with its microbial community composition and contrast the different signals. Next Generation Sequencing (NGS) of the 16S rRNA gene was applied to determine the microbial community composition of the surface sediment from a large number of locations. We used a large number of sampling sites in all four sub-basins as well as in the rivers and river mouths to illustrate the links between the river, the river mouth, and the lake. Rigorous assessment of microbial communities across the diverse Lake Sentani habitats allowed us to study some of these links and report novel findings on microbial patterns in such ecosystems. The Principal Coordinates Analysis (PCoA) based on microbial community composition highlighted some commonalities, but also differences between the microbial community analysis and the geochemical data.
The microbial community in rivers, river mouths and sub-basins is strongly influenced by anthropogenic input from the catchment area. Generally, Bacteroidetes and Firmicutes could serve as indicators of river sediments. The microbial community in the rivers is directly influenced by anthropogenic pressure and is markedly different from that of the lake sediment. Meanwhile, the microbial community in the lake sediment reflects the anoxic environment, which is prevalent across the lake in all sediments below a few mm burial depth. The lake sediments harbour abundant sulfate reducers and methanogens. The microbial communities in sediments from river mouths are influenced by both river and lake ecosystems. This study provides valuable information for understanding the basic processes that control biogeochemical cycling in Lake Sentani. Our findings are important for lake managers to accurately assess the uncertainties of the changing environmental conditions related to the anthropogenic pressure in the catchment area. Lake Sentani is a unique study site directly influenced by the different geology across the watershed and the morphometry of the four studied basins. As a result of these factors, there are distinct geochemical differences between the habitats (river, river mouth, lake) and the four sub-basins. In addition to geochemistry, microbial community composition also shows differences between habitats, although there are no obvious differences between the four sub-basins. However, unlike sediment geochemistry, microbial community composition is impacted by human activities. This thesis thus provides crucial baseline data for future lake management.
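The ordination step used to compare habitats above can be sketched with a minimal Principal Coordinates Analysis (classical metric MDS). Everything below is an illustrative assumption, not the thesis's data or software: a pure-NumPy implementation and an invented toy dissimilarity matrix with three "river" and three "lake" samples.

```python
import numpy as np

def pcoa(d, k=2):
    """Principal Coordinates Analysis on a symmetric dissimilarity
    matrix d; returns sample coordinates on the first k axes."""
    n = d.shape[0]
    # Double-centre the squared dissimilarities (Gower centring)
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    # Eigendecomposition; keep the k largest non-negative eigenvalues
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]
    vals, vecs = np.clip(vals[order], 0, None), vecs[:, order]
    return vecs * np.sqrt(vals)

# Invented toy example: samples 0-2 ("river") are mutually similar,
# samples 3-5 ("lake") are similar to each other but far from the rivers
d = np.array([
    [0.0, 0.1, 0.2, 0.8, 0.9, 0.8],
    [0.1, 0.0, 0.1, 0.9, 0.8, 0.9],
    [0.2, 0.1, 0.0, 0.8, 0.9, 0.8],
    [0.8, 0.9, 0.8, 0.0, 0.1, 0.2],
    [0.9, 0.8, 0.9, 0.1, 0.0, 0.1],
    [0.8, 0.9, 0.8, 0.2, 0.1, 0.0],
])
coords = pcoa(d)
# The first axis separates the two habitat groups
print(coords[:, 0])
```

With such a block-structured matrix, the first coordinate axis captures the river/lake contrast, which is the kind of habitat separation the PCoA in the study is reported to reveal.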
This dissertation presented the first total syntheses of the arylnaphthalene lignans alashinol D, vitexdoin C, vitrofolal E, noralashinol C1 and ternifoliuslignan E. The key step of the developed method is a regioselective intramolecular photo-dehydro-Diels-Alder (PDDA) reaction, carried out under UV irradiation in a flow reactor. For the synthesis of the PDDA precursors (diaryl suberates), a modular building-block strategy was pursued. It enables the preparation of asymmetric, complex systems from only a few basic building blocks and thus the total synthesis of a wide range of lignans. Systematic preliminary studies also demonstrated the clear superiority of the intramolecular over the intermolecular PDDA reaction; linking the two aryl propiolates via a suberic acid tether in the para position proved particularly efficient. When asymmetrically substituted diaryl suberates are used in which one of the terminal ester substituents is replaced by a trimethylsilyl group or a hydrogen atom, these systems undergo a regioselective cyclisation, and naphthalenophanes bearing a methyl ester in the 3-position are obtained as the main product. Extensive experiments on the functionalisation of the 4-position further showed that substitution of the nucleophilic cycloallene intermediates during the PDDA reaction is generally possible through the addition of N-halosuccinimides. In view of the low yields, however, these intermolecular trapping reactions are of no preparative use for the total synthesis of lignans. With the aim of optimising the general photochemical reaction conditions, the triplet-sensitised PDDA reaction was presented for the first time.
The use of xanthone as a sensitiser made it possible to employ more efficient UVA light sources, minimising the risk of photodecomposition through over-irradiation. Compared with direct excitation by UVB radiation, the yields could be increased significantly by indirect excitation via a photocatalyst. The fundamental insights and the synthesis strategies developed in this work may help to drive the discovery of new pharmacologically interesting lignans in the future.
1 To date, only the semisynthetic preparation of noralashinol C starting from hydroxymatairesinol has been described in the literature.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy where we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, analogously to fossil records, can unveil the past history of the genesis of the Galaxy. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has obtained near-infrared spectra that peer into the dusty regions of the Galaxy, allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy based on stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE and photometric surveys, and using StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, with the content of 𝛼-process elements and metals increasing towards the centre. Focusing on a sample in the inner MW, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits to show both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, up to three times solar metallicity, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age as a reflection of the different enrichment timescales of these elements. For a further study of the disks in the solar neighbourhood, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
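The unsupervised separation of chemical stellar groups described above can be illustrated with a toy sketch. The two synthetic clumps in the [Fe/H]-[𝛼/Fe] plane and the choice of a Gaussian mixture are assumptions for illustration only; they are not the thesis's actual data, dimensions, or algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical abundance clumps: a metal-poor, alpha-enhanced clump
# (playing the role of the chemical thick disk) and a more metal-rich,
# low-alpha clump (thin disk). Columns: [Fe/H], [alpha/Fe].
thick = rng.normal([-0.5, 0.30], [0.15, 0.04], size=(300, 2))
thin = rng.normal([0.0, 0.05], [0.20, 0.04], size=(700, 2))
X = np.vstack([thick, thin])

# Fit a two-component Gaussian mixture without using the true labels
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Identify which component is the alpha-enhanced ("thick disk") group
thick_label = int(np.argmax(gmm.means_[:, 1]))
frac = np.mean(labels[:300] == thick_label)
print(f"fraction of synthetic thick-disk stars recovered: {frac:.2f}")
```

Because the two clumps are well separated in [𝛼/Fe], the mixture model recovers the chemically defined groups almost perfectly; real survey data are noisier and higher-dimensional, which is why the thesis works in a multidimensional chrono-chemical space.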
With groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Following the extinction of dinosaurs, the great adaptive radiation of mammals occurred, giving rise to an astonishing ecological and phenotypic diversity of mammalian species. Even closely related species often inhabit vastly different habitats, where they encounter diverse environmental challenges and are exposed to different evolutionary pressures. As a response, mammals evolved various adaptive phenotypes over time, such as morphological, physiological and behavioural ones. Mammalian genomes vary in their content and structure and this variation represents the molecular mechanism for the long-term evolution of phenotypic variation. However, understanding this molecular basis of adaptive phenotypic variation is usually not straightforward.
The recent development of sequencing technologies and bioinformatics tools has enabled better insight into mammalian genomes. Through these advances, it was recognised that mammalian genomes differ more, both within and between species, as a consequence of structural variation than of single-nucleotide differences. The structural variant types investigated in this thesis (deletion, duplication, inversion and insertion) represent changes in the structure of the genome, impacting the size, copy number, orientation and content of DNA sequences. Unlike short variants, structural variants can span multiple genes. They can alter gene dosage and cause notable differences in gene expression and, subsequently, in phenotype. Thus, they can have a more dramatic effect on the fitness (reproductive success) of individuals, the local adaptation of populations, and speciation.
In this thesis, I investigated and evaluated the potential functional effect of structural variation on the genomes of mustelid species. To detect the genomic regions associated with phenotypic variation, I assembled the first reference genome of the tayra (Eira barbara), relying on linked-read sequencing technology to achieve a high level of genome completeness, which is important for reliable structural variant discovery. I then set up a bioinformatics pipeline to conduct a comparative genomic analysis and explore variation between mustelid species living in different environments. I found numerous genes associated with species-specific phenotypes related to diet, body condition and reproduction, among others, to be impacted by structural variants.
Furthermore, I investigated the effects of artificial selection on structural variants in mice selected for high fertility, increased body mass and high endurance. Through selective breeding of each mouse line, the desired phenotypes have spread within these populations, while maintaining structural variants specific to each line. In comparison to the control line, the litter size has doubled in the fertility lines, individuals in the high body mass lines have become considerably larger, and mice selected for treadmill performance covered substantially more distance. Structural variants were found in higher numbers in these trait-selected lines than in the control line when compared to the mouse reference genome. Moreover, we have found twice as many structural variants spanning protein-coding genes (specific to each line) in trait-selected lines. Several of these variants affect genes associated with selected phenotypic traits. These results imply that structural variation does indeed contribute to the evolution of the selected phenotypes and is heritable.
Finally, I suggest a set of critical metrics of genomic data that should be considered for a stringent structural variation analysis as comparative genomic studies strongly rely on the contiguity and completeness of genome assemblies. Because most of the available data used to represent reference genomes of mammalian species is generated using short-read sequencing technologies, we may have incomplete knowledge of genomic features. Therefore, a cautious structural variation analysis is required to minimize the effect of technical constraints.
The impact of structural variants on the adaptive evolution of mammalian genomes is slowly gaining more focus but it is still incorporated in only a small number of population studies. In my thesis, I advocate the inclusion of structural variants in studies of genomic diversity for a more comprehensive insight into genomic variation within and between species, and its effect on adaptive evolution.
The Human Rights Principle of Non-Refoulement before the Treaty Bodies of the United Nations
(2023)
By coordinating their practice, the treaty bodies of the United Nations can create legal certainty, both for affected individuals and for the states parties. Through the continuous dialogue with the states parties, the mutual influence of the treaty bodies on one another, and the adoption of their practice by other international actors, customary law can be identified. -- The human rights principle of non-refoulement is particularly well suited to illustrating this potential of the treaty bodies. It is a legal principle that, while generally recognised in substance, remains continually disputed in the details of its scope. For the first time, a comprehensive overview of the entire practice of the treaty bodies on the principle of non-refoulement is provided. It is shown how the treaty bodies converge both at the procedural level and in determining the substantive scope of prohibitions of refoulement.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. However, apart from our local environment within the Milky Way, we do not have the ability to measure CRs directly in other galaxies. Instead, there are many ways to observe CRs indirectly via the radiation they emit through their interaction with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
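The steady-state assumption can be sketched in one dimension: for power-law injection balanced against radiative cooling, the equilibrium spectrum is the cooling-weighted tail integral of the injection. The spectral index and the quadratic loss rate below are generic textbook values for illustration, not the ones used in the thesis.

```python
import numpy as np

# Minimal steady-state sketch: injection Q(E) ~ E^-gamma balanced against
# radiative losses b(E) = |dE/dt| ~ E^2 (synchrotron / inverse Compton),
# giving N(E) = (1/b(E)) * integral_E^inf Q(E') dE' ~ E^-(gamma+1).
gamma = 2.2
E = np.logspace(0, 4, 500)      # energy grid, arbitrary units
Q = E ** -gamma                 # injection spectrum
b = E ** 2                      # cooling rate, arbitrary normalisation

# Trapezoidal tail integral of Q from each grid point to the upper end
seg = 0.5 * (Q[1:] + Q[:-1]) * np.diff(E)
tail = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
N = tail / b

# Logarithmic slope measured well below the upper cutoff, where the
# finite integration range is negligible
i, j = 50, 250
slope = np.log(N[j] / N[i]) / np.log(E[j] / E[i])
print(f"steady-state slope: {slope:.2f}")  # analytic expectation: -(gamma + 1)
```

The cooling-dominated regime thus steepens the injected power law by one, which is the kind of spectral feature a steady-state model can compare against the measured CR electron spectrum.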
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies, where CRs are included. This allows me to study their CR spectra and compare them to observations of the CR proton and electron spectra by the Voyager-1 satellite and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations of both the radio and the gamma-ray luminosity with the far-infrared (FIR) emission of star-forming (SF) galaxies, the latter being a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy through the emission of radiation, whereas CRs tend to escape galaxies with low star formation more quickly. On top of that, I investigate the properties of CR transport that are needed in order to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies, and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectrally and spatially.
New Paths into the Teaching Profession
(2023)
According to the latest projections by Klemm (2022), Germany will be short of roughly 127,000 teachers by 2035. This large gap can no longer be covered solely by teachers who have completed a traditional teacher-education degree. In response to the teacher shortage, schools in Germany are therefore increasingly hiring people without a traditional teacher-education degree in order to safeguard the provision of instruction (KMK, 2022). Before entering school service, non-traditionally trained teachers usually complete an alternative qualification programme. These programmes, however, are highly heterogeneous in their duration and content and presuppose different entry qualifications of applicants (Driesner & Arndt, 2020). As a rule, they are considerably shorter than traditional teacher-education degrees at universities and colleges, in order to allow rapid entry into school service. The shorter qualification thus entails fewer learning and teaching opportunities than would be found in a traditional teacher-education programme. It can consequently be assumed that non-traditionally trained teachers are less well prepared for the demands of the teaching profession.
This assumption is also widespread in the public debate, and criticism of alternative qualification programmes is strong. In 2019, for example, the president of the German Teachers' Association, Heinz-Peter Meidinger, told the newspaper "Die Welt" that the inadequate qualification of career changers was "a crime against the children" (Die Welt, 2019). Research in the German-speaking countries that could provide robust evidence to support this criticism, however, is still in its infancy. Initial studies generally point to few differences between traditionally and non-traditionally trained teachers (Kleickmann & Anders, 2011; Kunina-Habenicht et al., 2013; Oettinghaus, Lamprecht & Korneck, 2014). Studies that do find differences locate them primarily in the area of pedagogical knowledge, to the disadvantage of non-traditionally trained teachers. The question of further differences, for instance in teaching quality or occupational well-being, has not yet been answered for the German context.
The present work aims to close part of these research gaps. In three sub-studies, it addresses the questions of differences between traditionally and non-traditionally trained teachers with regard to their professional competence, career choice motivation, well-being and teaching quality. The overarching research question is examined against the background of the theoretical model of the determinants and consequences of professional competence (Kunter, Kleickmann, Klusmann & Richter, 2011). This model is also used for the theoretical review of the existing national and international research on differences between traditionally and non-traditionally trained teachers.
Sub-study I first examines differences in professional competence between traditionally and non-traditionally trained teachers. Following the competence model of Baumert and Kunter (2006), the two groups are compared on the four aspects of professional competence: professional knowledge, beliefs, motivational orientations, and self-regulatory skills. The focus of this work is on traditionally trained pre-service teachers and so-called career changers during the preparatory service. A secondary analysis of data from the COACTIV-R project was conducted, and differences were analyzed by means of multivariate analyses of covariance.
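The group comparisons in sub-studies I and II rest on multivariate analysis of covariance (MANCOVA): several competence outcomes are compared between two teacher groups while adjusting for covariates. The sketch below illustrates the general technique with statsmodels on synthetic data; the variable names, the covariate, and the data are purely illustrative and do not come from COACTIV-R or the IQB studies.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic data: two teacher groups, one covariate, four outcome facets
# (loosely mirroring professional knowledge, beliefs, motivational
# orientations, and self-regulatory skills -- names are hypothetical).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group": rng.choice(["traditional", "alternative"], size=n),
    "age": rng.normal(35, 6, size=n),  # covariate
})
for col in ["knowledge", "beliefs", "motivation", "selfreg"]:
    df[col] = rng.normal(0, 1, size=n)

# Including the covariate alongside the group factor turns the
# multivariate test on "group" into a covariance-adjusted comparison.
m = MANOVA.from_formula(
    "knowledge + beliefs + motivation + selfreg ~ group + age", data=df
)
print(m.mv_test())  # Wilks' lambda, Pillai's trace, etc. per term
```

With real data, the covariates would be chosen to equalize relevant background characteristics of the two groups before testing for competence differences.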
Sub-study II examines both determinants and consequences of professional competence. On the side of the determinants, differences in career choice motivation between teachers with and without a traditional teacher education degree are investigated. Furthermore, differences in occupational well-being (emotional exhaustion, enthusiasm) and in the intention to remain in the profession are analyzed as consequences of professional competence. The analysis draws on data from the 2019 pilot study for the Bildungstrend of the Institute for Educational Quality Improvement (IQB). Differences between traditionally and non-traditionally trained teachers were again computed by means of multivariate analyses of covariance.
Finally, sub-study III investigated differences in instructional quality between traditionally and non-traditionally trained teachers as a consequence of professional competence. For this purpose, data from the IQB-Bildungstrend 2018 were used in a secondary analysis employing doubly latent multilevel models. Differences were examined in the areas of absence of disruptions, cognitive activation, and student support.
The final chapter of the present work summarizes and discusses the central findings of the three sub-studies. The results indicate that traditionally and non-traditionally trained teachers differ significantly in only a few of the aspects examined. Non-traditionally trained teachers have less pedagogical knowledge, have better self-regulatory skills, and do not differ from traditionally trained teachers in their career choice motives, their well-being, or their instructional quality. The results open the door to a discussion of the relevance of traditional teacher education and provide a basis for implications for further research and for educational policy. Finally, the studies are evaluated with regard to their limitations.
Research programs bring together numerous actors with different backgrounds and areas of expertise in individual or joint projects, which, however, are largely carried out independently of one another. Given that societal challenges such as global warming increasingly require cross-disciplinary solutions, networking and transfer processes within research programs should receive greater attention. Implementing accompanying research ("Begleitforschung") is one way of meeting this demand. Accompanying research differs in its approach and objectives from the "usual" projects and can take different theoretical pure forms. In brief, it acts either (1) as a complement to the content of the respective research projects, (2) on a meta-level with a focus on the processes within the research program, or (3) as an integrating, synthesizing instance for which the networking of the projects within the research program and knowledge transfer are of central importance. Although these forms can be separated analytically into theoretical pure forms, in practice a mix of all three usually emerges.
In this context, the present dissertation builds as a complementary study on existing approaches to the methodological toolkit of accompanying research and focuses on the following questions: On what basis can the actors in a research program be networked so that they are brought together effectively? Which further methodological elements should build on this in order to generate added value that exceeds the sum of the individual results of the research program? What form can such added value take, and what role does accompanying research play in it?
The first methodological element is the collection and preparation of a baseline database. Keyword tagging of project-related texts based on semantic analysis makes it possible to generate a comprehensive database from the contents of the research projects. The keywords are structured in a keyword catalog using a controlled vocabulary. In parallel, they are assigned to the respective projects, which thereby acquire thematic attributes. To make thematic overlaps between research projects visible and interpretable, the second element comprises approaches to visualization. For this purpose, the information is transferred into a network graph that can depict both all projects involved in the research program and the identified keywords in relation to one another. This makes it possible, for example, to show which research projects are "closer" to one another in terms of content. Precisely this information is used in the third methodological element as a planning basis for different event formats such as working conferences or transfer workshops. The fourth methodological element comprises synthesis building. This takes the form of a process spanning the entire period of collaboration between the accompanying research and the other research projects, since the synthesis incorporates interim, partial, and final results of the projects as well as content from the various events. Ultimately, this fourth element is also the means of deriving recommendations for future undertakings from the integrated and synthesized information.
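The core of the first two methodological elements (projects tagged with controlled-vocabulary keywords, and a network in which projects with more shared keywords are thematically "closer") can be sketched in a few lines of plain Python. The project names and keywords below are hypothetical placeholders, not data from KlimAgrar; edge weights use Jaccard similarity as one plausible overlap measure.

```python
from itertools import combinations

# Each project carries a set of keywords from a controlled vocabulary.
projects = {
    "Project A": {"soil", "carbon", "monitoring"},
    "Project B": {"soil", "carbon", "tillage"},
    "Project C": {"livestock", "methane", "monitoring"},
}

def jaccard(a: set, b: set) -> float:
    """Share of keywords two projects have in common (0.0 .. 1.0)."""
    return len(a & b) / len(a | b)

# Weighted edges of the project network: higher weight = thematically closer.
edges = {
    (p, q): jaccard(projects[p], projects[q])
    for p, q in combinations(sorted(projects), 2)
}

for (p, q), w in sorted(edges.items(), key=lambda e: -e[1]):
    print(f"{p} -- {q}: {w:.2f}")
# Projects A and B share {soil, carbon} and come out closest (0.50).
```

A graph built from these weighted edges can then be visualized or clustered, and the closest project pairs used directly as a planning basis for joint workshop sessions.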
The methodological elements were developed during the ongoing process of the accompanying research project KlimAgrar, which serves as the case study for the present dissertation and whose background in the field of climate protection and climate adaptation in agriculture is explained in detail in the text.
Creative intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, and products must be sensibly adapted to changing times. To analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically, and analyze them for possible adaptations. For creative work, this has so far been possible only to a very limited extent. An accurate understanding of creative work faces the challenge that such work is, on the one hand, usually very complex and iterative and, on the other hand, at least partially unpredictable, since new things emerge. How can the complexity of creative business processes be adequately captured while remaining manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. A model is developed on the basis of which four elementary process components can be analyzed: Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling and Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details.
The modeling extension proposed here was developed using ethnographic data and then applied to other organizational process contexts, where it was evaluated by external parties as part of two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. By transforming it into a modeling method, it can be comprehensively integrated into process models, thus expanding the understanding of existing creative work in as-is process analyses.