Contract drafting is a topic of great practical relevance that, owing to the judiciary-centered orientation of legal scholarship, has so far been treated rather neglectfully. This study provides a remedy, at least for the particularly delicate constellation of complex cooperations between the state and private actors (public-private partnerships, ÖPP/PPP). The analysis is grounded in a well-founded typology and characterization of the problems of such projects. The theoretical framework is provided by an efficiency-oriented study of institutional-economics approaches, namely transaction cost theory and principal-agent theory, cross-checked against the practice-oriented ground rules of contractual risk allocation. On this basis, practical drafting proposals are developed for standard problems of contract design, such as performance specifications, adjustment mechanisms, dispute resolution rules, information mechanisms and termination rules. These are also explained in light of the conditions for project success.
Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany as of 2015, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative method, and quantitative method. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. 
Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
Investigation of Sirtuin 3 overexpression as a genetic model of fasting in hypothalamic neurons
(2021)
The controlled dosage of substances from a device into its environment, such as a tissue or an organ in medical applications, or a reactor, room, machine or ecosystem in technical ones, should ideally match the requirements of the application, e.g. in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and the contributions that geometrical design may make in realizing such features. The goals of this work included the design, fabrication, characterization and experimental proof-of-concept of a geometry-assisted triggerable dosing effect (a) with a sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular and morphological levels and, with particular attention, on the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should accommodate both static and dynamic measures to ensure optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection, as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models, and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
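The compaction idea can be illustrated with a toy sketch: rather than tabulating every simulated sample of a sensitivity metric, only the coefficients of a fitted model are stored. The LET sweep, the quadratic model form and all values below are illustrative assumptions, not the thesis' actual fitting functions.

```python
import numpy as np

# Toy "characterization" output: critical charge of one cell sampled over
# an LET sweep (synthetic data; the quadratic dependence is assumed).
let = np.linspace(1, 60, 200)               # particle LET sweep (a.u.)
qcrit = 0.5 + 0.02 * let + 1e-4 * let**2    # synthetic critical charge (fC)

# Full-LUT approach: store every simulated sample.
lut_entries = qcrit.size                    # 200 stored values

# Model-based approach: store only the fitted coefficients.
coeffs = np.polyfit(let, qcrit, deg=2)
model_entries = coeffs.size                 # 3 stored values

# The compact model reproduces the sweep to numerical precision here.
max_err = np.max(np.abs(np.polyval(coeffs, let) - qcrit))
```

The same trade-off applies per cell, per input level and per metric, which is where the significant database reduction comes from.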
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on SET robustness improvement, as well as the introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption, and immunity to error accumulation.
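The readout principle can be sketched as follows; the pulse widths, window length, detector area and the "twice the median" LET heuristic are purely illustrative assumptions, not the thesis' calibration.

```python
import statistics

# SET pulse widths (ns) recorded in one monitoring window (synthetic data).
pulse_widths_ns = [120, 135, 128, 510, 122, 131]
window_s = 10.0      # monitoring window length (s), illustrative
area_cm2 = 0.01      # sensitive detector area (cm^2), illustrative

# The particle flux follows directly from the number of detected SETs.
flux = len(pulse_widths_ns) / (window_s * area_cm2)   # hits / (cm^2 * s)

# A shift in the pulse-width distribution flags a change in particle LET;
# here the 510 ns pulse stands out as a likely high-LET strike.
median_w = statistics.median(pulse_widths_ns)
high_let_hits = [w for w in pulse_widths_ns if w > 2 * median_w]
```

Because both quantities derive from counting and binning digital pulses, the processing stays purely digital, as the text notes.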
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results with irradiation experiments.
Karl Peters (1904–1998)
(2021)
This book traces the life and work of the eminent criminal law scholar Karl Peters, with a particular focus on the National Socialist period. Active as a public prosecutor from 1932, he was appointed full professor in Greifswald only in 1942 on account of his Catholic confession, served as professor in Münster from 1946 to 1962, and then in Tübingen until 1972. Peters' work impresses through its breadth. Alongside an intensive engagement with criminal procedure, penal enforcement and juvenile criminal law, he conducted research in criminology, sociology, psychology, medicine and pedagogy. Guided by his Christian convictions, Peters set high standards for himself and for (criminal) lawyers. The study of miscarriages of justice and the law of retrial proceedings became his principal concern.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands; they also allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key ingredient of GW searches is the waveform model, which encapsulates our best prediction for the gravitational radiation under a certain set of parameters and needs to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are sufficient to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals' parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future detectors' data, since such biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak-field, small velocities) approximation that is most natural for the bound orbits routinely detected in GW searches. However, two other approximations have risen to prominence: the post-Minkowskian (PM; weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate for binaries with high asymmetry in the masses, which challenge current waveform models. Moreover, they allow one to "cover" regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show how this is done in detail without incurring the divergences that affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second part of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
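In the linear-signal approximation, parameter biases of this kind are commonly written in terms of the Fisher matrix; a standard textbook form (with notation assumed here, not taken verbatim from the thesis) is:

```latex
% Fisher matrix over waveform derivatives, with (a|b) the noise-weighted
% inner product on the detector data
\Gamma_{ij} = \left( \partial_i h \,\middle|\, \partial_j h \right),
% systematic shift of the inferred parameters due to an unmodeled residual
% \delta h (e.g. confusion noise from unfitted or misfitted signals)
\Delta\theta^i \approx \left( \Gamma^{-1} \right)^{ij}
    \left( \partial_j h \,\middle|\, \delta h \right).
```

Generic bias metrics of the sort described above can be read as estimating the size of $\Delta\theta^i$ for a population of residuals $\delta h$.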
In this work, a multidisciplinary investigation was carried out combining methods of tectonic geomorphology with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and the southern end of the Metán basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region through the proposal of a novel structural model, with the aim of increasing the available information on neotectonic structures and their seismogenic potential. To this end, various techniques were applied and integrated, such as the interpretation of seismic reflection lines, the construction of balanced structural cross-sections, and shallow geophysical methods, in order to verify the behavior at depth both of the geological structures identified at the surface and of the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL-2 multispectral satellite images, which allowed the recognition of different levels of Quaternary alluvial fans and fluvial terraces. By determining different morphometric indicators on digital elevation models (DEMs), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels that has been genetically related to four neotectonic faults. Three of them (the Arias, El Quemado and Copo Quile faults) were selected for more detailed studies through the application of shallow geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to corroborate their existence at depth, to make geometric and kinematic inferences, and to estimate the magnitude of the recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, while the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploratory wells belonging to hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to contextualize the main recognized structures within the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural cross-section using kinematic modeling techniques. This model suggests that the recognized Quaternary deformation is related to displacement of the basement along a blind thrust responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. Likewise, the kinematic model allows interpreting the approximate location of the main detachment levels that control the style of deformation.
The shallowest detachment level, which controls the deformation of the sedimentary cover, lies at a depth of 4 km; at 21 km, the presence of another subhorizontal shear zone within the basement is inferred.
Finally, based on the integration of all the results obtained, the seismogenic potential of the faults in the study area was evaluated. The first-order faults that control the deformation in the zone are responsible for the large earthquakes, whereas the Quaternary flexural-slip and reverse faults affect only the sedimentary cover and would be second-order structures that accommodate the deformation and were activated during the Quaternary by aseismic and/or very low magnitude seismic movements.
These results suggest that the La Candelaria thrust constitutes an important potential seismogenic source for the region, where numerous towns and major civil works are located. Moreover, the balanced structural cross-section implies the presence of other blind faults of different orders of magnitude that could be additional deep seismogenic sources, highlighting the need to continue developing this type of study in this tectonically active region.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in solution and in the solid state, and thereby at bringing the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The roles of 1) dopant size/shape, 2) polymer chain aggregation and 3) charge delocalization in the doping mechanism and efficiency are addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with 3 different dopants, we identify the unique optical signatures of the delocalized polaron, localized polaron and charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of the processing technique and doping mechanism on the morphology and, thereby, charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based; thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, and few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on these preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after they receive an in-depth role-script compared to those SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented and the hypothesis could be confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
Precipitation forecasting has an important place in everyday life – during the day we may have a dozen small conversations about the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
A system for precipitation nowcasting comprises two major components: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for diagnosing and isolating nowcast errors that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
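The tracking-and-extrapolation split behind such benchmark models can be illustrated with a minimal, self-contained sketch. This is deliberately not the rainymotion API: real optical-flow algorithms replace the brute-force correlation used here, and operational nowcasts use dense, sub-pixel motion fields rather than a single integer vector.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Brute-force integer motion estimate between two 2D frames
    (a toy stand-in for the optical-flow tracking step)."""
    rows, cols = len(prev), len(prev[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(rows):
                for x in range(cols):
                    sy, sx = y - dy, x - dx
                    if 0 <= sy < rows and 0 <= sx < cols:
                        score += prev[sy][sx] * curr[y][x]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def advect(frame, shift, steps=1):
    """Lagrangian persistence: move the latest field along the motion vector."""
    rows, cols = len(frame), len(frame[0])
    dy, dx = shift[0] * steps, shift[1] * steps
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            sy, sx = y - dy, x - dx
            if 0 <= sy < rows and 0 <= sx < cols:
                out[y][x] = frame[sy][sx]
    return out
```

Estimating motion from the two most recent frames and advecting the newest one forward is the parsimonious "tracking plus extrapolation" recipe that the benchmark models refine with proper optical flow and image warping.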
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The availability of "big data" for training is among the main reasons for this success. Hence, the emerging interest in deep learning in the atmospheric sciences has likewise been driven by, and has developed in concert with, the increasing availability of data, both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this end, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
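The recursive application of a 5-min model to reach longer lead times can be sketched generically; `model` here is any callable mapping the latest frame to the next 5-min prediction (a hypothetical stand-in, not RainNet's actual interface):

```python
def recursive_nowcast(model, latest_frame, n_steps=12):
    """Feed each 5-min prediction back into the model:
    12 steps of 5 min yield a nowcast sequence out to 1 h lead time."""
    predictions = []
    state = latest_frame
    for _ in range(n_steps):
        state = model(state)
        predictions.append(state)
    return predictions
```

Any error introduced at one step, including spatial smoothing, is inherited and compounded by all later steps, which is why recursive application matters for the behavior at longer lead times.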
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
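The two routine verification metrics reported above can be written down compactly. The sketch below is a generic formulation of MAE and of CSI as hits / (hits + misses + false alarms) over paired intensities, not the thesis's actual verification code:

```python
def mae(obs, fcst):
    """Mean absolute error over paired rain intensities (mm/h)."""
    return sum(abs(o - f) for o, f in zip(obs, fcst)) / len(obs)

def csi(obs, fcst, threshold):
    """Critical success index for exceedance of an intensity threshold
    (e.g. 0.125, 1 or 5 mm/h): hits / (hits + misses + false alarms)."""
    hits = misses = false_alarms = 0
    for o, f in zip(obs, fcst):
        o_exc, f_exc = o >= threshold, f >= threshold
        if o_exc and f_exc:
            hits += 1
        elif o_exc:
            misses += 1
        elif f_exc:
            false_alarms += 1
    total = hits + misses + false_alarms
    return hits / total if total else float("nan")
```

Because CSI is evaluated per threshold, a model can win at low thresholds while losing at high ones, which is exactly the RainNet-versus-rainymotion pattern described above.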
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
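The smoothing-versus-spectrum link can be reproduced in miniature. Below, a moving average, a crude stand-in for the smoothing a recursively applied network accumulates, attenuates the power spectral density of a 1D transect at high wavenumbers (small length scales); the naive DFT and all names are illustrative only.

```python
import cmath
import math

def power_spectrum(signal):
    """Naive DFT power spectrum of a 1D transect (O(n^2), fine for a demo)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

def box_smooth(signal, width=3):
    """Periodic moving average: each smoothing pass damps high wavenumbers."""
    half = width // 2
    n = len(signal)
    return [sum(signal[(i + d) % n] for d in range(-half, half + 1)) / width
            for i in range(n)]
```

A pure sine at wavenumber 8 on 32 samples loses most of its spectral power after a single 3-point smoothing pass; repeated (recursive) passes compound the loss, mirroring the numerical-diffusion analogy above.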
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the source of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
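Once feature tracks are available, the framework reduces to a short computation. The sketch below pairs the Euclidean location-error measure with a hypothetical linear-extrapolation benchmark (corner detection and tracking on radar images are omitted; coordinates are in km):

```python
import math

def location_error(observed_track, predicted_track):
    """Euclidean distance between observed and predicted feature
    locations at each lead time (same units as the coordinates)."""
    return [math.hypot(ox - px, oy - py)
            for (ox, oy), (px, py) in zip(observed_track, predicted_track)]

def linear_extrapolation(track, n_steps):
    """Predict future positions by extending the last displacement of an
    observed corner track, one of the simplest benchmark models."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    dx, dy = x1 - x0, y1 - y0
    return [(x1 + dx * k, y1 + dy * k) for k in range(1, n_steps + 1)]
```

Aggregating such per-track errors over many features and events yields lead-time-dependent error statistics of the kind reported in the benchmarking case study.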
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, and the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
The trace elements zinc and manganese are essential for human health, especially due to their enzymatic and protein-stabilizing functions. If these elements are ingested in amounts exceeding requirements, the regulatory processes maintaining their physiological concentrations (homeostasis) can be disturbed. Such homeostatic dysregulation can cause severe health effects, including the emergence of neurodegenerative disorders such as Parkinson's disease (PD). The concentrations of essential trace elements also change during the aging process. However, the relations of cause and consequence between increased manganese and zinc uptake and its influence on the aging process and the emergence of aging-associated PD are still poorly understood. This doctoral thesis therefore aimed to investigate the influence of a nutritive zinc and/or manganese oversupply on metal homeostasis during the aging process. For this, the model organism Caenorhabditis elegans (C. elegans) was applied. This nematode is well suited as an aging and PD model due to properties such as its short life cycle and its completely sequenced, genetically amenable genome. Different protocols for the propagation of zinc- and/or manganese-supplemented young, middle-aged and aged C. elegans were established, using wildtypes as well as genetically modified worm strains modeling heritable forms of parkinsonism. To identify homeostatic and neurological alterations, the nematodes were investigated with different methods, including the analysis of total metal contents via inductively coupled plasma tandem mass spectrometry, a specific probe-based method for quantifying labile zinc, survival assays, gene expression analysis, and fluorescence microscopy for the identification and quantification of dopaminergic neurodegeneration. During aging, the levels of iron, zinc and manganese increased.
Furthermore, simultaneous oversupply with zinc and manganese increased the total zinc and manganese contents to a higher extent than single-metal supplementation. In this context, the C. elegans metallothionein 1 (MTL-1) was identified as an important regulator of metal homeostasis. The total zinc content and the concentration of labile zinc were both regulated in an age-dependent but distinct manner. This highlights the importance of distinguishing these parameters as two independent biomarkers of zinc status. Not the metal oversupply, but aging increased the levels of dopaminergic neurodegeneration. Additionally, nearly all these results revealed differences in the aging-dependent regulation of trace element homeostasis between wildtypes and PD models. This confirms that an increased zinc and manganese intake can influence the aging process as well as parkinsonism by altering homeostasis, although the underlying mechanisms need to be clarified in further studies.
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids provide the function of the ionogels. The functionality of the respective gels, and thus the transfer of the properties of the ionic liquids to the ionogels, was verified and confirmed in this work by means of numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwoven mats were produced, using film casting and electrospinning as fabrication methods, each resulting in a model system. The work is thus structured around two thematic areas: "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. These flexible and transparent films can become the focus of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid yielded a homogeneous ionogel mat, which serves as a model for transferring the antimicrobial activity of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous possible applications. The present work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
In addition, this work contributed three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids to the class of ionic liquids.
Cyanobacteria are an abundant bacterial group found in a variety of ecological niches all around the globe. When they form blooms, they can pose a real threat to fish and mammals and can restrict the use of lakes or rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed on the protein and peptide level via immunoblotting, immunofluorescence microscopy and high performance liquid chromatography (HPLC). It was revealed that large amounts of RubisCO were located outside of carboxysomes under the applied high light stress. RubisCO aggregated mainly underneath the cytoplasmic membrane, where it forms a putative Calvin-Benson-Bassham (CBB) super complex together with other enzymes of photosynthesis. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa, enabling a faster and more energy-saving adaptation of the whole bloom to high light stress.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell in comparison to the wild type. Since growth of ΔmcyB is not impaired, other cyanopeptides produced by the cells, such as aeruginosin or cyanopeptolin, may also play a role in the stabilization of RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, which continued into the next day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Microcystin addition experiments with M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signals of both subunits of RubisCO and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of other cyanopeptides besides microcystin as signaling peptides, both intracellularly and extracellularly.
The life cycle of higher plants is based on recurring phases of growth and development built on repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation comprises two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) differing (amongst others) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as increased cotyledon size (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, a 2-oxoglutarate/Fe(II)-dependent dioxygenase, causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 & icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles, which carried different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, was the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches revealed contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay proved the three mutants to be exchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing for all three icu11 mutants being gain-of-function (GOF) mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein, thus causing the same effect as a loss-of-function (LOF) protein. Further, icu11-3 (eop1) mutants were shown to have an increased resistance towards paclobutrazol, a gibberellin (GA) inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, ICU11 subcellular localization was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and overall GA levels, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia displays characteristics atypical of heterostylous species, such as no obvious self-incompatibility (SI) and the repeated transition towards homostylous and fully selfing variants. The work was based on three Amsinckia spectabilis forms: a heterostylous form, consisting of two floral morphs with reciprocal positioning of sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing and the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, containing genes that encode the morph-specific traits and that are marked by tight linkage due to suppressed recombination. Natural populations possess a 1:1 S:L morph ratio, which can be explained by predominant disassortative mating of the two morphs, causing the dominant S-allele to occur only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes detected 56% elongated L-morph styles and 58% higher positioned S-morph anthers. Approximately 50% of the observed size differences were explained by an increase in cell elongation. Moreover, additional phenotypes were found, such as 21% enlarged S-morph pollen and no obvious SI, confirmed by hand-pollinated seed counts, in vivo pollen tube growth and the development of homozygous dominant SS individuals via selfing. The Amsinckia spectabilis S-locus was assumed to consist of at least the G- (style length), the A- (anther height) and the P- (pollen size) locus.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to other S-loci characterized within the plant kingdom, no strong evidence was found that a hemizygous region causes the suppressed recombination of the S-locus, so an inversion was assumed to be the causal mechanism.
This study addresses the contested limits of fraud by omission and provides clarity for legal practice by uncovering the doctrinal guidelines of the case law. At its center is the courts' interpretation of the fraud-specific duty to act as a guarantor (Garantenstellung). Since this interpretation ultimately cannot be explained by the supposedly prevailing triad of legal sources, statute, contract and prior dangerous conduct (Ingerenz), the notion of trust is identified and delineated as the material criterion on the basis of a thorough review of the entire body of fraud case law. Whether this in fact closes the legislative gap in section 13(1) of the German Criminal Code (StGB) in an appropriate manner is critically discussed in conclusion.
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs now facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction of and testing results for two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones, based on the conservation-of-moment principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which provide this earthquake model with the initial abilities to properly forecast interface seismicity. Then, I integrate SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones provided by the Seismic Hazard Inferred From Tectonics approach, based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). Thus, TEAM is designed to reduce the earthquake-number, and potentially the spatial, inconsistencies of its predecessor tectonic earthquake model during the 2015-2017 period. Also, I combine this new geodetic-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
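The conservation-of-moment idea at the core of SMERF2 can be illustrated numerically. The sketch below is not the SMERF2 formulation (which additionally involves seismic coupling, seismogenic thickness and dip angles); it only shows the generic step of scaling a truncated Gutenberg-Richter magnitude distribution so that it releases a prescribed geodetic moment rate. The Hanks-Kanamori moment-magnitude relation is standard; all parameter values are illustrative.

```python
def moment_from_magnitude(mw):
    """Hanks & Kanamori (1979): seismic moment in N*m for moment magnitude Mw."""
    return 10 ** (1.5 * mw + 9.05)

def gr_rate_for_moment_budget(moment_rate, b=1.0, m_min=5.0, m_max=8.5, dm=0.01):
    """Total annual rate of events with Mw >= m_min such that a truncated
    Gutenberg-Richter population releases exactly `moment_rate` (N*m/yr).

    The relative magnitude-frequency shape is fixed by the b-value; the
    absolute level is scaled to balance the given moment budget."""
    mags, weights = [], []
    m = m_min
    while m < m_max:
        mags.append(m + dm / 2)
        # incremental GR: events in [m, m+dm) proportional to 10^(-b*m)
        weights.append(10 ** (-b * m) - 10 ** (-b * (m + dm)))
        m += dm
    total_w = sum(weights)
    moment_per_unit_rate = sum(w / total_w * moment_from_magnitude(mm)
                               for w, mm in zip(weights, mags))
    return moment_rate / moment_per_unit_rate
```

Raising the maximum (or corner) magnitude increases the average moment released per event, so fewer events are needed to balance the same moment budget, one reason corner magnitudes are such sensitive parameters in forecasts of this kind.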
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
Kämpfe mit Marx
(2021)
The rediscovery of Marxism by the New Left: a history full of tension. Already before 1968, in the 1950s, a New Left emerged. Young academic intellectuals began to search for new points of departure for leftist thought beyond the polarized traditional labor movement. Newly founded theory journals became their organizational nuclei. In theory, the New Left found shared convictions and thereby also fused into an imagined community. Marxist theories in particular were rediscovered and reinterpreted. With Marx, the New Left fought common battles against a "bourgeois" public sphere, but with Marx it also increasingly fought battles among its own ranks. David Bebnowski uses the two West Berlin journals "Das Argument" and "PROKLA" as seismographs and probes for exploring the history of the New Left and of academic Marxism. It becomes clear that "1968" stands not only for new departures but also led to divisions that characterize the left to this day.
Digital software platforms such as iOS or Android evolve quickly. Through regular updates, their set of built-in (core) features increases. While innovation allows strengthening platforms amidst competition, it can hurt contributors when introducing core features that are already provided by third-party developers (Platform Coring).
This book addresses the underexplored phenomenon of Platform Coring and provides strategical guidance for platform owners and third-party contributors. Platform owners are well-advised to carefully consider the benefits and risks for their platform ecosystem.
The book contributes by highlighting avenues to employ Platform Coring for the competitive advantage of the platform and ecosystem simultaneously.
What would Brandenburg be without its many immigrants? Without the Huguenots, without the Bohemians, and the Swiss? Migration research has so far taken only marginal notice of the immigration of Swiss colonists to Brandenburg.
After enormous social tensions had arisen in Switzerland at the end of the 17th century, the "Great Elector" Friedrich Wilhelm turned to the mayor and the council of the city of Bern in 1683: he asked for "Zehen oder Zwantzig Familien" (ten or twenty families) "welche der Wirthschafft und Viehzucht wohl erfahren seyn" (well experienced in farming and cattle breeding). Coping with the consequences of the Thirty Years' War, which had depopulated large parts of the Kurmark, was his foremost objective of state. Among the later-born Swiss sons there were many willing to emigrate, so that a selection among them became necessary, "for the honor of the Swiss nation was at stake." Even today, the pride of this small group of immigrants can still be experienced in Nattwerder.
Dietmar Bleyl examines their fate both from an economic perspective (into the 19th century) and from a confessional perspective (up to 1949), thereby closing a gap in previous research.
In C3 plants, CO2 diffuses into the leaf and is assimilated by the Calvin-Benson cycle in the mesophyll cells. This leaves Rubisco open to its side reaction with O2, resulting in a wasteful cycle known as photorespiration. A sharp fall in atmospheric CO2 levels about 30 million years ago has further increased the side reaction with O2. The pressure to reduce photorespiration led, in over 60 plant genera, to the evolution of a CO2-concentrating mechanism called C4 photosynthesis; in this mode, CO2 is initially incorporated into 4-carbon organic acids, which diffuse to the bundle sheath and are decarboxylated to provide CO2 to Rubisco. Some genera, like Flaveria, contain several species that represent different steps in this complex evolutionary process. However, the majority of terrestrial plant species did not evolve a CO2-concentrating mechanism and perform C3 photosynthesis.
This thesis compares photosynthetic metabolism in several species with C3, C4 and intermediate modes of photosynthesis. Metabolite profiling and stable isotope labelling were performed to detect inter-specific differences in metabolite profiles and, hence, in how a pathway operates. The results obtained were subjected to integrative data analyses like hierarchical clustering and principal component analysis, and were deepened by correlation analyses to uncover specific metabolic features and reaction steps that were conserved or differed between species.
The main findings are that Calvin-Benson cycle metabolite profiles differ between C3 and C4 species and between different C3 species, including a very different response to rising irradiance in Arabidopsis and rice. These findings confirm that Calvin-Benson cycle operation diverged between C3 and C4 species and, most unexpectedly, even between different C3 species. Moreover, primary metabolic profiles supported the current C4 evolutionary model in the genus Flaveria, provided new insights and opened up new questions. Metabolite profiles also point toward a progressive adjustment of the Calvin-Benson cycle during the evolution of C4 photosynthesis. Overall, this thesis points out the importance of a metabolite-centric approach for uncovering underlying differences between species apparently sharing the same photosynthetic routes, and as a valid method to investigate the evolutionary transition between C3 and C4 photosynthesis.
Any physical system can be described on the level of interacting particles, thus it is of fundamental importance to improve the scientific understanding of interacting many-body systems. This thesis experimentally addresses specific quasi-particle interactions, namely interactions between electrons and between electrons and phonons. It describes the consequential effects of those processes on the electronic structure and the core-hole relaxation pathways in 3d metals. Despite the great amount of experimental and theoretical studies of these interactions and their impact on the behavior of solid-state matter, there are still open questions concerning the corresponding physical, chemical and mechanical properties of solid-state matter. Especially, the study of 3d metals and their compounds is a great experimental challenge, since those exhibit a variety of spectral features originating from many-body effects such as multiplet splitting, shake up/off satellites, vibrationally excited states or more complex effects like superconductivity and ultrafast demagnetization. In X-ray spectroscopy, these effects often produce overlapping features, complicating the analysis and limiting the understanding. In this thesis, to overcome the limitations set by conventional X-ray spectroscopy, two different experimental approaches were successfully refined, namely Auger electron photoelectron coincidence spectroscopy (APECS) and temperature-dependent X-ray emission spectroscopy (tXES), which enabled the separation of different core-hole relaxation pathways and the isolation of the impact of specific many-body interactions in the experimental spectra. APECS was utilized at the new Coincidence electron spectroscopy for chemical analysis (CoESCA) station at BESSY II to study the core-hole decay and electron-correlation effects in single-crystal Ni, Cu and Co.
The observation of photoelectrons in coincidence with Auger electrons allows for the separation of initial- and final-state effects in the Auger electron spectra. The results show that the Cu LVV Auger spectrum can be represented by broadened atomic multiplets, confirming the localized nature of the intermediate core-hole states. In contrast, the Co LVV Auger spectrum is band-like and can be represented by the self-convolution of the valence band. Ni shows mixed behavior, both localized and itinerant; thus, the Ni Auger spectrum can only be represented by a combination of atomic multiplet peaks and the self-convolved valence band. In the case of Ni, the LVV Auger electrons in coincidence with the 6 eV satellite photoelectrons were also studied. Utilizing the core-hole clock method, a lifetime of 1.8 fs could be determined for the localized double-hole intermediate 2p⁵3d⁹ states. However, a fraction of these states delocalizes before the Auger decay, contributing to the main peak. A similar delocalization was observed for the double-hole states produced by the L2L3M4,5 Coster-Kronig process. Additionally, the influence of surface oxidation on the Ni(111) 3p levels was studied with APECS. The Ni 3p PES spectrum is broad and featureless due to overlapping many-body effects, leaving little chance for exact analysis using conventional photoelectron spectroscopy. Utilizing APECS, or more precisely the final-state selectivity of the method, the spectral width of the 3p levels could be narrowed, and their positions and the spin-orbit splitting were determined. Moreover, owing to the surface sensitivity of the method, the chemically shifted 3p photoelectron peaks originating from the oxidized surface and from bulk Ni were disentangled. For the study of atomic electron-phonon spin-flip scattering in 3d metals as a spin-relaxation channel, the tXES method was developed at the SolidFlexRIXS station.
The atomic spin-flip scattering was studied in single-crystal Ni, Cu, Co, and in FeNi alloys, which show considerable differences in their behavior. The scattering rate in Ni increases with temperature, whereas the rate in Cu and Co remains constant within the measured temperature range up to 1000 K. In FeNi alloys, our results reveal that the spin-flip scattering is restricted by the sublattice exchange energies J: electron-phonon scattering driven spin-flips appear only when the thermal energy exceeds the exchange energy, kT > J. This thresholding is an important microscopic process for the description of sublattice dynamics in alloys but, as shown here, is also relevant for elemental magnetic systems. Overall, the results strongly indicate that the spin-flip probability is correlated with the exchange energy, which might become an important parameter in the ultrafast demagnetization debate. Taken together, the applied experimental approaches made it possible to study complex many-body effects in 3d metals. The results show that APECS enabled the distinction and clear assignment of otherwise overlapping features in the AES and PES spectra of Ni, Cu, Co, and NiO. This is of fundamental importance for the basic understanding of photoionization and core-hole decay processes, but also for chemical analysis in applied science. The measurement of the atomic electron-phonon spin-flip scattering rate with tXES shows that electron-phonon spin-flip scattering is a relevant atomic process for the macroscopic demagnetization process. Additionally, a temperature-dependent thresholding mechanism was discovered, which introduces an important dynamic factor into the electron-phonon spin-flip model.
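The thresholding condition kT > J described above can be checked with a one-line comparison; the exchange energy of 50 meV used below is an illustrative assumption, not a value from the thesis.

```python
K_B = 8.617e-5  # Boltzmann constant in eV/K

def spin_flip_allowed(temperature_k, exchange_energy_ev):
    """True if the thermal energy kT exceeds the exchange energy J,
    the thresholding condition for phonon-driven spin-flip scattering."""
    return K_B * temperature_k > exchange_energy_ev

# Illustrative numbers (assumed): a 50 meV exchange energy is crossed
# somewhere between room temperature and the ~1000 K range probed above.
print(spin_flip_allowed(300, 0.050))   # kT ~ 26 meV < 50 meV -> False
print(spin_flip_allowed(1000, 0.050))  # kT ~ 86 meV > 50 meV -> True
```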
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group, as well as in the IGM, this study aims to get a better understanding of how the gas is linked to the large-scale structure of the local Universe and the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group (LG), which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW, there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these clouds, 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 high-velocity clouds (HVCs) belonging to two samples: one in the direction of the LG's barycentre and the other in the anti-barycentre direction. Their velocities range from vLSR = -100 to -400 km/s for the barycentre sample and from +100 to +300 km/s for the anti-barycentre sample. Using Cloudy models, these data could then be used to derive gas volume densities for the HVCs. Because the density is related to the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to lie outside the MW's virial radius, their low densities (log nH ≤ -3.54) making it likely that they are part of the gas in between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31 as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070–6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were constructed for 91 sightlines, and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated, and 74 absorbers were found within 1000 km/s of the nearest filament segment.
In order to determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to lie closer to either galaxies or filaments, there is a large scatter in this relation. Despite this scatter, this study found that the absorbers do not follow a random distribution either: they cluster less strongly around filaments than around galaxies, but more strongly than a random distribution would, as confirmed by a Kolmogorov-Smirnov test.
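The Kolmogorov-Smirnov comparison used here boils down to the maximum vertical distance between two empirical cumulative distribution functions. A minimal pure-Python sketch; the impact parameters below are invented for illustration, not data from the study.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical CDFs. Larger values indicate a stronger
    difference between the distributions."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Hypothetical impact parameters (Mpc): absorbers clustered near
# filaments versus a uniform "random" comparison sample.
near_filaments = [0.1, 0.2, 0.2, 0.3, 0.5, 0.6]
random_sample = [0.4, 0.9, 1.3, 1.8, 2.2, 2.9]
print(round(ks_statistic(near_filaments, random_sample), 3))  # 0.833
```

A real analysis would convert the statistic into a p-value (e.g. with scipy.stats.ks_2samp) before claiming significance.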
Furthermore, the column density distribution function found in this study has a slope of -β = 1.63 ± 0.12 for the total sample and -β = 1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, but the two values are consistent within their errors. These values agree with those found in, e.g., Lehner et al. (2007) and Danforth et al. (2016).
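The slope of such a distribution function is commonly estimated by binning the column densities in log space and fitting a line to log(counts) versus log N. A rough sketch under simplifying assumptions (toy data, no completeness or redshift-path corrections, which a real CDDF fit would require):

```python
import math

def cddf_slope(log_n_values, n_bins=5):
    """Least-squares slope of log10(counts per bin width) vs. bin centre,
    a crude estimate of the CDDF power-law slope -beta."""
    lo, hi = min(log_n_values), max(log_n_values)
    width = (hi - lo) / n_bins
    xs, ys = [], []
    for i in range(n_bins):
        left = lo + i * width
        right = hi if i == n_bins - 1 else left + width
        count = sum(left <= n <= right if i == n_bins - 1 else left <= n < right
                    for n in log_n_values)
        if count > 0:
            xs.append(left + width / 2)
            ys.append(math.log10(count / width))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic log10 N(HI) values whose counts halve per bin (toy data).
log_n = [13.5] * 16 + [14.5] * 8 + [15.5] * 4 + [16.5] * 2 + [17.5]
print(round(cddf_slope(log_n), 3))  # -0.376
```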
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that by taking a large sample of sightlines and comparing the data gathered from those with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to get a better understanding of structure formation and evolution in the Universe.
While the last few decades have seen impressive improvements in several areas of Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. There are several different theories that aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Presumably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus annotated for discourse relations containing over 1 million words. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means the potential of modern, deep-learning based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning based methods can be combined with traditional, linguistic feature engineering to improve performance for the discourse parsing task. A pivotal role is played by connective lexicons that exhaustively list the discourse connectives of a particular language along with some of their core properties.
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture and either presents the first results or improves over the state of the art for German for the individual sub-tasks of the discourse parsing task, which are, in processing order, connective identification, argument extraction, and sense classification. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
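The first pipeline stage, connective identification, typically starts from a lexicon lookup. A minimal sketch with a toy lexicon (the entries and senses below are invented for illustration; real systems combine such lookups with classifiers to rule out non-connective readings of ambiguous words):

```python
# Toy connective lexicon: surface form -> possible discourse senses.
# These three entries are illustrative, not the contents of any real lexicon.
CONNECTIVE_LEXICON = {
    "aber": ["contrast"],
    "weil": ["cause"],
    "während": ["temporal", "contrast"],
}

def candidate_connectives(tokens):
    """Return (position, token, possible senses) for every token the
    lexicon lists as a potential discourse connective."""
    return [
        (i, tok, CONNECTIVE_LEXICON[tok.lower()])
        for i, tok in enumerate(tokens)
        if tok.lower() in CONNECTIVE_LEXICON
    ]

tokens = "Er blieb zu Hause , weil es regnete".split()
print(candidate_connectives(tokens))  # [(5, 'weil', ['cause'])]
```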
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Due to their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies are discussed for creating or further developing such lexicons for a particular language, as well as suggestions on how to further increase their usefulness for shallow discourse parsing.
The evolution of life on Earth has been driven by disturbances of different types and magnitudes over the 4.6 billion years of Earth’s history (Raup, 1994, Alroy, 2008). One example of such disturbances are mass extinctions, which are characterized by an exceptional increase in the extinction rate affecting a great number of taxa in a short interval of geologic time (Sepkoski, 1986). During the 541 million years of the Phanerozoic, life on Earth suffered five exceptionally severe mass extinctions named the “Big Five Extinctions”. Many mass extinctions are linked to changes in climate
(Feulner, 2009). Hence, the study of past mass extinctions is not only intriguing, but can also provide insights into the complex nature of the Earth system. This thesis aims at deepening our understanding of the triggers of mass extinctions and how they affected life. To accomplish this, I investigate changes in climate during two of the Big Five extinctions using a coupled climate model.
During the Devonian (419.2–358.9 million years ago) the first vascular plants and vertebrates evolved on land while extinction events occurred in the ocean (Algeo et al., 1995). The causes of these formative changes, their interactions and their links to changes in climate are still poorly understood. Therefore, we explore the sensitivity of the Devonian climate to various boundary conditions using an intermediate-complexity climate model (Brugger et al., 2019). In contrast to Le Hir et al. (2011), we find only a minor biogeophysical effect of changes in vegetation cover due to unrealistically high soil albedo values used in the earlier study. In addition, our results cannot support the strong influence of orbital parameters on the Devonian climate, as simulated with a climate model with a strongly simplified ocean model (De Vleeschouwer et al., 2013, 2014, 2017). We can only reproduce the changes in Devonian climate suggested by proxy data by decreasing atmospheric CO2. Still, finding agreement between the evolution of sea surface temperatures reconstructed from proxy data (Joachimski et al., 2009) and our simulations remains challenging and suggests a lower δ18O ratio of Devonian seawater. Furthermore, our study of the sensitivity of the Devonian climate reveals a prevailing mode of climate variability on a timescale of decades to centuries. The quasi-periodic ocean temperature fluctuations are linked to a physical mechanism of changing sea-ice cover, ocean convection and overturning in high northern latitudes.
In the second study of this thesis (Dahl et al., under review) a new reconstruction of atmospheric CO2 for the Devonian, which is based on CO2-sensitive carbon isotope fractionation in the earliest vascular plant fossils, suggests a much earlier drop of atmospheric CO2 concentration than previously reconstructed, followed by nearly constant CO2 concentrations during the Middle and Late Devonian. Our simulations for the Early Devonian with identical boundary conditions as in our Devonian sensitivity study (Brugger et al., 2019), but with a low atmospheric CO2 concentration of 500 ppm, show no direct conflict with available proxy and paleobotanical data and confirm that under the simulated climatic conditions carbon isotope fractionation represents a robust proxy for atmospheric CO2. To explain the earlier CO2 drop we suggest that early forms of vascular land plants have already strongly influenced weathering. This new perspective on the Devonian questions previous ideas about the climatic conditions and earlier explanations for the Devonian mass extinctions.
The second mass extinction investigated in this thesis is the end-Cretaceous mass extinction (66 million years ago) which differs from the Devonian mass extinctions in terms of the processes involved and the timescale on which the extinctions occurred. In the two studies presented here (Brugger et al., 2017, 2021), we model the climatic effects of the Chicxulub impact, one of the proposed causes of the end-Cretaceous extinction, for the first millennium after the impact. The light-dimming effect of stratospheric sulfate aerosols causes severe cooling, with a decrease of global annual mean surface air temperature of at least 26 °C and a recovery to pre-impact temperatures after more than 30 years. The sudden surface cooling of the ocean induces deep convection which brings nutrients from the deep ocean via upwelling to the surface ocean. Using an ocean biogeochemistry model we explore the combined effect of ocean mixing and iron-rich dust originating from the impactor on the marine biosphere. As soon as light levels have recovered, we find a short, but prominent peak in marine net primary productivity. This newly discovered mechanism could result in toxic effects for marine near-surface ecosystems. Comparison of our model results to proxy data (Vellekoop et al., 2014, 2016, Hull et al., 2020) suggests that carbon release from the terrestrial biosphere is required in addition to the carbon dioxide which can be attributed to the target material. Surface ocean acidification caused by the addition of carbon dioxide and sulfur is only moderate. Taken together, the results indicate a significant contribution of the Chicxulub impact to the end-Cretaceous mass extinction by triggering multiple stressors for the Earth system.
Although the sixth extinction we face today is characterized by human intervention in nature, this thesis shows that we can gain many insights into future extinctions from studying past mass extinctions, such as the importance of the rate of change (Rothman, 2017), the interplay of multiple stressors (Gunderson et al., 2016), and changes in the carbon cycle (Rothman, 2017, Tierney et al., 2020).
Role of dietary sulfonates in the stimulation of gut bacteria promoting intestinal inflammation
(2021)
The interplay between intestinal microbiota and host has increasingly been recognized as a major factor impacting health. Studies indicate that diet is the most influential determinant affecting the gut microbiota. A diet rich in saturated fat was shown to stimulate the growth of the colitogenic bacterium Bilophila wadsworthia by enhancing the secretion of the bile acid taurocholate (TC). The sulfonated taurine moiety of TC is utilized as a substrate by B. wadsworthia. The resulting overgrowth of B. wadsworthia was accompanied by an increased incidence and severity of colitis in interleukin (IL)-10-deficient mice, which are genetically prone to develop inflammation.
Based on these findings, the question arose whether the intake of dietary sulfonates also stimulates the growth of B. wadsworthia and thereby promotes intestinal inflammation in genetically susceptible mice. Dietary sources of sulfonates include green vegetables and cyanobacteria, which contain the sulfolipids sulfoquinovosyl diacylglycerols (SQDG) in considerable amounts. Based on literature reports, the gut commensal Escherichia coli is able to release sulfoquinovose (SQ) from SQDG and, in further steps, convert SQ to 2,3-dihydroxypropane-1-sulfonate (DHPS) and dihydroxyacetone phosphate. DHPS may then be utilized as a growth substrate by B. wadsworthia, which results in the formation of sulfide. Both sulfide formation and a high abundance of B. wadsworthia have been associated with intestinal inflammation.
In the present study, conventional IL-10-deficient mice were fed either a diet supplemented with the SQDG-rich cyanobacterium Spirulina (20%, SD) or a control diet. In addition, SQ, TC, or water was administered orally to conventional or gnotobiotic IL-10-deficient mice. The gnotobiotic mice harbored a simplified human intestinal microbiota (SIHUMI) either with or without B. wadsworthia. During the intervention period, the body weight of the mice was monitored, the colon permeability was assessed, and fecal samples were collected. After the three-week intervention, the animals were examined with regard to inflammatory parameters, microbiota composition, and sulfonate concentrations at different intestinal sites.
None of the mice treated with the above-mentioned sulfonates showed weight loss or intestinal inflammation. Only mice fed SD or gavaged with TC displayed a slight immune response. These mice also displayed an altered microbiota composition, which was not observed in mice gavaged with SQ. The abundance of B. wadsworthia was strongly reduced in mice fed SD, whereas it was in part slightly increased in mice treated with SQ or TC. The intestinal SQ concentration was elevated in mice orally treated with SD or SQ, whereas neither TC nor taurine concentrations were consistently elevated in mice gavaged with TC. Additional colonization of SIHUMI mice with B. wadsworthia resulted in a mild inflammatory response, but only in mice treated with TC. In general, the TC-mediated effects on the immune system and on the abundance of B. wadsworthia were not as strong as described in the literature.
In summary, neither the tested dietary sulfonates nor TC led to bacteria-induced intestinal inflammation in the IL-10-deficient mouse model, which was consistently observed in both conventional and gnotobiotic mice. For humans, this means that foods containing SQDG, such as spinach or Spirulina, do not increase the risk of intestinal inflammation.
This publication of the dissertation “Nutzungsfokussierte Evaluation in der Lehrkräftefortbildung Belcantare Brandenburg für musikunterrichtende Grundschullehrer*innen im ländlichen Raum” (a utilization-focused evaluation of the teacher training programme Belcantare Brandenburg for primary school teachers teaching music in rural areas) is an actor-oriented, exploratory evaluation. Since 2011, the Landesmusikrat Brandenburg e.V., in cooperation with several institutions, has been running this two-year training programme in the competence field of singing and song didactics for both trained music teachers and teachers teaching music out of field across the regions of the state of Brandenburg.
The underlying evaluation approach places the interests of the cooperating partners, who intend to draw practical consequences from the evaluation results, at the centre of the research; it is thus commissioned research. The evaluation serves to assure and optimize the content quality of the teacher training, to expand knowledge on the design of subject-didactic coaching, to make the research results visible for purposes of legitimation and participation, and to provide analytical decision support for continuing Belcantare Brandenburg beyond 2022.
The research concerns brought to the author by the stakeholders were condensed into four questions:
1. How satisfied are the participants with the series of events?
2. Which subject-related, didactic, and personal developments do the participating teachers perceive in themselves over the training period?
3. How do the coaching participants assess the opportunities and limits of music-didactic coaching as a form of professional development?
4. What conclusions about professional teacher training can be drawn from contrasting the empirical findings with those of theory?
These research questions were answered in two research phases:
1. The empirical data corpus was compiled between 2011 and 2015. During this time, research questions 1, 2, and 3 were particularly relevant for the project-accompanying quality assurance and continuation of the pilot and follow-up rounds of Belcantare Brandenburg. The evaluation study is exploratory in design: the variables for research questions 1 and 2 were worked out successively through document analyses and through interviews with the project management and participating teachers. Likewise, the semi-closed questionnaires serving as the central survey instruments for research questions 1 and 2 reflect this exploratory character and ensured that the participants (N=40) were given the opportunity to contribute their own perspectives. With an overall grade of “very good” (1.39) from the surveyed teachers, the design of the event series counts as a best-practice example: for the teachers, the action-oriented development of teaching content, learning objects, and matching classroom materials that suit their pupils, fit thematically, and can be used immediately or practised repeatedly are the essential criteria for making use of such a professionalization measure. The teachers’ development in both rounds studied shows that the out-of-field teachers perceive greater developmental gains after the end of the project than the subject specialists. At the same time, the self-assessed subject competence of the out-of-field teachers at the end of the training remains below that of the specialists.
Research question 3 is based on an exclusively qualitative design (N=16). As a result, the open form of subject-didactic coaching could be defined, its parameters described, and essential characteristics of coach constellations for internally differentiated coaching in teacher training identified.
2. In May 2019, in view of the worsening shortage of qualified teachers in Brandenburg, the cooperation partners resolved to continue the teacher training beyond 2022 as a quality assurance measure. This situation led in 2019 to the addition of research question 4, which implied a comprehensive and updated analysis of the theoretical and education-policy background of the intervention, with the aim of deepening the evaluation’s findings for a renewed recommendation. Addressing and designing self-directed learning processes in professionalizing teacher training emerged here as a central feature of an innovative learning culture.
The publication is divided into four parts: Part I presents the state of research on professionalizing teacher training from the perspectives of educational science and music pedagogy. Part II establishes the complex context justifying the object of evaluation. Part III contains the evaluation study. In Part IV, its inductively derived findings are contrasted with the state of research on professionalizing teacher training.
Staatsklimahaftung
(2021)
Climate lawsuits are steadily gaining relevance and offer various points of departure for legal analysis. This thesis examines to what extent state liability claims for climate damage can be established against Germany or the EU when commitments to reduce greenhouse gas emissions are not met. To this end, the German claim for official liability under § 839 BGB in conjunction with Art. 34 GG, the EU-law state liability claim against the member states, and the claim under Art. 340(2) TFEU against the EU are examined in detail. The thesis closes with reflections on the practical legal perspectives of state liability for climate damage, aimed at improving the prospects of success and realization of such claims.
In a world fighting dramatic global warming caused by human activities, research on the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in satisfying the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as the light absorber have seen impressive development within the last decade, nowadays reaching efficiencies comparable to mature photovoltaic technologies such as silicon solar cells. Yet, there are still several roadblocks to overcome before widespread commercialization of such devices is possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. For the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization adjusts the energy level alignment, reduces interfacial losses, and improves stability.
First, a strategy to tune the perovskite’s energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously shifting the vacuum level position and saturating the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole, while its magnitude can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
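The order of magnitude of such a dipole-induced vacuum-level shift can be estimated with the Helmholtz equation, ΔV = μ⊥·N/(ε0·εr). The numbers below (a normal dipole component of 1 D, a coverage of 1e14 molecules/cm², εr = 1) are illustrative assumptions, not values from the thesis; they land in the several-hundred-meV range quoted above.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
DEBYE = 3.336e-30  # 1 Debye in C*m

def vacuum_level_shift_mev(dipole_debye, coverage_per_cm2, eps_r=1.0):
    """Helmholtz-equation estimate of the work-function change (meV)
    produced by a uniform layer of molecular dipoles."""
    mu = dipole_debye * DEBYE    # normal dipole component, C*m
    n = coverage_per_cm2 * 1e4   # molecules per m^2
    return mu * n / (EPS0 * eps_r) * 1000.0  # volts -> meV

print(round(vacuum_level_shift_mev(1.0, 1e14)))  # 377
```

Reversing the sign of the dipole reverses the sign of the shift, consistent with the bidirectional tuning described in the text.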
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions (“triple cation perovskite” and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport layer interface. Upon tailored perovskite surface functionalization, the devices show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average for devices with MAPbBr3, while the impact is limited on triple-cation solar cells. This suggests that the proposed energy level tuning method is valid, but its effectiveness depends on factors such as the significance of the energetic offset compared to the other losses in the devices.
Finally, the method presented above is further developed by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. The HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties, except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy, and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and operational stability during maximum-power-point tracking, in addition to reduced hysteresis. By anchoring to the halide ions and forming a stable, ordered interfacial layer, XB promotes the formation of a high-quality interface, making it a particularly interesting candidate interaction for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. In this context, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
The Earth's electron radiation belts exhibit a two-zone structure, with the outer belt being highly dynamic due to the constant competition between a number of physical processes, including acceleration, loss, and transport. The flux of electrons in the outer belt can vary over several orders of magnitude, reaching levels that may disrupt satellite operations. Therefore, understanding the mechanisms that drive these variations is of high interest to the scientific community.
In particular, the important role played by loss mechanisms in controlling relativistic electron dynamics has become increasingly clear in recent years. It is now widely accepted that radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause, called magnetopause shadowing. Precipitation of electrons occurs due to pitch-angle scattering by resonant interaction with various types of waves, including whistler mode chorus, plasmaspheric hiss, and electromagnetic ion cyclotron waves. In addition, the compression of the magnetopause due to increases in solar wind dynamic pressure can substantially deplete electrons at high L shells where they find themselves in open drift paths, whereas electrons at low L shells can be lost through outward radial diffusion. Nevertheless, the role played by each physical process during electron flux dropouts still remains a fundamental puzzle.
Differentiation between these processes and quantification of their relative contributions to the evolution of radiation belt electrons requires high-resolution profiles of phase space density (PSD). However, such profiles of PSD are difficult to obtain because spacecraft observations are restricted to a single measurement point in space and time, which is further compounded by instrument inaccuracy. Data assimilation techniques aim to blend incomplete and inaccurate spaceborne data with physics-based models in an optimal way. In the Earth's radiation belts, data assimilation is used to reconstruct the entire radial profile of electron PSD, and it has become an increasingly important tool in validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the near-Earth hazardous radiation environment.
In this study, sparse measurements from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 are assimilated into the three-dimensional Versatile Electron Radiation Belt (VERB-3D) diffusion model, by means of a split-operator Kalman filter over a four-year period from 01 October 2012 to 01 October 2016. In comparison to previous works, the 3D model accounts for more physical processes, namely mixed pitch angle-energy diffusion, scattering by EMIC waves, and magnetopause shadowing. It is shown how data assimilation, by means of the innovation vector (the residual between observations and model forecast), can be used to account for missing physics in the model. This method is used to identify the radial distances from the Earth and the geomagnetic conditions where the model is inconsistent with the measured PSD for different values of the adiabatic invariants mu and K. As a result, the Kalman filter adjusts the predictions in order to match the observations, and this is interpreted as evidence of where and when additional source or loss processes are active.
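The split-operator approach can be illustrated with a minimal one-dimensional sketch: a radial-diffusion forecast step followed by a Kalman update against a single sparse observation. This is not the VERB-3D code; the grid, diffusion coefficient, lifetime, and noise levels below are hypothetical placeholders.

```python
import numpy as np

# Minimal 1D sketch of a split-operator Kalman filter for radial
# PSD assimilation (illustrative only; the thesis uses VERB-3D).
# Grid, diffusion coefficient, and noise levels are hypothetical.

nL = 30                          # number of L-shell grid points
L = np.linspace(1.5, 6.5, nL)
dL = L[1] - L[0]
dt = 3600.0                      # time step: 1 hour

D_LL = 1e-11 * L**6              # radial diffusion coefficient [1/s]
tau = 10 * 86400.0               # electron lifetime [s]

def forecast(f):
    """One explicit step of df/dt = L^2 d/dL(D_LL/L^2 df/dL) - f/tau."""
    flux = D_LL[:-1] / L[:-1]**2 * np.diff(f) / dL
    df = np.zeros_like(f)
    df[1:-1] = L[1:-1]**2 * np.diff(flux) / dL
    return f + dt * (df - f / tau)

f = np.exp(-(L - 4.5)**2)        # initial PSD profile
P = 0.5 * np.eye(nL)             # state error covariance
Q = 1e-3 * np.eye(nL)            # process (model) noise
R = np.array([[0.05]])           # observation error variance

# Sparse observation: one satellite samples a single L shell
obs_idx, obs_val = 18, 0.9
H = np.zeros((1, nL)); H[0, obs_idx] = 1.0

# Forecast step (physics), then analysis step (Kalman update)
f = forecast(f)
P = P + Q
d = np.array([obs_val]) - H @ f  # innovation: observation minus forecast
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
f = f + K @ d
P = (np.eye(nL) - K @ H) @ P
# A systematically nonzero innovation d at particular L shells and
# activity levels flags physics that the model is missing.
```

In the thesis the forecast operator is the full 3D diffusion model and observations come from several spacecraft, but the innovation-based diagnostic works in the same way.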
Furthermore, two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons are investigated: EMIC wave-induced scattering and magnetopause shadowing. The innovation vector is inspected for values of the invariant mu ranging from 300 to 3000 MeV/G, and a statistical analysis is performed to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. The results of this work agree with previous studies that demonstrated the energy dependence of these two mechanisms. EMIC wave scattering dominates loss at lower L shells and may amount to between 10%/hr and 30%/hr of the maximum value of PSD over all L shells for fixed first and second adiabatic invariants. Magnetopause shadowing, on the other hand, is found to deplete electrons across all energies, mostly at higher L shells, resulting in losses of 50%/hr to 70%/hr of the maximum PSD. During times of enhanced geomagnetic activity, however, both processes can operate beyond these locations and encompass the entire outer radiation belt.
The results of this study are twofold. Firstly, it demonstrates that the 3D data-assimilative code provides a comprehensive picture of the radiation belts and is an important step toward performing reanalysis using observations from current and future missions. Secondly, it achieves a better understanding of, and provides critical clues about, the dominant loss mechanisms responsible for the rapid dropouts of electrons at different locations across the outer radiation belt.
There is a general consensus that diverse ecological communities are better equipped to adapt to changes in their environment, but our understanding of the mechanisms by which they do so remains incomplete. Accurately predicting how the global biodiversity crisis affects the functioning of ecosystems, and the services they provide, requires extensive knowledge about these mechanisms.
Mathematical models of food webs have been successful in uncovering many aspects of the link between diversity and ecosystem functioning in small food web modules containing at most two adaptive trophic levels. Meaningful extrapolation of this understanding to the functioning of natural food webs remains difficult, due to the presence of complex interactions that are not always accurately captured by bitrophic descriptions of food webs. In this dissertation, we expand this approach to tritrophic food web models by including the third trophic level. Using a functional trait approach, we ensure coexistence of all species through fitness-balancing trade-offs. For example, the defense-growth trade-off implies that species may be defended against predation, but that this defense comes at the cost of a lower maximal growth rate. In these food webs, the functional diversity on a given trophic level can be varied by modifying the trait differences between the species on that level.
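The modelling idea can be sketched with a minimal tritrophic system. The equations below are an illustrative stand-in, not the thesis model: two basal species share a hypothetical defense-growth trade-off (species 2 is less edible but grows more slowly), an intermediate consumer grazes on both, and a top predator feeds on the consumer.

```python
import numpy as np

# Illustrative tritrophic sketch (not the thesis model): two basal
# species B1, B2 with a hypothetical defense-growth trade-off
# (B2 is less edible, s2 < s1, but grows more slowly, r2 < r1),
# an intermediate consumer I, and a top predator T.

r = (1.0, 0.7)          # basal growth rates (trade-off: r2 < r1)
s = (1.0, 0.3)          # edibilities        (trade-off: s2 < s1)
K, a, h, e = 1.0, 1.0, 0.5, 0.3      # capacity, attack, handling, efficiency
aT, hT, eT, m = 1.0, 0.5, 0.3, 0.1   # top-predator parameters, mortality

def derivs(y):
    B1, B2, I, T = y
    Bs = s[0]*B1 + s[1]*B2 + 1e-12    # edibility-weighted basal biomass
    graze = a * Bs / (1 + a*h*Bs)     # consumer functional response
    pred = aT * I / (1 + aT*hT*I)     # predator functional response
    dB1 = r[0]*B1*(1 - (B1+B2)/K) - graze*I * s[0]*B1/Bs
    dB2 = r[1]*B2*(1 - (B1+B2)/K) - graze*I * s[1]*B2/Bs
    dI = e*graze*I - pred*T - m*I
    dT = eT*pred*T - m*T
    return np.array([dB1, dB2, dI, dT])

# Integrate with classical RK4
y, dt = np.array([0.4, 0.4, 0.2, 0.1]), 0.01
for _ in range(20000):               # 200 time units
    k1 = derivs(y)
    k2 = derivs(y + 0.5*dt*k1)
    k3 = derivs(y + 0.5*dt*k2)
    k4 = derivs(y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
```

Varying the trait differences in r and s changes the functional diversity on the basal level; adding more species per level and more trait axes leads toward the food webs studied in the thesis.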
In the first project, we find that functional diversity promotes high biomass on the top level, which, in turn, reduces temporal variability through compensatory dynamical patterns governed by the top level. Next, these results are generalized by investigating the average behavior of tritrophic food webs over wide intervals of all parameters describing species interactions. We find that the diversity on the top level is most important in determining the biomass and temporal variability of all other trophic levels, and show that biomass is only transferred efficiently to the top level when diversity is high everywhere in the food web. In the third project, we compare the response of a simple food chain to a nutrient pulse perturbation with that of a food web with diversity on every trophic level. By jointly considering resistance, resilience, and elasticity, we uncover that the response is efficiently buffered when biomass on the top level is high, which is facilitated by functional diversity on every trophic level. Finally, in the fourth project, we show that even in a simple consumer-resource model without any diversity, top-down control on the intermediate level frequently causes the phase difference between the intermediate and basal levels to deviate from the quarter-cycle lag rule. By adding a top predator, we show that these deviations become even more likely, and anti-phase cycles are often observed.
The combined results of these projects show how the properties of the top trophic level, including its functional diversity, have a decisive influence on the functioning of tritrophic food webs from a mechanistic perspective. Because top species are often among the most vulnerable to extinction, our results emphasize the importance of their conservation in ecosystem management and restoration strategies.
Eukaryotic cells can be regarded as complex microreactors capable of performing, in parallel, the various biochemical reactions necessary to sustain life. An essential prerequisite for these complex metabolic reactions is the evolution of lipid membrane-bound organelles, which enable compartmentalization of reactions and biomolecules. This allows spatiotemporal control over the metabolic reactions within the cellular system. Intracellular organization arising from compartmentalization is a key feature of all living cells and has inspired synthetic biologists to engineer such systems with bottom-up approaches.
Artificial cells provide an ideal platform to isolate and study specific reactions without interference from the complex network of biomolecules present in biological cells. To mimic the hierarchical architecture of eukaryotic cells, multi-compartment assemblies with nested liposomal structures, also referred to as multi-vesicular vesicles (MVVs), have been widely adopted. Most previously reported multi-compartment systems rely on bulk methodologies, which suffer from low yield and poor control over size. Microfluidic strategies help circumvent these issues and provide a high-throughput and robust technique for assembling MVVs with a uniform size distribution.
In this thesis, firstly, bulk methodologies are explored to build MVVs and implement a synthetic signalling cascade. Next, a polydimethylsiloxane (PDMS)-based microfluidic platform is introduced to build MVVs, and the significance of PEGylated lipids for the successful encapsulation of inner compartments to generate stable multi-compartment systems is highlighted.
Next, a novel two-inlet-channel PDMS-based microfluidic device to create MVVs encompassing a three-step enzymatic reaction cascade is presented. A directed reaction pathway comprising the enzymes α-glucosidase (α-Glc), glucose oxidase (GOx), and horseradish peroxidase (HRP), spanning three compartments via reconstitution of size-selective membrane proteins, is described. Furthermore, owing to the monodispersity of our MVVs afforded by the microfluidic strategy, this platform is employed to study the effect of compartmentalization on reaction kinetics.
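Ignoring the compartment boundaries, the sequential cascade can be sketched as three coupled Michaelis-Menten steps in a well-mixed volume; all rate constants below are hypothetical and the size-selective membrane transport between compartments is deliberately omitted.

```python
import numpy as np

# Minimal kinetic sketch of the three-step cascade
# (alpha-Glc -> GOx -> HRP), with each step treated as a
# Michaelis-Menten reaction and the size-selective membrane
# transport between compartments ignored. All rate constants
# are hypothetical.

def mm(vmax, Km, s):
    """Michaelis-Menten rate."""
    return vmax * s / (Km + s)

# c = [substrate, glucose, H2O2, detectable product]
c = np.array([1.0, 0.0, 0.0, 0.0])
params = [(0.5, 0.2),   # alpha-glucosidase: substrate -> glucose
          (0.8, 0.1),   # glucose oxidase:   glucose   -> H2O2
          (1.2, 0.05)]  # HRP:               H2O2      -> product

dt, steps = 0.01, 2000  # integrate to t = 20 with explicit Euler
for _ in range(steps):
    v1 = mm(*params[0], c[0])
    v2 = mm(*params[1], c[1])
    v3 = mm(*params[2], c[2])
    c = c + dt * np.array([-v1, v1 - v2, v2 - v3, v3])
```

Comparing such a well-mixed limit with the encapsulated cascade, where each intermediate must cross a membrane to reach the next enzyme, is one way to quantify the effect of compartmentalization on the overall kinetics.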
Further integration of a cell-free expression module into the MVVs would allow gene-mediated signal transduction within artificial eukaryotic cells. Therefore, the chemically inducible cell-free expression of the membrane protein alpha-hemolysin and its subsequent reconstitution into liposomes is carried out.
In conclusion, the present thesis aims to build artificial eukaryotic cells that achieve size-selective chemical communication and that also show potential for applications as microreactors and as vehicles for drug delivery.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically-dependent fluid flow patterns in porous media and single macroscopic rock fractures have been investigated extensively and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing the connectivity of pore throats or fracture apertures, need further elaboration. This is of significant importance for improving knowledge of the long-term evolution of rock transport properties and for evaluating a reservoir's sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single-phase fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
Permeability of intact Flechtinger sandstone samples was measured under each constant condition, while temperature (room temperature up to 145 °C) and fluid salinity (NaCl: 0-2 mol/l) were changed stepwise. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate the changes in local porosity, microstructure, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstone is impaired by heating and by exposure to low-salinity pore fluids. The chemically induced permeability variations prove to be path-dependent with respect to the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and that induced by a fluid salinity reduction operate by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects, respectively.
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned and mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative fracture wall offset can significantly increase fracture aperture and permeability, with the degree of increase depending on fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated with the fracture offset. Moreover, rock mechanical properties, which determine the strength of contact asperities, are crucial: the relatively harder rock (i.e., Fontainebleau sandstone) has a higher self-propping potential for sustaining permeability during pressurization. This implies that self-propping rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
Finally, two long-term flow-through experiments on Fontainebleau sandstone samples containing single fractures were conducted, one with intermittent flow (~140 days) and one with continuous flow (~120 days). Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred in the initial stage when the stress was applied, but converged at later stages, even under stressed conditions. Fluid chemistry and microstructural observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with remarkable effects of the pore fluid (Si) concentration and the structure of the contacting grain boundaries. The retardation and cessation of fracture deformation are mainly induced by the decrease in contact stress due to contact area enlargement and by the accumulation of dissolved mass within the contact boundaries. This work implies that fracture closure under constant pressure/stress and temperature conditions is likely a spontaneous process, especially in the initial stage after pressurization when the contact area is relatively small. In contrast, contact area growth changes the fracture closure behavior through the evolution of the contact boundaries and concurrent changes in their diffusive properties. Fracture aperture, and thus permeability, will likely be sustainable in the long term if no other processes (e.g., mineral precipitation in the open void space) occur.
"If you can't measure it, you can't manage it." This slogan, attributed variously to Peter Drucker, Henry Deming, or Robert Kaplan and David Norton, expresses a deep conviction in the necessity and usefulness of performance management, an approach that has also reached and shaped public administration. At the same time, it implies a decisive role for performance information. This dissertation places that critical element, performance information, at the center of its research interest, and more precisely the use of performance indicators.
The starting point is the scholarly observation that performance indicators are not always and automatically used in the manner required and predicted by theory. Poor implementation of the management approach or flaws in its theoretical foundation are possible explanations. The review of the state of research made clear that explanations are sought primarily in the organizational setting and in factors related to performance management, a hallmark of a rather technocratic, implementation-focused perspective on the problem of use. The intrapersonal level, which is important from a neuroscientific point of view, plays a subordinate role.
Against this background, an empirical study grounded in neuroscientific findings examined the effect of experience-related variables on usage behavior. The analysis investigated how experience arises at the organizational level and how, in detail, it affects usage behavior. Police executives served as the research population. The data were collected online in late 2016 and early 2017.
The data analysis and the discussion of the findings yield the following key insights:
(1) Experience influences the use of performance information. The type of experience with performance indicators acts as a mediating variable. Organizational factors in particular, such as the maturity of the performance management system, affect usage behavior via the experience factor.
(2) It is also worth noting that engaging with performance indicators has a positive effect on both the stock of experience and the use of indicators. Overall, the neuroscientifically inspired variables proved to be promising explanatory factors.
(3) Furthermore, the study corroborated existing findings, above all the effect of the aforementioned maturity level, although differences also emerged. For example, the transformational leadership style, in combination with the type of experience, loses its positive effect on indicator use.
(4) The results of the laboratory and quasi-experiments are also of interest. For the first time, non-purposeful types of use were observed experimentally. In addition, neuroeconomic and behavioral-economic explanatory approaches were identified and discussed, enriching the research discourse. They offer a new perspective on usage behavior and provide impetus for further research.
For New Public Management (NPM), in whose toolbox this management approach plays a key role, these findings weigh heavily. Without functioning performance management, the important reform goal of outcome orientation cannot be achieved. NPM thus runs the risk of developing dysfunctions of its own.
Overall, it seems advisable to place a stronger focus on intrapersonal factors when examining management systems. Behavioral anomalies in the context of management, and their implications, should also be investigated more closely. It is further evident that a purely technocratic view of performance management is not expedient. Consequently, performance management must be developed further, both theoretically and conceptually.
The study thus delivers important new insights into the use of performance information and into the understanding of performance management. Above all, it broadens the research discourse by demonstrating the explanatory power of intrapersonal factors and by opening up new perspectives on the problem of use, methodologically through a mixed-method (multimethod) design and theoretically through neuroeconomics and behavioral economics.
In the last five years, gravitational-wave astronomy has gone from a purely theoretical field to a thriving experimental science. Several gravitational-wave signals, emitted by stellar-mass binary black holes and binary neutron stars, have been detected, and many more are expected in the future as a consequence of the planned upgrades of the gravitational-wave detectors. The observation of the gravitational-wave signals from these systems, and the characterization of their sources, relies heavily on precise models of the emitted gravitational waveforms. To take full advantage of the increased detector sensitivity, it is therefore necessary to also improve the accuracy of the gravitational-waveform models.
In this work, I present an updated version of the waveform models for spinning binary black holes within the effective-one-body formalism. This formalism is based on the notion that the solution to the relativistic two-body problem varies smoothly with the mass ratio of the binary system, from the equal-mass regime to the test-particle limit. For this reason, it provides an elegant method to combine, within a unique framework, the solutions to the relativistic two-body problem in different regimes. The two main regimes combined under the effective-one-body formalism are the slow-motion, weak-field limit (accessible through post-Newtonian theory) and the extreme mass-ratio regime (described using black-hole perturbation theory). The formalism is nevertheless flexible enough to incorporate information about the solution to the relativistic two-body problem obtained with other techniques, such as numerical relativity.
The novelty of the waveform models presented in this work is the inclusion of beyond-quadrupolar terms in the waveforms emitted by spinning binary black holes. While the time variation of the source quadrupole moment is the leading contribution to the waveforms emitted by binary black holes observable by the LIGO and Virgo detectors, beyond-quadrupolar terms can be important for binary systems with asymmetric masses, large total mass, or a large inclination angle with respect to the orbital angular momentum of the binary. For this purpose, I combine approximate analytic expressions for these beyond-quadrupolar terms with their calculation from numerical relativity to develop an accurate waveform model, including inspiral, merger, and ringdown, for spinning binary black holes. I first construct this model in the simplified case of black holes with spins aligned with the orbital angular momentum of the binary; I then extend it to the case of generic spin orientations. Finally, I test the accuracy of both models against a large number of waveforms obtained from numerical relativity. The waveform models I present in this work are the state of the art for spinning binary black holes, without restrictions on the allowed values of the masses and spins of the system.
Measuring the source properties of a binary system emitting gravitational waves requires computing O(10^7-10^9) different waveforms. Since the waveform models mentioned above can require O(1-10) s to generate a single waveform, they can be difficult to use in data-analysis studies, given the increasing number of sources observed by the LIGO and Virgo detectors. To overcome this obstacle, I use the reduced-order-modeling technique to develop a faster version of the waveform model for black holes with spins aligned with the orbital angular momentum of the binary. This version of the model is as accurate as the original and reduces the time to evaluate a waveform by two orders of magnitude.
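The reduced-order idea can be illustrated schematically: decompose a set of training waveforms into a small basis (here via SVD), so that any waveform in the family is represented by a handful of projection coefficients instead of thousands of time samples. The toy waveform family below is a hypothetical stand-in for an expensive waveform generator, not the actual model.

```python
import numpy as np

# Schematic reduced-order-modeling sketch (not the actual waveform
# model): build a small SVD basis from training waveforms, then
# represent each waveform by a few projection coefficients.
# toy_waveform is a hypothetical stand-in for a slow generator.

t = np.linspace(0.0, 1.0, 2048)

def toy_waveform(q):
    """Toy damped oscillation parameterized by q (e.g. a mass ratio)."""
    return np.sin(2*np.pi*(5 + 2*q)*t) * np.exp(-2*(1 - q)*t)

# Offline (expensive): generate training set and build reduced basis
qs = np.linspace(0.1, 1.0, 50)
training = np.array([toy_waveform(q) for q in qs])
U, S, Vt = np.linalg.svd(training, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 1 - 1e-8)) + 1
basis = Vt[:k]                    # k basis vectors of length 2048

# Online (fast): a waveform is now k numbers instead of 2048 samples
coeffs = training @ basis.T
recon = coeffs @ basis
err = np.max(np.abs(recon - training))
```

In a real reduced-order model the projection coefficients are fitted or interpolated across the full parameter space, which is what yields the speedup mentioned above.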
The waveform models developed in this thesis have been used by the LIGO and Virgo collaborations in the inference of the source parameters of the gravitational-wave signals detected during the second observing run (O2), and first half of the third observing run (O3a) of LIGO and Virgo detectors. Here, I present a study on the source properties of the signals GW170729 and GW190412, for which I have been directly involved in the analysis. In addition, these models have been used by the LIGO and Virgo collaborations to perform tests on General Relativity employing the gravitational-wave signals detected during O3a, and to analyze the population of the observed binary black holes.
This dissertation addresses three thematic focal points. The results section centers on the chemical synthesis of so-called (1,7)-naphthalenophanes, which belong to the compound class of cyclophanes. While numerous synthetic methods pursue strategies for constructing ring systems (such as naphthalenophanes) that are already part of an existing aromatic structure in the starting material, only a few approaches use reactions that establish the ring closure to the desired product in the course of the synthesis itself. One benzannulation that has received particular attention in our group is the dehydro-Diels-Alder reaction (DDA reaction). In this work, it was shown that twelve selected (1,7)-naphthalenophanes, some of them ring-strained and macrocyclic, can be made accessible via a photochemical variant of the DDA reaction (PDDA reaction). Attempts to prepare (1,7)-naphthalenophanes thermally (TDDA reaction) failed. The exceptional reactivity of the photoreactants could be explained by a folded ground-state geometry identified through quantum-chemical calculations. In addition, ring strains and structural strain indicators of the relevant photoproducts were determined, and trends as a function of linker length in the NMR spectra of the target compounds were identified and discussed. Moreover, varying the chromophore (acyl, carboxylic acid, and carboxylic ester) of the photoreactants upon irradiation in dichloromethane showed comparable photokinetics and photoreactivity. The second part of this dissertation is devoted to the design and development of two photoreactors for continuous-flow UV applications, since photochemical transformations are known to be limited in their scalability.
In the first prototype, efficient parallel operation of up to three UV lamps (λ = 254, 310, and 355 nm) afforded product quantities of up to n = 188 mmol in a selected case study. In the structurally much simplified second photoreactor, all quartz components were replaced with cheaper PLEXIGLAS®, resulting in identical space-time yields for the previously chosen synthetic example. Continuous-flow UV photochemistry thus offers advantages over traditional irradiation in an immersion-well reactor; in terms of reaction time, product yields, and solvent consumption it is synthetically far superior. In the final part of the thesis, these findings were used to prepare biomedically and pharmacologically promising 1-arylnaphthalene lignans via an intramolecular PDDA reaction (IMPDDA reaction) as the key step. To this end, three concepts were developed and realized in the total synthesis of three selected target structures based on the 1-arylnaphthalene scaffold.
Influenza A virus (IAV) is a pathogen responsible for severe seasonal epidemics threatening human and animal populations every year. During the viral assembly process in the infected cells, the plasma membrane (PM) has to bend in localized regions into a vesicle towards the extracellular side. Studies in cellular models have proposed that different viral proteins might be responsible for inducing membrane curvature in this context (including M1), but a clear consensus has not been reached. M1 is the most abundant protein in IAV particles. It plays an important role in virus assembly and budding at the PM. M1 is recruited to the host cell membrane where it associates with lipids and other viral proteins. However, the details of M1 interactions with the cellular PM, as well as M1-mediated membrane bending at the budozone, have not been clarified.
In this work, we used several experimental approaches to analyze M1-lipids and M1-M1 interactions. By performing SPR analysis, we quantified membrane association for full-length M1 and different genetically engineered M1 constructs (i.e., N- and C-terminally truncated constructs and a mutant of the polybasic region). This allowed us to obtain novel information on the protein regions mediating M1 binding to membranes. By using fluorescence microscopy, cryogenic transmission electron microscopy (cryo-TEM), and three-dimensional (3D) tomography (cryo-ET), we showed that M1 is indeed able to cause membrane deformation on vesicles containing negatively-charged lipids, in the absence of other viral components. Further, sFCS analysis proved that simple protein binding is not sufficient to induce membrane restructuring. Rather, it appears that stable M1-M1 interactions and multimer formation are required to alter the bilayer three-dimensional structure through the formation of a protein scaffold.
Finally, to mimic the budding mechanism in cells, which arises from the lateral organization of the virus membrane components on lipid raft domains, we created vesicles with lipid domains. Our results showed that local binding of M1 to spatially confined acidic lipids within membrane domains of vesicles led to local inward membrane curvature.
This thesis deals with business start-ups by university graduates with a migration background. It examines, above all, the relationship between these start-ups and the environment in which they take place, the entrepreneurial ecosystem, as well as their mutual interactions. The research subject lies at the intersection of entrepreneurship, migration, and high qualification. The focus on the very specific target group of start-ups by academics with a migration background fills an important gap in previous research.
Methodologically, this thesis employs a theoretical frame of reference consisting of neo-institutionalist organization theory (Meyer & Rowan 1977), the resource dependence approach (Pfeffer & Salancik 1978), and the six-domain model of the entrepreneurial ecosystem (Isenberg 2011). Start-ups by academics with a migration background must adapt their internal design to the demands of the institutional environment in order to secure the necessary legitimacy; as a result, isomorphic organizational structures can emerge across different start-ups. Moreover, academic founders with a migration background can enable or ease access to non-substitutable resources for founding and business development through interorganizational activities. The combination of the two theories and the explanatory model therefore provides an effective and suitable analytical tool for this study and gives the reader a complete picture at both the micro and macro levels.
The thesis draws not only on secondary sources and existing quantitative studies in its descriptive part, but also on first-hand information from its own qualitative investigation in the empirical part, for which a total of 23 semi-structured expert interviews were conducted. Content analysis following Mayring (2014) distilled several categories, including environmental factors influencing legitimacy and non-substitutable resources for start-ups by academics. In addition, the empirical work generated several hypotheses for future quantitative research and concrete recommendations for practice.
The outstanding mechanical properties of natural inorganic-organic composite materials such as bone or mollusk shells arise from their hierarchical structure, which spans from the nanoscale up to the macroscopic level, and from a controlled bonding along the interfaces of the inorganic and organic components.
Building on these key principles of biological materials design, this work investigated two concepts for the bioinspired structure formation of composites, both based on gluing nano- or mesocrystals with functionalized poly(2-oxazoline) block copolymers, and examined their potential for producing bioinspired, self-assembled, hierarchical inorganic-organic composite structures without external forces. The concepts differed in the inorganic particles used and in the mode of structure formation.
A library of poly(2-oxazoline)s with different functionalities was successfully created via a modular approach combining polymer synthesis and polymer-analogous thiol-ene functionalization. The block copolymers consist of a short particle-affine "glue block" of thiol-ene-functionalized poly(2-(3-butenyl)-2-oxazoline) and a long water-soluble, structure-forming block of thermoresponsive and crystallizable poly(2-isopropyl-2-oxazoline) that forms hierarchical morphologies. Analytical techniques such as turbidimetry, DLS, DSC, SEM, and XRD revealed the thermoresponsive and crystallization behavior of the block copolymers as a function of the introduced glue block. These polymers exhibit a complex temperature- and pH-dependent clouding behavior. With regard to crystallization, the glue block did not change the nanoscopic crystal structure; it did, however, influence the crystallization time, the degree of crystallinity, and the hierarchical morphology. This result was attributed to the different aggregation behavior of the polymers in water.
For composite preparation, concept 1 used micrometer-sized copper oxalate mesocrystals with an internal nanostructure, aiming at structure formation via the inorganic component by gluing and aligning these particles. Concept 1 yielded homogeneous, free-standing, stable composite films with a high inorganic content. However, the particle-polymer combination united unfavorable properties: their length scales were too dissimilar, which prevented self-assembly of the particles. Owing to the low aspect ratio of copper oxalate, mutual alignment by external forces also remained unsuccessful. As a result, the copper oxalate-poly(2-oxazoline) model system is not suitable for producing hierarchical composite structures.
Concept 2, by contrast, used disc-shaped Laponite® nanoparticles and crystallizable block copolymers for structure formation via the organic component through polymer-mediated self-assembly. Complementary analytical methods (zeta potential, DLS, SEM, XRD, DSC, TEM) revealed both a controlled interaction between the components in aqueous environment and a controlled structure formation resulting in self-assembled nanocomposites whose structure extends across several length scales. The negatively charged glue blocks were shown to bind specifically and selectively to the positively charged rims of the Laponite® particles, yielding polymer-Laponite® nanohybrid particles that serve as building blocks for composite formation. At room temperature, the hybrid particles are electrosterically stabilized: sterically by their long, water-interacting poly(2-isopropyl-2-oxazoline) blocks and electrostatically via the negatively charged Laponite® faces. Concept 2, and thus structure formation via the organic component, was implemented successfully. The Laponite®-poly(2-oxazoline) model system opened the way to self-assembled, layered, quasi-hierarchical nanocomposite structures with a high inorganic content. Depending on the freely available polymer concentration during composite formation, two different types of composites emerged. Furthermore, this work proposed an explanatory model for the polymer-mediated formation process of the composite structures.
Overall, this work reveals structure-process-property relationships for forming self-assembled bioinspired composite structures and provides new insights into a suitable combination of components and processing conditions that allow controlled, self-assembled structure formation using functionalized poly(2-oxazoline) block copolymers.
Fluids in the Earth's crust can move by creating and flowing through fractures, in a process called 'hydraulic fracturing'. The tip-line of such fluid-filled fractures grows at locations where the stress is larger than the strength of the rock. Where the tip stress vanishes, the fracture closes and the fluid front retreats. If stress gradients exist on the fracture's walls, induced by fluid/rock density contrasts or topographic stresses, they result in an asymmetric shape and growth of the fracture, allowing the contained batch of fluid to propagate through the crust.
The state-of-the-art analytical and numerical methods to simulate fluid-filled fracture propagation are two-dimensional (2D). In this work I extend these to three dimensions (3D). In my analytical method, I approximate the propagating 3D fracture as a penny-shaped crack that is influenced by both an internal pressure and stress gradients. In addition, I develop a numerical method to model propagation where curved fractures can be simulated as a mesh of triangular dislocations, with the displacement of faces computed using the displacement discontinuity method. I devise a rapid technique to approximate stress intensity and use this to calculate the advance of the tip-line. My 3D models can be applied to arbitrary stresses, topographic and crack shapes, whilst retaining short computation times.
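The penny-shaped crack approximation can be sketched with the standard linear-elastic fracture mechanics result (a minimal illustration, not the thesis code; the values and function names are assumptions): under a uniform internal net pressure p, the mode-I stress intensity at the tip of a penny-shaped crack of radius a is K_I = 2 p √(a/π), and the tip-line advances where K_I exceeds the rock's fracture toughness K_c.

```python
import math

def penny_crack_k1(p, a):
    """Mode-I stress intensity factor for a penny-shaped crack of
    radius a (m) under uniform internal net pressure p (Pa).
    Standard LEFM result: K_I = 2 * p * sqrt(a / pi)."""
    return 2.0 * p * math.sqrt(a / math.pi)

def critical_radius(p, k_c):
    """Radius at which the tip-line starts to advance (K_I = K_c)."""
    return math.pi * (k_c / (2.0 * p)) ** 2

# Illustrative values (assumed, not from the thesis):
p = 1.0e6     # 1 MPa net pressure
k_c = 1.0e6   # 1 MPa*sqrt(m) fracture toughness
a_c = critical_radius(p, k_c)
print(a_c)  # ~0.785 m: larger cracks propagate, smaller ones are stable
```

In the thesis, stress gradients on the crack walls make K_I differ between the upper and lower tip, which is what drives asymmetric, self-sustaining propagation; the uniform-pressure case above is only the zeroth-order ingredient.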
I cross-validate my analytical and numerical methods and apply them to various natural and man-made settings, to gain additional insights into the movements of hydraulic fractures such as magmatic dikes and fluid injections in rock. In particular, I calculate the 'volumetric tipping point', which, once exceeded, allows a fluid-filled fracture to propagate in a 'self-sustaining' manner. I discuss the implications this has for hydro-fracturing in industrial operations. I also present two studies combining physical models that define fluid-filled fracture trajectories with Bayesian statistical techniques. In these studies I show that the stress history of the volcanic edifice defines the location of eruptive vents at volcanoes. Retrieval of the ratio between topographic and remote stresses allows for forecasting of probable future vent locations. Finally, I address the mechanics of 3D propagating dykes and sills in volcanic regions. I focus on Sierra Negra volcano in the Galápagos islands, where in 2018 a large sill propagated with an extremely curved trajectory. Using a 3D analysis, I find that shallow horizontal intrusions are highly sensitive to topographic and buoyancy stress gradients, as well as to the effects of the free surface.
The present work deals with the variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010, Wurmbrand, 2001), whereas it appears much more liberal in older stages of German (Demske, 2008, Maché and Abraham, 2011, Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German which aims at finding out when the correlation between infinitive type and word order emerged and further examines their possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors and that it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.
Research-based learning and the digital transformation are two of the most important influences on the development of higher education didactics in the German-speaking world. While research-based learning, as a normative theory, describes what ought to be done, digital tools, old and new, define what can be done in many areas.
This thesis proposes a process model that attempts to systematize research-based learning with regard to interactive, group-based processes. Based on this model, a software prototype was implemented that can accompany the entire research process, supporting group formation, feedback and reflection processes, and peer assessment with educational technologies. These developments were deployed in a qualitative experiment to gain systematic knowledge about the possibilities and limits of the digital support of research-based learning.
Ground-based astronomy is set to employ next-generation telescopes with apertures larger than 25 m in diameter before this decade is out. Such giant telescopes observe their targets through a larger patch of turbulent atmosphere, demanding that most of the instruments behind them must also grow larger to make full use of the collected stellar flux. This linear scaling in size greatly complicates the design of astronomical instrumentation, inflating their cost quadratically. Adaptive optics (AO) is one approach to circumvent this scaling law, but it can only be done to an extent before the cost of the corrective system itself overwhelms that of the instrument or even that of the telescope. One promising technique for miniaturizing the instruments and thus driving down their cost is to replace some, or all, of the free space bulk optics in the optical train with integrated photonic components.
Photonic devices, however, do their work primarily in single-mode waveguides, and the atmospherically distorted starlight must first be coupled into them efficiently if they are to outperform their bulk optic counterparts. This can be done in two ways: AO systems can again help control the angular size and motion of seeing disks to the point where they couple efficiently into astrophotonic components, but this is only feasible for the brightest objects and over limited fields of view. Alternatively, tapered fiber devices known as photonic lanterns — with their ability to convert multimode into single-mode optical fields — can be used to feed speckle patterns into single-mode integrated optics. They must, nonetheless, conserve the degrees of freedom, and the number of output waveguides quickly grows out of control for uncorrected large telescopes. An AO-assisted photonic lantern fed by a partially corrected wavefront presents a compromise that can have a manageable size if the trade-off between the two methods is chosen carefully. This requires end-to-end simulations that take into account all the subsystems upstream of the astrophotonic instrument, i.e., the atmospheric layers, the telescope, the AO system, and the photonic lantern, before a decision can be made on sizing the multiplexed integrated instrument.
The numerical models that simulate atmospheric turbulence and AO correction are presented in this work. The physics and models for optical fibers, arrays of waveguides, and photonic lanterns are also provided. The models are on their own useful in understanding the behavior of the individual subsystems involved and are also used together to compute the optimum sizing of photonic lanterns for feeding astrophotonic instruments. Additionally, since photonic lanterns are a relatively new concept, two novel applications are discussed for them later in this thesis: the use of mode-selective photonic lanterns (MSPLs) to reduce the multiplicity of multiplexed integrated instruments and the combination of photonic lanterns with discrete beam combiners (DBCs) to retrieve the modal content in an optical waveguide.
Filaments are omnipresent features in the solar chromosphere, one of the atmospheric layers of the Sun, located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere into the chromosphere, and even into the outermost atmospheric layer, the corona, and they are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CMEs), releasing plasma into space that can also hit the Earth. A special type of filament is the polar crown filament, which forms at the interface between the unipolar field of the poles and flux of opposite magnetic polarity that was transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly through polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which is approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial-resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca IIK, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope’s alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca IIK data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
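The Zernike-based background fit described above can be sketched as follows (a simplified illustration, not the ChroTel pipeline; the grid, basis truncation, and synthetic image are assumptions): low-order Zernike polynomials are evaluated on the disk and fitted to the image by linear least squares, and subtracting the fit flattens the large-scale transmission pattern.

```python
import numpy as np

def zernike_basis(x, y):
    """First four Zernike polynomials on the unit disk:
    piston, tip (2*rho*cos), tilt (2*rho*sin), defocus sqrt(3)*(2*rho^2 - 1)."""
    rho2 = x**2 + y**2
    return np.stack([np.ones_like(x), 2 * x, 2 * y,
                     np.sqrt(3) * (2 * rho2 - 1)], axis=-1)

def fit_background(img, x, y, mask):
    """Least-squares fit of the masked (on-disk) pixels to the basis,
    then evaluate the fitted background on the full grid."""
    A = zernike_basis(x[mask], y[mask])
    coeffs, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    return zernike_basis(x, y) @ coeffs, coeffs

# Synthetic 'filtergram': flat disk plus a tilt-like transmission gradient
n = 128
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
disk = xx**2 + yy**2 <= 1.0
img = 1.0 + 0.2 * xx                 # large-scale background variation
bg, c = fit_background(img, xx, yy, disk)
residual = np.abs((img - bg)[disk]).max()
```

Because the synthetic gradient lies exactly in the span of the basis, the residual is essentially zero; on real filtergrams the fit would capture only the smooth transmission component while leaving filaments and plage in the contrast-enhanced image.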
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and their properties can lead to a better understanding of the solar cycle. We use full-disk data of ChroTel at Observatorio del Teide, Tenerife, Spain, which were taken in three different chromospheric absorption lines (Hα 6563 Å, Ca IIK 3933 Å, and He I 10830 Å), and we create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phase of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e. number and location of filaments, area, and tilt angle for both the maximum and declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations. ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image processing tools produces a large number of false positive detections, whose manual removal takes too much time. Reliable automatic object detection and extraction allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept of a machine learning application, which allows a unified extraction of, for example, filaments from ChroTel data.
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity. For the Sun, other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and compare it to other established indicators. We discuss the newly created imaging Hα excess with a view to possible applications in the modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions remain between solar maximum and minimum. Furthermore, we investigate whether the active region coverage fraction or the changing emission strength in the active regions dominates the time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image processing techniques to extract the positive and negative imaging Hα excess, for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well-established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of the solar activity over the course of the cycle. The positive Hα excess is closely correlated with the chromospheric Mg II index. The negative Hα excess, created from dark features like filaments and sunspots, on the other hand, is introduced as a tracer of solar activity for the first time.
We investigated the mean intensity distribution of active regions for solar minimum and maximum and found that the shapes of both distributions are very similar but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of the Hα excess and the Hα excess of bright features are strongly correlated, which will influence the modelling of stellar and exoplanet atmospheres.
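The positive and negative Hα excess described above can be sketched in a simplified form (a generic k-sigma threshold on synthetic data, not the actual ChroTel reduction; the function name and thresholds are assumptions): pixels significantly brighter than the quiet-Sun level contribute to the positive excess (plage), pixels significantly darker to the negative excess (filaments and sunspots).

```python
import numpy as np

def halpha_excess(img, mask, k=3.0):
    """Split a normalized filtergram into positive and negative excess
    relative to the quiet-Sun mean, using a k-sigma threshold."""
    quiet = img[mask]
    mu, sigma = quiet.mean(), quiet.std()
    bright = mask & (img > mu + k * sigma)
    dark = mask & (img < mu - k * sigma)
    pos = (img[bright] - mu).sum()                   # positive excess (plage)
    neg = (mu - img[dark]).sum()                     # negative excess (filaments/spots)
    frac = (bright.sum() + dark.sum()) / mask.sum()  # coverage fraction
    return pos, neg, frac

rng = np.random.default_rng(1)
img = rng.normal(1.0, 0.01, (256, 256))  # synthetic quiet-Sun disk
img[50:60, 50:80] += 0.2                 # synthetic plage region
img[150:155, 100:180] -= 0.2             # synthetic filament
mask = np.ones_like(img, bool)
pos, neg, frac = halpha_excess(img, mask)
```

Tracking `pos`, `neg`, and `frac` over a time series of filtergrams is the kind of quantity the study correlates with the sunspot number, the F10.7 cm flux, and the Mg II index.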
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca IIK, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments' footpoints. Bright points are a well-studied phenomenon in the photosphere at low latitudes, but they had not yet been studied in the quiet network close to the poles. We examine size, area, and eccentricity of bright points and find that their morphology is very similar to that of their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their number increases while the filament decays, indicating that they affect the equilibrium of the cool plasma contained in filaments.
This paper-based dissertation aims to contribute to the open innovation (OI) and technology management (TM) research fields by investigating their mechanisms and potential at the operational level. It connects the well-established concept of technology management with OI formats and applies them to specific manufacturing technologies within a clearly defined setting.
Technological breakthroughs force firms to continuously adapt and reinvent themselves. The pace of technological innovation and its impact on firms is constantly increasing due to more connected infrastructure and accessible resources (i.e., data and knowledge). In the manufacturing sector especially, leveraging new technologies is a key element of staying competitive. These technological shifts call for new management practices.
TM supports firms with various tools to manage these shifts at different levels of the firm. It is a multifunctional and multidisciplinary field, as it deals with all aspects of integrating technological issues into business decision-making and is directly relevant to a number of core business processes. It therefore makes sense to use this theory and its practices as a foundation of this dissertation. However, considering the increasing complexity and number of technologies, it is no longer sufficient for firms to rely only on established internal R&D and managerial practices. OI can expand these practices by involving distributed innovation processes and accessing further external knowledge sources. This expansion can increase innovation performance and thereby accelerate the time-to-market of technologies.
Research in this dissertation was based on the expectation that OI formats support the R&D activities for manufacturing technologies at the operational level by providing access to resources, knowledge, and leading-edge technology. The dissertation is distinctive in its rich practical data sets (observations, internal documents, project reviews) drawn from a very large German high-tech firm, where the researcher was embedded in an R&D unit within the operational TM department for manufacturing technologies. The analyses include 1.) an exploratory in-depth analysis of a crowdsourcing (CS) initiative to elaborate its impact on specific manufacturing technologies, 2.) a deductive approach for developing a technology evaluation score model to create a common understanding of the value of selected manufacturing technologies at the operational level, and 3.) an abductive reasoning approach in the form of a longitudinal case study to derive important indicators for the in-process activities of a science-based university-industry collaboration format. Thereby, the dissertation contributes to research and practice 1.) linkages of TM and OI practices to assimilate technologies at the operational level, 2.) insights into the impact of CS on manufacturing technologies and a related guideline for executing CS initiatives in this specific environment, 3.) the introduction of manufacturing readiness levels and further criteria into the TM and OI research fields to support decision-makers in gaining a common understanding of the maturity of manufacturing technologies, and 4.) context-specific indicators for science-based university-industry collaboration projects and a holistic framework connecting TM with the university-industry collaboration approach.
The findings of this dissertation illustrate that OI formats can accelerate the time-to-market of manufacturing technologies and further improve the technical requirements of the product by leveraging external capabilities. The conclusions and implications are intended to foster further research and improve managerial practices, so as to evolve TM into an open collaborative context with interconnections between all internally and externally involved technologies, individuals, and organizational levels.
The Charedim, ultra-Orthodox Jews living in an isolationist-fundamentalist manner, are the fastest-growing population group in Israel. By the middle of the 21st century, forecasts predict, their share will have grown to almost one third of the Jews in Israel.
In his study, one of the first of its kind in the German-speaking world, Eik Dödtmann describes the current developments in, and interactions between, the secular Jewish majority and the strictly religious Charedi society. He examines the political influence of the Charedim on Israel's domestic and foreign policy, the legal constellation of a semi-theocracy and its effect on the freedom of the individual, as well as the problem areas of integrating Charedi men into the labor market, of military service, and of public life on Shabbat, the holy day of the Jewish week.
We developed an orbitally tuned age model for the composite Chew Bahir sediment core, obtained from the Chew Bahir basin (CHB), southern Ethiopia. To account for the effects of sedimentation rate changes on the spectral expression of the orbital cycles, we developed a new method: the Multi-band Wavelet Age modeling technique (MUBAWA). Using a continuous wavelet transformation, we were able to track frequency shifts that resulted from changing sedimentation rates and thus calculated a tuned age model encompassing the last 620 kyr. The results agree well with the directly dated age model available from the dating of volcanic ashes. We then used the XRF data from CHB to develop a new and robust humid-arid index of East African climate during the last 620 kyr. To disentangle the relationships of the selected elements, we performed a principal component analysis (PCA). In a subsequent step we applied a continuous wavelet transformation to PC1, using the directly dated age model. The resulting wavelet power spectrum, unlike a normal power spectrum, displays the occurrence of cycles and frequencies in time. The results highlight that the precession cycles are most strongly expressed during the 400 kyr eccentricity maximum and only weakly during the eccentricity minimum. This suggests that insolation is a key driver of the climatic variability observed at CHB throughout the last 620 kyr. In addition, the prevalence of half-precession and obliquity signals was documented; the latter is attributed to the inter-tropical insolation gradient and not interpreted as an imprint of high-latitude forcing on climatic changes in the tropics. Finally, a windowed analysis of variability was used to detect changes in variance over time and showed that strong climate variability occurred especially along the transition from a dominantly insolation-controlled humid climate background state towards a predominantly dry and less insolation-controlled climate.
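The PCA step on the XRF element counts can be illustrated generically (a sketch on synthetic data, not the Chew Bahir workflow; the element choices and amplitudes are assumptions): the element records are standardized and decomposed by SVD, and the first principal component then carries the dominant shared humid-arid signal.

```python
import numpy as np

def pca_scores(X):
    """Standardize the columns (elements), then PCA via SVD.
    Returns the sample scores and the variance fraction per PC."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U * S                       # projections onto the PCs
    var_frac = S**2 / (S**2).sum()
    return scores, var_frac

# Synthetic XRF-like record: three elements driven by one common signal
rng = np.random.default_rng(0)
t = np.linspace(0, 620, 1000)            # pseudo-age axis in kyr
signal = np.sin(2 * np.pi * t / 21)      # precession-like cycle
X = np.column_stack([a * signal + rng.normal(0, 0.3, t.size)
                     for a in (1.0, 0.8, -0.6)])
scores, var_frac = pca_scores(X)
pc1 = scores[:, 0]                       # candidate humid-arid index
```

In the thesis the analogous PC1 time series, placed on the directly dated age model, is the input to the continuous wavelet transformation that resolves the orbital cycles in time.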
The last chapter deals with non-linear aspects of the climate changes represented by the sediments of the CHB. We use recurrence quantification analysis to detect non-linear changes in the potassium concentration of the Chew Bahir sediment cores during the last 620 kyr. The concentration of potassium in the lake sediments is governed by geochemical processes related to the evaporation rate of the lake water at the time of deposition. Based on the recurrence analysis, two types of variability could be distinguished. Type 1 represents slow variations within the 20 kyr precession bandwidth and a tendency towards extreme climatic events, whereas type 2 represents fast, highly variable climatic transitions between wet and dry climate states. While type 1 variability is linked to eccentricity maxima, type 2 variability occurs during the 400 kyr eccentricity minimum. The climate history presented here shows that a strongly insolation-driven climate system prevailed during high eccentricity, whereas during low eccentricity the climate was more strongly affected by short-term variability changes. These short-term environmental changes, reflected in the increased variability, might have influenced the evolution, technological advances, and expansion of early modern humans who lived in this region. In the Olorgesailie Basin, the temporal changes in the occurrence of stone tools, which bracket the transition from Acheulean to Middle Stone Age (MSA) technologies between 499 and 320 kyr, could potentially correlate with the marked transition in the CHB from a rather stable climate with low variability to a climate with increased variability. We conclude that populations of early anatomically modern humans are more likely to have experienced climatic stress during episodes of low eccentricity, associated with dry and highly variable climate conditions, which may have led to technological innovation, such as the transition from the Acheulean to the Middle Stone Age.
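The recurrence analysis underlying this chapter can be sketched in its textbook form (a generic recurrence plot on a synthetic series, not the thesis code; the 1-D embedding and threshold are assumptions): pairwise distances between states are thresholded into a binary matrix, and measures such as the recurrence rate quantify how often states recur.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 where |x_i - x_j| <= eps
    (1-D embedding for simplicity)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs, excluding the trivial diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * (n - 1))

# Synthetic 'potassium' record: a regular precession-like cycle
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50)
R = recurrence_matrix(x, eps=0.1)
rr = recurrence_rate(R)
```

A slowly varying, cyclic record (type 1 in the thesis) produces long diagonal structures and a high recurrence rate, while fast wet-dry transitions (type 2) fragment the recurrence plot; this contrast is what the quantification measures pick up.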
Ionizing radiation is used in cancer radiation therapy to damage the DNA of tumors effectively, leading to cell death and reduction of the tumor tissue. The main damage is due to the generation of highly reactive secondary species such as low-energy electrons (LEEs), with the most probable energy around 10 eV, through ionization of water molecules in the cells. A simulation of the dose distribution in the patient is required to optimize the irradiation modality in cancer radiation therapy, which must be based on the fundamental physical processes of high-energy radiation in tissue. In the present work, DNA radiation damage is quantified accurately in the form of absolute cross sections for LEE-induced DNA strand breaks (SBs) between 5 and 20 eV using the DNA origami technique. This method is based on the analysis of well-defined DNA target sequences attached to DNA origami triangles with atomic force microscopy (AFM) at the single-molecule level. The present work focuses on poly-adenine sequences (5'-d(A4), 5'-d(A8), 5'-d(A12), 5'-d(A16), and 5'-d(A20)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. Additionally, DNA double strand breaks from a DNA hairpin 5'-d(CAC)4T(Bt-dT)T2(GTG)4 are examined for the first time and compared with those of the DNA single strands 5'-d(CAC)4 and 5'-d(GTG)4. Irradiation is performed in the most probable energy range of 5 to 20 eV, with an anionic resonance maximum around 10 eV independent of the DNA sequence. There is a clear difference between σSSB and σDSB of DNA single and double strands, with the strand break cross sections for ssDNA consistently higher than those for dsDNA by a factor of 3 at all electron energies.
A further part of this work deals with the characterization and analysis of new types of radiosensitizers used in chemoradiotherapy, which selectively increase the DNA damage upon irradiation. Fluorinated DNA sequences with 2'-fluoro-2'-deoxycytidine (dFC) show an increased sensitivity at 7 and 10 eV compared to the unmodified DNA sequences, with an enhancement factor between 2.1 and 2.5. In addition, light-induced oxidative damage of DNA origami triangles modified with 5'-d(GTG)4 and 5'-d((CAC)4T(Bt-dT)T2(GTG)4) is demonstrated, caused by singlet oxygen (1O2) generated from three photoexcited DNA groove binders, [ANT994], [ANT1083] and [Cr(ddpd)2][BF4]3, illuminated in different experiments with UV-Vis light at wavelengths of 430, 435 and 530 nm. The singlet-oxygen-induced DNA damage could be detected in both aqueous and dry environments for [ANT1083] and [Cr(ddpd)2][BF4]3.
In light of climate change, rising demand for agricultural products and the intensification and specialization of agricultural systems, ensuring an adequate and reliable supply of food is fundamental for food security. Maintaining diversity and redundancy has been postulated as one generic principle to increase the resilience of agricultural production and other ecosystem services. For example, if one crop fails due to climate instability and extreme events, others can compensate for the losses. Crop diversity might be particularly important if different crops show asynchronous production trends. Furthermore, spatial heterogeneity has been suggested to increase stability at larger scales, as production losses in some areas can be buffered by surpluses in undisturbed ones. Besides systematically investigating the mechanisms underlying stability, it is important to identify transformative pathways that foster them.
In my thesis, I aim at answering the following questions: (i) How does yield stability differ between nations, regions and farms, and what is the effect of crop diversity on yield stability in relation to agricultural inputs, climate heterogeneity, climate instability and time at the national, regional or farm level? (ii) Is asynchrony between crops a better predictor of production stability than crop diversity? (iii) What is the effect of asynchrony between and within crops on stability and how is it related to crop diversity and space, respectively? (iv) What is the state of the art and what are knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with agent-based models and what are potential ways forward?
In the first chapter, I provide the theoretical background for the subsequent analyses. I stress the need to better understand the resilience of social-ecological systems and particularly the stability of agricultural production. Moreover, I introduce diversity and spatial heterogeneity as two prominently discussed resilience mechanisms and describe approaches to assess resilience.
In the second chapter, I combined agricultural and climate data at three levels of organization and spatial extents to investigate yield stability patterns and their relation to crop diversity, fertilizer use, irrigation, climate heterogeneity, climate instability and time, for nations globally, regions in Europe and farms in Germany, using statistical analyses. Yield stability decreased from the national to the farm level. Several nations and regions substantially contributed to larger-scale stability. Crop diversity was positively associated with yield stability across all three levels of organization. This effect was typically more pronounced at smaller scales and in variable climates. In addition to crop diversity, climate heterogeneity was an important stabilizing mechanism, especially at larger scales. These results confirm the stabilizing effect of crop diversity and spatial heterogeneity, yet their importance depends on the scale and on agricultural management.
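Yield stability in analyses of this kind is often quantified as the inverse coefficient of variation of a (detrended) yield time series. The sketch below shows one plausible formalization with invented yield series; the linear detrending choice is an assumption for illustration, not necessarily the chapter's exact method:

```python
import numpy as np

def yield_stability(yields, detrend=True):
    """Temporal yield stability as the inverse coefficient of variation
    (mean / standard deviation).  Optionally removes a linear trend
    first, so that steady technological progress does not inflate
    apparent instability."""
    y = np.asarray(yields, dtype=float)
    if detrend:
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)
        y = y - (slope * t + intercept) + y.mean()   # keep the original mean
    return y.mean() / y.std(ddof=1)

# Two hypothetical 10-year yield series (t/ha): the steadier one scores higher
stable   = [7.0, 7.1, 6.9, 7.2, 7.0, 6.8, 7.1, 7.0, 6.9, 7.1]
variable = [7.0, 5.5, 8.2, 6.1, 7.9, 5.8, 8.0, 6.3, 7.7, 6.0]
print(yield_stability(stable) > yield_stability(variable))
```

With this definition, "stability decreased from the national to the farm level" corresponds to the inverse CV shrinking as aggregation decreases, because averaging over many farms cancels out part of the year-to-year fluctuations.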
Building on the findings of the second chapter, in the third chapter I deepened my research on the effect of crop diversity at the national level. In particular, I tested whether asynchrony between crops, i.e. between the temporal production patterns of different crops, predicts agricultural production stability better than crop diversity. The stabilizing effect of asynchrony was several times higher than that of crop diversity; asynchrony is thus one important property that can explain why higher diversity supports the stability of national food production. Therefore, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered.
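Asynchrony between crops can be formalized, for example, as one minus a variance-ratio synchrony index: the variance of total production divided by the squared sum of the individual crops' standard deviations. The sketch below uses this common index with invented production series; it is not necessarily the exact metric used in the chapter:

```python
import numpy as np

def asynchrony(production):
    """1 - synchrony, with synchrony defined as
    var(total production) / (sum of the crops' standard deviations)^2.
    `production` is a crops x years array.  Returns 0 for perfectly
    synchronous crops and approaches 1 when fluctuations cancel out."""
    p = np.asarray(production, dtype=float)
    total_var = p.sum(axis=0).var(ddof=1)
    sd_sum = p.std(axis=1, ddof=1).sum()
    return 1.0 - total_var / sd_sum**2

years = np.arange(12)
wheat  = 5.0 + np.sin(years)   # fluctuates ...
barley = 4.0 - np.sin(years)   # ... in opposite phase to wheat
maize  = 6.0 + np.sin(years)   # ... in phase with wheat

print(asynchrony(np.array([wheat, barley])))   # near 1: fluctuations cancel
print(asynchrony(np.array([wheat, maize])))    # near 0: fluctuations add up
```

The toy example makes the chapter's point concrete: simply counting crops (two in both cases) says nothing about stability, whereas the phase relationship between their production series does.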
The previous chapters suggest that both asynchrony between crops and spatial heterogeneity are important stabilizing mechanisms. In the fourth chapter, I therefore aimed at better understanding the relative importance of asynchrony between and within crops, i.e. between the temporal production patterns of different crops and between the temporal production patterns of different cultivation areas of the same crop. Understanding their relative importance is important to inform agricultural management decisions, but so far this has rarely been assessed. To address this, I used crop production data to study the effect of asynchrony between and within crops on the stability of agricultural production in regions in Germany and nations in Europe. Both asynchrony between and within crops consistently stabilized agricultural production. Adding crops increased asynchrony between crops, yet this effect levelled off after eight crops in regions in Germany and after four crops in nations in Europe. Combining as few as ten farms within a region already led to high asynchrony within crops, indicating distinct production patterns, while this effect was weaker when combining multiple regions within a nation. The results suggest that both mechanisms need to be considered in agricultural management strategies that strive for more resilient farming systems.
The analyses in the foregoing chapters focused on different levels of organization, scales and factors potentially influencing agricultural stability. However, these statistical analyses are restricted by data availability and investigate correlative relationships; thus, they cannot provide a mechanistic understanding of the actual processes underlying resilience. In this regard, agent-based models (ABMs) are a promising tool. Besides their ability to measure different properties and to integrate multiple situations through extensive manipulation in a fully controlled system, they can capture the emergence of system resilience from individual interactions and feedbacks across different levels of organization. In the fifth chapter, I therefore reviewed the state of the art and potential knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with ABMs. Next, I derived recommendations for a more effective use of ABMs in resilience research. The review suggests that the potential of ABMs is not utilized in most models, as they typically focus on a single dimension of resilience and are mostly limited to one reference state, disturbance type and scale. Moreover, only a few studies explicitly test the ability of different mechanisms to support resilience. To solve real-world problems related to the resilience of complex systems, ABMs need to assess multiple stability properties for different situations and under consideration of the mechanisms that are hypothesized to render a system resilient.
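The kind of ABM experiment the review calls for, disturbing a system of interacting agents and then tracking a stability property such as recovery time, can be illustrated with a deliberately minimal toy model. All agents, dynamics and parameters below are invented for illustration and are not taken from any model reviewed in the thesis:

```python
import random

class Farm:
    """Toy agent: produces output that regrows toward its capacity."""
    def __init__(self, capacity=1.0, regrowth=0.2):
        self.capacity = capacity
        self.regrowth = regrowth
        self.output = capacity

    def step(self):
        # logistic-style recovery toward capacity
        self.output += self.regrowth * self.output * (1 - self.output / self.capacity)

def recovery_time(farms, shock=0.5, threshold=0.95, max_steps=200):
    """Disturb all agents, then count steps until total production
    returns to `threshold` of its pre-shock level (a classic
    engineering-resilience measure)."""
    baseline = sum(f.output for f in farms)
    for f in farms:
        f.output *= (1 - shock)            # disturbance: lose 50 % of output
    for step in range(1, max_steps + 1):
        for f in farms:
            f.step()
        if sum(f.output for f in farms) >= threshold * baseline:
            return step
    return None                            # did not recover within max_steps

random.seed(1)
farms = [Farm(regrowth=random.uniform(0.1, 0.3)) for _ in range(100)]
print(recovery_time(farms))
```

Even this minimal sketch exposes the review's point: it measures only one stability property (recovery time) against one disturbance type and one reference state, whereas a full resilience assessment would vary all three and compare alternative stabilizing mechanisms.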
In the sixth chapter, I discuss the major conclusions that can be drawn from the previous chapters. Moreover, I showcase the use of simulation models to identify management strategies to enhance asynchrony and thus stability, and the potential of ABMs to identify pathways to implement such strategies.
The results of my thesis confirm the stabilizing effect of crop diversity, yet its importance depends on the scale, agricultural management and climate. Moreover, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered. As spatial heterogeneity and particularly asynchrony within crops strongly enhance stability, integrated management approaches are needed that simultaneously address multiple resilience mechanisms at different levels of organization, scales and time horizons. For example, the simulation suggests that only increasing the number of crops at both the pixel and landscape level avoids trade-offs between asynchrony between and within crops. If their potential is better exploited, agent-based models have the capacity to systematically assess resilience and to identify comprehensive pathways towards resilient farming systems.
The legal consequences of a failed share transfer constitute a practically relevant problem of German limited liability company (GmbH) law, for which solutions are usually sought in the law of obligations. The application of obligation-law doctrines in the context of § 16(1) GmbHG is preceded by the question of whether the divergence of the right of control and the actual possibility of control, which characterizes the legitimation effect, can lead to the application of the owner-possessor relationship (Eigentümer-Besitzer-Verhältnis). This work addresses the transferability of the three components of the vindication situation (object, ownership, possession) and their interdependencies to the situation created by the legitimation effect. Through a systematic comparison of the legal relationships, the author demonstrates their comparability and proposes an analogous application of the owner-possessor relationship. The focus is on the status of membership as a legal object and the extension of the concept of possession to incorporeal legal products.
Many children struggle with reading for comprehension. Reading is a complex cognitive task depending on various sub-tasks, such as word decoding and building connections across sentences. The task of connecting sentences is guided by referential expressions. References, such as anaphoric noun phrases (Minky/the cat) or pronouns (Minky/she), signal to the reader how the protagonists of adjacent sentences are connected. Readers construct a coherent mental model of the text by resolving these references. Personal pronouns (he/she) in particular need to be resolved towards an appropriate antecedent before they can be fully understood. Pronoun resolution therefore is vital for successful text comprehension. The present thesis investigated children’s resolution of personal pronouns during natural reading as a possible source of reading comprehension difficulty. Three eye tracking studies investigated whether children aged 8-9 (Grade 3-4) resolve pronouns online during reading and how the varying information around the pronoun region influences children’s eye movement behavior.
The first study investigated whether children prefer a pronoun over a noun phrase when the antecedent is highly accessible. Children read three-sentence stories that introduced a protagonist (Mia) in the first sentence and a reference to this protagonist in one of the following sentences, using either a repeated name (Mia) or a pronoun (she). For proficient readers, it has repeatedly been shown that a pronoun is preferred over the name in these contexts, i.e., when the antecedent is salient. The first study tested the repeated name penalty effect in children using eye tracking. It was hypothesized that, in contrast to proficient readers, the fluency of children's reading processing benefits from an overlapping word form (i.e., the repeated noun phrase) compared to a pronoun. This is because overlapping word forms allow for direct mapping, whereas pronouns have to be resolved towards their antecedent first.
The second study investigated children’s online processing of pronominal gender in a mismatch paradigm. Children read sentences in which the pronoun either was a gender-match to the antecedent or a gender-mismatch. Reading skill and reading fluency were also tested and related to children’s ability to detect a mismatching pronoun during reading.
The third study investigated the online processing of gender information on the pronoun and whether disambiguating gender information improves the accuracy of pronoun comprehension. Offline comprehension accuracy, that is the comprehension of the pronoun, was related to children’s online eye movement behavior. This study was conducted in a semi-longitudinal paradigm: 70 children were tested in Grade 3 (age 8) and again in Grade 4 (age 9) to investigate effects of age and reading skill on pronoun processing and comprehension.
The results of this thesis clearly show that children aged 8-9, in the second half of primary school, struggle with the comprehension of pronouns in reading tasks. The responses to pronoun comprehension questions revealed that children have difficulties comprehending a pronoun in the absence of a disambiguating gender cue, that is, when they have to apply context information. When there is a gender cue to disambiguate the pronoun, children's accuracy improves significantly. This is true for children in Grade 3, but also in Grade 4, although their overall resolution accuracy improves slightly with age.
The results from the analyses of eye movements suggest that the discourse accessibility of an antecedent does play a role in children's processing of pronouns and repeated names. The repetition of a name did not facilitate children's reading processing as anticipated. Similar to adults, children showed a penalty effect for the repeated name where a pronoun was expected. However, this does not mean that children's processing of pronouns is always adult-like. The results from eye movement analyses in the pronoun region during sentence reading revealed significant individual differences related to children's reading skill and reading fluency.
The results from the mismatch study revealed that reading fluency is associated with children's detection of incongruent pronouns. All children had longer gaze durations at mismatching than at matching pronouns, but only the fluent readers among the children followed this up with a regression out of the pronoun region. This was interpreted as an attempt to gain processing time and "repair" the inconsistency. Reading fluency was therefore associated with detection of the mismatch, while less fluent readers appeared not to detect the mismatch between pronoun and antecedent. The eye movement pattern of the "detectors" is more adult-like and was interpreted as reflecting successful monitoring and attempted pronoun resolution.
Children differ considerably in their reading comprehension skill. The results of this thesis show that only the skilled readers among the children use gender information online for pronoun resolution. In contrast to the less skilled readers, they took more time to read the pronoun when there was disambiguating gender information that was useful for resolving the pronoun. Reading skill and reading fluency were more important factors in pronoun resolution processes and comprehension than age. Taken together, this suggests that good readers direct cognitive resources towards pronoun resolution when the pronoun can be resolved, which is a successful comprehension strategy.
The contribution of the present thesis is a depiction of the specific eye movement patterns that are related to successful and unsuccessful attempts at pronoun resolution in children. Eye movement behavior in the pronoun area is related to children’s reading skill and fluency. The results of this thesis suggest that many children do not resolve pronouns spontaneously during sentence reading, which is likely detrimental to their reading comprehension in more complex reading materials. The present thesis informs our understanding of the challenge that pronoun resolution poses for beginning readers, and gives new impulses for the study of higher-order reading processes in children’s natural reading.
Humans are frequently exposed to a variety of endocrine disrupting chemicals (EDCs), which can cause harmful effects, e.g. disturbance of growth, development and reproduction, and cancer (UBA, 2016). EDCs are often components of synthetically manufactured products. Materials made of plastics, building materials, electronic items, textiles or cosmetic products can be particularly contaminated (Ain et al., 2021). One group of EDCs that has gained increased interest in recent years is the phthalates. They are used as plasticizers in plastic materials to which people are exposed daily. Phthalate plasticizers exert their harmful effects, among others, via activation of the estrogen receptor α (ERα) and the estrogen receptor β (ERβ) and via inhibition of the androgen receptor (AR). Some phthalates have already been classified by the EU as carcinogenic, mutagenic, or reprotoxic (CMR) substances, and their industrial use has been restricted. After oral ingestion, phthalates are metabolized and finally excreted in the urine. Numerous toxicological studies exist on phthalates, but mainly with the parent substances, not with their primary and secondary metabolites. In the course of the restriction of phthalates by the EU, the phthalate-free plasticizer di-isononylcyclohexane-1,2-dicarboxylate (DINCH®) was introduced to the market. So far, almost no toxicologically relevant properties have been identified for DINCH®. However, the effects of DINCH® have only been studied in animal experiments and, as with phthalates, almost exclusively with the parent substance. Yet the toxic effects of a particular compound may be induced by its metabolites and not by the parent compound itself. Therefore, potential endocrine effects of 15 phthalates, 19 phthalate metabolites, DINCH®, and five of its metabolites were investigated using reporter gene assays on ERα, ERβ, and the AR.
In addition, studies of the influence of some selected plasticizers on peroxisome proliferator-activated receptor α (PPARα) and peroxisome proliferator-activated receptor γ (PPARγ) activity were performed. Furthermore, a H295R steroidogenesis assay was performed to determine the influence of DINCH® and its metabolites on estradiol or testosterone synthesis. Analysis of the experiments shows that the phthalates either stimulated or inhibited ERα and ERβ activity and inhibited AR activity, whereas the phthalate metabolites did not affect the activity of these human hormone receptors. In contrast, metabolites of di-(2-ethylhexyl) phthalate (DEHP) stimulated transactivation of the human PPARα and PPARγ in analogous reporter gene assays, although DEHP itself did not activate these nuclear receptors. Therefore, primary and secondary phthalate metabolites appear to exert different effects at the molecular level compared to the parent compounds. Similarly, the results showed that the phthalate-free plasticizer DINCH® itself did not affect the activity of ERα, ERβ, AR, PPARα and PPARγ, while the DINCH® metabolites were shown to activate all these receptors. In the case of AR, DINCH® metabolites mainly enhanced AR activity stimulated by dihydrotestosterone (DHT). In the H295R steroidogenesis assay, neither DINCH® nor any of its metabolites affected estradiol or testosterone synthesis. Primary and secondary metabolites of DINCH® thus exert different effects at the molecular level than DINCH® itself. However, all these in vitro effects of DINCH® metabolites were observed only at high concentrations, which were about three orders of magnitude higher than the reported DINCH® metabolite concentrations in human urine. Therefore, the in vitro data does not support the assumption that DINCH® or any of the metabolites studied could have significant endocrine effects in vivo at relevant exposure levels in humans. 
Following the demonstration of direct and indirect endocrine effects of the studied plasticizers, a new effect-based in vitro 3D screening tool for toxicity assays of non-genotoxic carcinogens was developed using estrogen receptor-negative (ER-) MCF-10A cells and estrogen receptor-positive (ER+) MCF-12A cells. The background is that breast cancer is the most common cancer occurring in women, and estrogenic substances, such as phthalates, can probably influence the disease. The human mammary epithelial cell lines MCF-10A and MCF-12A form well-differentiated acini-like structures when cultured in three-dimensional Matrigel culture for a period of 20 days. The model should make it possible to detect substance effects on cell differentiation and growth of mammary cell acini and, at the same time, to differentiate between estrogenic and non-estrogenic effects. In the present study, both cell lines were tested for their suitability as an effect-based in vitro assay system for non-genotoxic carcinogens. An Automated Acinus Detection And Morphological Evaluation (ADAME) software solution was developed for the automatic acquisition of acinus images and the determination of morphological parameters such as acinus size, lumen size, and acinus roundness. Several test substances were tested for their ability to affect acinus formation and cellular differentiation. Human epidermal growth factor (EGF) stimulated acinus growth in both cell lines, while all-trans retinoic acid (RA) inhibited acinar growth. The potent estrogen 17β-estradiol had no effect on acinus formation of MCF-10A cells but resulted in larger MCF-12A acini. Thus, the parallel use of both cell lines together with the developed high-content screening and evaluation tool allows the rapid identification of the estrogenic and carcinogenic properties of a given test compound. The morphogenesis of the acini was only slightly affected by the test substances.
On the one hand, this suggests a robust test system; on the other hand, it probably cannot detect weakly potent estrogenic compounds such as phthalates or DINCH®. The advantage of the robustness of the system, however, may be that vast numbers of "positive" results with questionable biological relevance can be avoided, such as those observed in sensitive reporter gene assays.
A teacher's professional knowledge is regarded as a prerequisite for successful teaching. Despite large differences between models of professional knowledge, research largely agrees, from a theoretical point of view, that content knowledge and pedagogical content knowledge are important components of professional knowledge and thus significant for teaching success. There is therefore a justified demand that teachers need, among other things, well-developed content knowledge that they can apply in the most varied situations of their professional life, such as explaining concepts and planning lessons. For this reason, research has been investigating the importance of teachers' content knowledge for more than 30 years, with increasingly differentiated views of content knowledge. In many research approaches in physics education, a three-part division of content knowledge into school knowledge, deeper school knowledge and university knowledge has become established. While school knowledge is understood as the knowledge that is taught and learned at school, the facet of university knowledge describes the strongly academic form of knowledge that future physics teachers are to acquire in the subject courses at university. Deeper school knowledge, by contrast, is a special form of content knowledge that research assumes to be particularly important for teachers. Taken together, prospective physics teachers are expected to acquire all of the facets of content knowledge mentioned above, i.e. school knowledge, deeper school knowledge and university knowledge, during their physics teacher-training studies. In addition to content knowledge, a teacher also needs pedagogical content knowledge as an important component of professional knowledge, which is likewise to be acquired during university studies. At the same time, research assumes that content knowledge is a basic prerequisite for the development of pedagogical content knowledge.
However, it remains empirically almost unresolved how the content knowledge described above and pedagogical content knowledge develop over the course of physics teacher-training studies, or how these forms of knowledge influence one another. Moreover, it is unclear which challenges arise from the performance heterogeneity of first-year students. Previous findings from research on academic success suggest that prior knowledge in particular is predictive of study success. This thesis therefore first investigates how the content knowledge (school knowledge, deeper school knowledge, university knowledge) of prospective teachers develops over the course of the bachelor's and master's programs. In a next step, it was examined how students with low, medium or high content knowledge at the beginning of their studies develop over the bachelor's program. In addition, the development of pedagogical content knowledge was examined and its relations to content knowledge were considered. The study was conducted longitudinally over three years at 11 universities with 145 bachelor's students and 73 master's students. The bachelor's students took part in an annual assessment of content knowledge and pedagogical content knowledge. The master's students took part in the assessments before and after a one-semester school internship. A written test instrument was used for each assessment. The further-developed content-knowledge instrument was additionally subjected to extensive validation studies. The results show that school knowledge, deeper school knowledge and university knowledge develop significantly in both the bachelor's and the master's program. Significant gains over the bachelor's and master's programs can also be reported for pedagogical content knowledge.
Interestingly, a strong correlation is apparent between content knowledge at the beginning of the studies and the growth of pedagogical content knowledge from the first to the third semester. There is thus first evidence that, as assumed in research, content knowledge is a prerequisite for the development of pedagogical content knowledge. The performance heterogeneity at the beginning of the studies, however, constitutes an obstacle to the development of content knowledge. The group of initially weaker students does not even catch up with the middle of the field over the course of their studies. At the same time, the group of the strongest students learns disproportionately more from the first to the third semester compared with the other students. Overall, the heterogeneous performance picture persists over the course of the studies, which underscores the demand for support for weaker students, especially at the beginning of their studies. As this investigation showed, well-developed prior mathematical knowledge in particular could be helpful for developing content knowledge. The preparatory courses offered so far do not seem to meet this need, so it could be worthwhile to offer additional courses, also relating to content knowledge, throughout the entire introductory study phase. Research findings indicate that weaker students in particular could benefit from a clear structure within these additional courses. A general preparatory study program could also help to equalize prior knowledge.
Hackers and Haecksen belong to the avant-garde of computerization. From the late 1970s onward, in both the Federal Republic of Germany and the GDR, they developed into idiosyncratic computer users with specialized knowledge. They appropriated the medium playfully, created spaces of contact and thus actively involved themselves in the process of computerization. Through their boundary crossings, they highlighted both the opportunities and the risks of digitalization.
Julia Gül Erdogan traces the emergence of hacker cultures in East and West Germany. She analyzes how their partly subversive practices challenged power structures in politics, the economy and society. At the same time, the study highlights similarities and differences in early sub- and countercultural computer use in the two German states.
The multifaceted nature of the concept of angle is as fascinating as it is challenging with regard to its treatment in school mathematics. Starting from different conceptions of the angle concept, this thesis develops a teaching sequence for conveying the angle concept and ultimately translates it into concrete implementations for classroom teaching.
First, a subject-didactic analysis of the angle concept is carried out, accompanied by an information-theoretic definition of angle. Here, a definition of the angle concept is developed by asking which information about an angle is needed in order to describe it. In this way, the conceptions of angle found in the mathematics education literature can be re-derived and validated from the perspective of academic mathematics. In parallel, a procedure is described for how angles, including their dynamic aspects, can be processed computationally, so that conclusions from the information-theoretic definition of angle become available, for example, in dynamic geometry systems.
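The kind of minimal information discussed here, a vertex and one point on each leg, already suffices to compute an angle's measure, which is what a dynamic geometry system does continuously while a point is dragged. A small illustrative sketch (names and conventions are assumptions, not taken from the thesis):

```python
import math

def angle_measure(vertex, p1, p2):
    """An angle specified by a vertex and one point on each leg.
    Returns the directed measure in degrees, counted counterclockwise
    from the leg through p1 to the leg through p2, as a dynamic
    geometry system would update it while a point is dragged."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    return math.degrees(a2 - a1) % 360

print(angle_measure((0, 0), (1, 0), (0, 1)))   # 90.0
print(angle_measure((0, 0), (1, 0), (1, 1)))   # 45.0
```

Because only directions from the vertex matter, the points on the legs can be moved along their rays without changing the measure, which mirrors the distinction between an angle and the information needed to describe it.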
With regard to how an abstraction of the angle concept can proceed in mathematics teaching, the notion of Grundvorstellungen (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to one another. From the combination of the two theories, a general path is derived for how, within the teaching strategy, an initial abstraction of individual angle aspects can be built up, which is intended to enable the generation of basic mental models of the components of the respective angle aspect and of operating with these components. For this purpose, the teaching strategy is adapted, in particular to realize the transition from angle situations to angle contexts. Explicitly for the aspect of the angle field, learning actions and requirements for a learning model that support students in acquiring the concept are described on the basis of investigating the fields of vision of animals.
Activity theory, to which the teaching strategy mentioned above belongs, runs as a common thread through the rest of the thesis, as design principles are now generated on a theoretical basis and lead to the development of an interactive learning environment. For this purpose, the model of Artifact-Centric Activity Theory is used, among others, which describes the network of relationships between students, the mathematical object and an app to be developed as a mediating medium, with the use of the app in the classroom context and its rule-guided development being part of the model. Following the approach of didactical design research, the learning environment is then tested, evaluated and revised in several cycles. A qualitative setting is applied that draws on semiotic mediation and examines to what extent the quality of the learning actions shown by the students can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a resulting learning environment for introducing the concept of the angle field in the fourth grade.
Essays on Macroeconomics
(2021)
This dissertation consists of four self-contained papers. Each paper deals with a specific macroeconomic question. The first paper assesses the distributional implications of environmental policies from a general equilibrium macroeconomic perspective. I develop a New Keynesian model with several types of uncertainties and frictions that incorporates liquidity-constrained households. The model is calibrated to match the German economy, and the numerical results show that climate policy instruments can be associated with regressive welfare effects. Furthermore, the analysis shows that these effects can be mitigated through an appropriate revenue recycling scheme. The second paper deals with short-run inequality dynamics within a real business cycle model. An empirical evaluation shows that the cyclical components of income inequality, the capital share and real GDP are correlated. We develop a tractable representation of common inequality indicators in the general equilibrium model and show that the observed pattern is driven by innovations in the capital share. A Bayesian estimation of the model for the United States with data for the period 1948 to 2017 indicates that the model provides a reasonable fit for the data and successfully replicates the observed pattern of cyclical correlations. The third paper empirically examines the effects of banking regulation on the risk relationship between sovereigns and banks. Based on a comprehensive data set of the European banking sector, we find that the implementation of the novel European banking regulation framework significantly contributed to a weakening of the risk link between sovereigns and banks. The fourth paper empirically examines the role of institutional experience for institutional development in transition economies. To capture institutional experience, we develop a novel index based on historical country records.
The results of cross-sectional and panel estimations suggest that institutional experience helps to explain the divergent economic and institutional development in transition economies after the dissolution of the Soviet Union.
The eastern flank of the Central Andes in northwestern Argentina is characterized by mountain ranges bounded by reverse faults that form an active thick-skinned orogen with a non-systematic spatio-temporal pattern of contractional deformation. This pattern is reflected both in the dispersion of crustal seismic activity and in the location of Quaternary structures across the Eastern Cordillera and the Santa Bárbara System, forming a diffuse orogenic front more than 200 km wide. The study of neotectonic activity in this region has gained relevance in recent years through the application of varied tools, including tectonic geomorphology techniques, remote sensing, geodesy, and conventional field studies. Lacustrine deposits have proven, in numerous examples, to be excellent markers of tectonic activity, given the original horizontality of their layers and their susceptibility to environmental change. For this reason, this work analyzes the lacustrine deposits exposed in the central sector of the Calchaquí valleys (Cafayate region) in order to understand how Quaternary deformation is accommodated in one of the intermontane basins of the active orogenic wedge.
The strike of the Quaternary structures in the study area is subparallel to that of the faults that exhume the surrounding mountain ranges. From the stratigraphic, morphotectonic, and structural study of the lacustrine deposits, a minimum of five deformation episodes affecting the Quaternary stratigraphic column were identified. By integrating balanced structural cross-sections with ages obtained in this work and compiled from the literature, minimum and maximum shortening rates for the middle-late Pleistocene were calculated, ranging between 0.19-2.80 and 0.21-4.47 mm/yr, respectively. To compare these results with regional-scale measurements of active tectonics, data from geodetic stations in northwestern Argentina were compiled and used to construct a horizontal velocity profile. The resulting profile shows a gradual eastward decrease of the velocity vectors, indicating internal activity of the orogen consistent with the records of seismic activity and the regional compilation of Quaternary structures.
In addition to the neotectonic characterization of this sector of the Eastern Cordillera, the stratigraphic analysis of the lacustrine deposits has made it possible to refine the geological evolution of the central sector of the Calchaquí valleys during the Quaternary. At least seven episodes of lacustrine flooding related to the disconnection of the fluvial system from its base level have been identified, giving rise to successive events of aggradation and erosion. The maximum elevations reached by the paleolakes, together with a previously published hydrological model for this region, also allowed a comparison with the regional paleoclimate record.
The results of this thesis represent a significant contribution to the knowledge of the tectonic and stratigraphic evolution of the central sector of the Calchaquí valleys during the Quaternary. Moreover, their integration at the regional scale contributes to a better understanding of the dynamics of deformation in the thick-skinned orogenic wedge of northwestern Argentina.
This project describes the nominal, verbal and 'truncation' systems of Awing and explains the syntactic and semantic functions of the multifunctional LE morpheme in copular and wh-focused constructions. Awing is a Grassfields Bantu language spoken in the North West region of Cameroon. The work begins with morphological processes, viz. deverbals, compounding, reduplication and borrowing, together with a thorough presentation of the pronominal system, and then takes on verbal categories, viz. tense, aspect, mood, verbal extensions, negation, adverbs and the triggers of a homorganic N(asal)-prefix that attaches to the verb and other verbal categories. Awing grammar also exhibits a very unusual phenomenon whereby nouns and verbs take long and short forms; a chapter entitled 'Truncation' is dedicated to it. It is observed that the truncation process does not apply to bare singular NPs, proper names or nouns derived via morphological processes. On the other hand, with the exception of the 1st person non-emphatic possessive determiner and the class 7 noun prefix, nouns generally take the truncated form with modifiers (i.e., articles, demonstratives and other possessives). It is concluded that nominal truncation reflects movement within the DP system (Abney 1987). Truncation of the verb occurs in three contexts: a mass/plurality conspiracy (or lattice structuring in terms of Link 1983) between the verb and its internal argument (i.e., the direct object); a means to align (exhaustive) focus (in terms of Féry 2013); and a means to form polar questions.
The second part of the work focuses on the role of the LE morpheme in copular and wh-focused clauses. First, the syntax of the Awing copular clause is presented, and it is shown that copular clauses in Awing have 'subject-focus' vs. 'topic-focus' partitions and that the LE morpheme indirectly relates such functions. Semantically, it is shown that LE expresses neither contrast nor exhaustivity in copular clauses. Turning to wh-constructions, the work adheres to Hamblin's (1973) idea that the meaning of a question is the set of its possible answers and, based on Rooth's (1985) underspecified semantic notion of focus alternatives, concludes that the LE morpheme is not a focus marker (FM) in Awing: LE does not generate or indicate the presence of alternatives (Krifka 2007); rather, the LE morpheme can associate with wh-elements as a focus-sensitive operator with semantic import that operates on the focus alternatives by presupposing an exhaustive answer, among other notions. With focalized categories, the project uses a number of diagnostics to further substantiate the claim in Fominyam & Šimík (2017) that exhaustivity is part of the semantics of the LE morpheme and not derived via contextual implicature. Hence, unlike in copular clauses, the LE morpheme with wh-focused categories is analysed as the morphological exponent of a functional head Exh corresponding to Horvath's (2010) EI (Exhaustive Identification). The work ends with the syntax of verb focus and negation and modifies the idea in Fominyam & Šimík (2017) that the focalized verb associating with the exhaustive (LE) particle is a lower copy of the finite verb that has been moved to Agr. It is argued that the LE-focused verb 'cluster' is an instantiation of adjunction. The conclusion is that verb doubling with verb focus in Awing is neither a realization of two copies of one and the same verb (Fominyam and Šimík 2017) nor the result of a copy triggered by a focus marker (Aboh and Dyakonova 2009); rather, the focalized copy is merged directly as the complement of LE, forming a type of adjoining cluster.
In my doctoral thesis, I examine continuous gravity measurements for monitoring the geothermal site at Þeistareykir in North Iceland. With the help of high-precision superconducting gravity meters (iGravs), I investigate underground mass changes that are caused by operation of the geothermal power plant (i.e., by extraction of hot water and reinjection of cold water). The overall goal of this research project is to make a statement about the sustainable use of the geothermal reservoir, from which the Icelandic energy supplier and power plant operator Landsvirkjun should also benefit.
As a first step, to investigate the performance and measurement stability of the gravity meters, I performed comparative measurements at the gravimetric observatory J9 in Strasbourg in summer 2017. From the three-month gravity time series, I examined the calibration, noise and drift behaviour of the iGravs in comparison to the stable long-term time series of the observatory's superconducting gravity meters. After preparatory work in Iceland (setup of gravity stations, additional measuring equipment and infrastructure, discussions with Landsvirkjun and meetings with the Icelandic partner institute ISOR), gravity monitoring at Þeistareykir was started in December 2017. With the help of the iGrav records from the first 18 months of measurements, I carried out the same investigations (on calibration, noise and drift behaviour) as at J9 to understand how the transport of the superconducting gravity meters to Iceland may influence instrumental parameters.
In the further course of this work, I focus on modelling and reduction of local gravity contributions at Þeistareykir. These comprise additional mass changes due to rain, snowfall and vertical surface displacements that superimpose onto the geothermal signal of the gravity measurements. For this purpose, I used data sets from additional monitoring sensors that are installed at each gravity station and adapted scripts for hydro-gravitational modelling. The third part of my thesis targets geothermal signals in the gravity measurements.
Together with my PhD colleague Nolwenn Portier from France, I carried out additional gravity measurements with a Scintrex CG5 gravity meter at 26 measuring points within the geothermal field in the summers of 2017, 2018 and 2019. These annual time-lapse gravity measurements are intended to extend the spatial coverage of gravity data from the three continuous monitoring stations to the entire geothermal field. The combination of CG5 and iGrav observations, together with annual reference measurements with an FG5 absolute gravity meter, constitutes the hybrid gravimetric monitoring method for Þeistareykir. Comparison of the gravimetric data with local borehole measurements (of groundwater levels and geothermal extraction and injection rates) is used to relate the observed gravity changes to the actually extracted (and reinjected) geothermal fluids. An approach to explaining the observed gravity signals by means of forward modelling of the geothermal production rate is presented at the end of the third (hybrid gravimetric) study. Further modelling with the help of the processed gravity data is planned by Landsvirkjun. In addition, the experience from time-lapse and continuous gravity monitoring will be used for future gravity measurements at the Krafla geothermal field, 22 km south-east of Þeistareykir.
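As a first-order illustration of how a gravity change relates to a subsurface mass change, the textbook infinite Bouguer slab approximation gives Δg = 2πGρΔh. This is a generic sketch, not the forward model actually used for Þeistareykir, and the numbers below (porosity, water-table drop) are hypothetical:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_dg(density, thickness):
    """Gravity effect (m/s^2) of an infinite horizontal slab with the
    given density contrast (kg/m^3) and thickness (m): dg = 2*pi*G*rho*h."""
    return 2 * math.pi * G * density * thickness

# Hypothetical example: a 1 m drop of the water table in rock with
# 10 % porosity, i.e. an effective density change of 100 kg/m^3.
dg = bouguer_slab_dg(density=100.0, thickness=1.0)
print(dg * 1e8)  # in microGal (1 uGal = 1e-8 m/s^2): about 4.2 uGal
```

An effect of a few microGal is well within the sensitivity of superconducting gravimeters, which is why such mass redistributions are detectable at all in continuous records.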
An ever-increasing number of prediction models is published every year across medical specialties. Prognostic or diagnostic in nature, these models support medical decision making by utilizing one or more items of patient data to predict outcomes of interest, such as mortality or disease progression. While different computer tools exist that support clinical predictive modeling, I observed that the state of the art falls short of addressing the needs of research clinicians. When it comes to model development, current support tools either 1) target specialist data engineers, requiring advanced coding skills, or 2) cater to a general-purpose audience, and therefore do not address the specific needs of clinical researchers. Furthermore, barriers to data access across institutional silos, cumbersome model reproducibility, and extended experiment-to-result times significantly hamper the validation of existing models. Similarly, without access to interpretable explanations that allow a given model to be fully scrutinized, acceptance of machine learning approaches will remain limited. Adequate tool support, i.e., a software artifact more closely targeted at the needs of clinical modeling, can help mitigate the challenges identified with respect to model development, validation and interpretation. To this end, I conducted interviews with modeling practitioners in health care to better understand the modeling process itself and to ascertain in what respects adequate tool support could advance the state of the art. The functional and non-functional requirements identified served as the foundation for a software artifact that can be used for modeling outcome and risk prediction in health research. To establish the appropriateness of this approach, I implemented a use case study in the Nephrology domain for acute kidney injury, which was validated in two different hospitals.
Furthermore, I conducted a user evaluation to ascertain whether such an approach provides benefits compared to the state of the art and the extent to which clinical practitioners could benefit from it. Finally, when updating models for external validation, practitioners need to apply feature selection approaches to pinpoint the most relevant features, since electronic health records tend to contain many candidate predictors. Building upon interpretability methods, I developed an explanation-driven recursive feature elimination approach, which was comprehensively evaluated against state-of-the-art feature selection methods. This thesis therefore makes three main contributions: 1) the design and development of a software artifact tailored to the specific needs of the clinical modeling domain, 2) a demonstration of its application in a concrete case in the Nephrology context, and 3) the development and evaluation of a new feature selection approach, applicable in a validation context, that builds upon interpretability methods. In conclusion, I argue that appropriate tooling, which relies on standardization and parametrization, can support rapid model prototyping and collaboration between clinicians and data scientists in clinical predictive modeling.
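The general shape of an explanation-driven recursive feature elimination can be sketched as follows. This is a generic illustration, not the thesis' actual implementation: it uses scikit-learn's permutation importance as a stand-in for the interpretability method and synthetic data in place of real electronic health records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an EHR-style dataset with many candidate predictors.
X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           random_state=0)
features = list(range(X.shape[1]))
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Recursively drop the feature with the lowest explanation-based
# importance until a target number of predictors remains.
while len(features) > 4:
    model = RandomForestClassifier(random_state=0).fit(X_tr[:, features], y_tr)
    imp = permutation_importance(model, X_val[:, features], y_val,
                                 n_repeats=5, random_state=0)
    weakest = int(np.argmin(imp.importances_mean))
    features.pop(weakest)

print(sorted(features))  # the retained candidate predictors
```

The key design choice is that elimination is driven by the model explanation computed on held-out data, rather than by model-internal coefficients, so the same scheme works for any interpretability method that yields per-feature scores.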
Angular momentum is a particularly sensitive probe into stellar evolution because it changes significantly over the main sequence life of a star. In this thesis, I focus on young main sequence stars, some of which feature a rapid evolution in their rotation rates. This transition from fast to slow rotation is inadequately explored observationally, and this work aims to provide insights into its properties and time scales, but also investigates stellar rotation in young open clusters in general.
I focus on the two open clusters NGC 2516 and NGC 3532, which are ~150 Myr (zero-age main sequence age) and ~300 Myr old, respectively. From 42-day-long time series photometry obtained at the Cerro Tololo Inter-American Observatory, I determine stellar rotation periods in both clusters. With accompanying low-resolution spectroscopy, I measure radial velocities and chromospheric emission for NGC 3532, the former to establish a clean membership and the latter to probe the rotation-activity connection.
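Rotation periods are typically extracted from such time series with a periodogram analysis. The sketch below runs a Lomb-Scargle periodogram on a synthetic, unevenly sampled light curve; the 5-day period, amplitudes and noise level are illustrative assumptions, not values from the survey:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic spot-modulated light curve: 42 d of unevenly sampled
# photometry for a star with a hypothetical 5-day rotation period.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 42, 500))                     # days
flux = 0.02 * np.sin(2 * np.pi * t / 5.0) \
       + 0.005 * rng.standard_normal(t.size)             # signal + noise

# Scan trial periods and pick the strongest periodogram peak.
periods = np.linspace(0.5, 20, 4000)
omega = 2 * np.pi / periods                              # angular frequencies
power = lombscargle(t, flux - flux.mean(), omega)
best_period = periods[np.argmax(power)]
print(best_period)  # peak near the injected 5-day period
```

The Lomb-Scargle periodogram is the standard choice here because ground-based photometry is unevenly sampled, which rules out a plain FFT.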
The rotation period distribution derived for NGC 2516 is identical to that of four other coeval open clusters, including the Pleiades, which shows the universality of stellar rotation at the zero-age main sequence. Among the similarities (with the Pleiades), the "extended slow rotator sequence" is a new, universal, yet sparse feature in the colour-period diagrams of open clusters. From a membership study, I find NGC 3532 to be one of the richest nearby open clusters, with 660 confirmed radial velocity members, and to be slightly sub-solar in metallicity. The stellar rotation periods for NGC 3532 are the first published for a 300 Myr-old open cluster, a key age for understanding the transition from fast to slow rotation. The fast rotators at this age have evolved significantly beyond what is observed in NGC 2516, which makes it possible to estimate the spin-down timescale and to explore the difficulties that angular momentum models have in describing this transition. The transitional sequence is also clearly identified in a colour-activity diagram of stars in NGC 3532. The synergies between the chromospheric activity and the rotation periods make it possible to understand the colour-activity-rotation connection for NGC 3532 in unprecedented detail and to estimate additional rotation periods for members of NGC 3532, including stars on the "extended slow rotator sequence".
In conclusion, this thesis probes the transition from fast to slow rotation but has also more general implications for the angular momentum evolution of young open clusters.
Simultaneously speculative and inspired by everyday experiences, this volume develops an aesthetics of metabolism that offers a new perspective on the human-environment relation, one that is processual, relational, and not dependent on conscious thought. In art installations, design prototypes, and research-creation projects that utilize air, light, or temperature to impact subjective experience, the author finds aesthetic milieus that shift our awareness to the role of different sense modalities in aesthetic experience. Metabolic and atmospheric processes allow for an aesthetics besides and beyond the usually dominant visual sense.
The author addresses the question of religious conversion in an asylum procedure, considering the timing, manner, and circumstances of the change of religion. He further examines how so-called conversion is to be handled and assessed by the competent authorities and courts. By way of introduction, he provides an overview of the protection of freedom of religion and belief under international law as well as of typical risk situations. In addition, he deals with the legal foundations of asylum and refugee law and draws connections to religion as a ground for flight and persecution.
The focus lies on the examination of the procedural stages in which conversion becomes relevant, taking national and European case law into account. Of particular importance are the remarks on the interplay between the state's duties of investigation and the applicants' duties to cooperate, with due regard to the particularities of the multi-level system of fundamental and human rights.
Also central are the remarks on the handling of baptismal certificates and other attestations of religious conviction. Particular weight is given to the constitutional position of religious communities and to the question of whether a religious community's decision to admit a new member binds the authority in the asylum procedure. The author addresses this problem by drawing on the relevant literature and pertinent case law.
This legal study not only offers the actors involved an introduction to the subject of religious conversion in asylum procedures, but also provides readers with practical guidance on the most important questions surrounding conversion in such procedures. Practical relevance arises, for example, from the development of important impulses and recommendations for a procedural practice that is at once modern, consistent with the rule of law, and oriented toward fundamental rights.
Das Regulierungsermessen
(2021)
The thesis examines the transfer of the concept of regulatory discretion (Regulierungsermessen), developed in telecommunications law, to energy law and concludes that it is unsuited to the norm structures found there and has led to losses in judicial protection. Since the traditional doctrines of administrative law likewise offer no suitable instruments for the method-based decision-making of the regulatory authorities, the concept of subsumption discretion (Subsumtionsermessen) is introduced into the discourse in order to better capture the specific administrative activity in energy regulation, which is aimed at the quasi-creation of competition. In any case, German energy regulation practice is facing upheaval: the CJEU will presumably confirm the Commission's view that the pre-steering of tariff regulation by statutory ordinance violates Art. 37(1)(a) and Art. 37(6)(a) and (b) of Directives 2009/72/EC and 2009/73/EC.
Background and objectives: The intricate interdependencies between the musculoskeletal and neural systems build the foundation for postural control in humans, which is a prerequisite for successful performance of daily and sports-specific activities. Balance training (BT) is a well-established training method to improve postural control and its components (i.e., static/dynamic steady-state, reactive, proactive balance). The effects of BT have been studied in adult and youth populations, but were systematically and comprehensively assessed only in young and old adults. Additionally, when taking a closer look at established recommendations for BT modalities (e.g., training period, frequency, volume), standardized means to assess and control the progressive increase in exercise intensity are missing. Considering that postural control is primarily neuronally driven, intensity is not easy to quantify. In this context, a measure of balance task difficulty (BTD) appears to be an auspicious alternative as a training modality to monitor BT and control training progression. However, it remains unclear how a systematic increase in BTD affects balance performance and neurophysiological outcomes. Therefore, the primary objectives of the present thesis were to systematically and comprehensively assess the effects of BT on balance performance in healthy youth and to establish dose-response relationships for an adolescent population. Additionally, this thesis aimed to investigate the effects of a graded increase in BTD on balance performance (i.e., postural sway) and neurophysiological outcomes (i.e., leg muscle activity, leg muscle coactivation, cortical activity) in adolescents.
Methods: Initially, a systematic review and meta-analysis on the effects of BT on balance performance in youth was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines. Following this complementary analysis, thirteen healthy adolescents (3 female, 10 male) aged 16-17 years were enrolled in two cross-sectional studies. The participants executed bipedal balance tasks on a multidirectional balance board that allowed six gradually increasing levels of BTD by narrowing the balance board's base of support. During task performance, two pressure-sensitive mats fixed on the balance board recorded postural sway. Leg muscle activity and leg muscle coactivation were assessed via electromyography, while electroencephalography was used to monitor cortical activity.
Results: Findings from the systematic review and meta-analysis indicated moderate-to-large effects of BT on static and dynamic balance performance in youth (static: weighted mean standardized mean difference [SMDwm] = 0.71; dynamic: SMDwm = 1.03). In adolescents, training-induced effects were moderate for static (SMDwm = 0.61) and large for dynamic (SMDwm = 0.86) balance performance. Independently (i.e., modality-specifically) calculated dose-response relationships identified a training period of 12 weeks, a frequency of two training sessions per week, a total of 24-36 sessions, a duration of 4-15 minutes, and a total duration of 31-60 minutes as the training modalities with the largest effect on overall balance performance in adolescents. However, the implemented meta-regression indicated that none of these training modalities (R² = 0%) could predict the observed performance-increasing effects of BT.
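For reference, the standardized mean difference underlying the reported SMDwm values is the between-group mean difference divided by the pooled standard deviation (Cohen's d). The numbers in this sketch are made up for illustration and are not values from the meta-analysis:

```python
import math

def smd(m_exp, sd_exp, n_exp, m_ctl, sd_ctl, n_ctl):
    """Cohen's d: between-group mean difference over the pooled SD."""
    pooled = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                       / (n_exp + n_ctl - 2))
    return (m_exp - m_ctl) / pooled

# Hypothetical post-test balance scores for a training and a control group.
d = smd(m_exp=24.0, sd_exp=4.0, n_exp=20, m_ctl=21.0, sd_ctl=4.0, n_ctl=20)
print(round(d, 2))  # 0.75, a moderate-to-large effect by common conventions
```

In a meta-analysis, such per-study SMDs are then combined into a weighted mean (the SMDwm reported above), with weights usually derived from each study's variance.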
Results from the first cross-sectional study revealed that a gradually increasing level of BTD caused increases in postural sway (p < 0.001; d = 6.36), higher leg muscle activity (p < 0.001; 2.19 < d < 4.88), and higher leg muscle coactivation (p < 0.001; 1.32 < d < 1.41). Increases in postural sway and leg muscle activity were mainly observed at low and high levels of task difficulty during continuous performance of the respective balance task. Results from the second cross-sectional study indicated frequency-specific increases/decreases in cortical activity of different brain areas (p < 0.005; 0.92 < d < 1.80) as a function of BTD. Higher cortical activity within the theta frequency band in the frontal and central right brain areas was observed with increasing postural demands. Concomitantly, activity in the alpha-2 frequency band was attenuated in parietal brain areas.
Conclusion: BT is an effective method to increase static and dynamic balance performance and, thus, improve postural control in healthy youth populations. However, none of the reported training modalities (i.e., training period, frequency, volume) could explain the effects on balance performance. Furthermore, a gradually increasing level of task difficulty resulted in increases in postural sway, leg muscle activity, and coactivation. Frequency- and brain-area-specific increases/decreases in cortical activity emphasize the involvement of frontoparietal brain areas in regulatory processes of postural control dependent on BTD. Overall, it appears that increasing BTD can be easily accomplished by narrowing the base of support. Since valid methods to assess and quantify BT intensity do not exist, increasing BTD appears to be a very useful candidate for implementing and monitoring progression in BT programs in healthy adolescents.
Self-sustained oscillations are among the most commonly observed phenomena in biological systems. They emanate from non-linear systems in a heterogeneous environment and can be described by the theory of dynamical systems. Part of this theory considers reduced models of the oscillator dynamics in terms of amplitudes and a phase variable. Such variables are highly attractive for theoretical and experimental studies. Theoretically, these variables correspond to an integrable linearization of the generally non-linear system. Experimentally, well-established approaches exist to extract phases from oscillator signals. Notably, phase models can also be defined for networks of oscillators. One highly active field examines the effects of non-local coupling among oscillators, which is thought to play a key role in networks with strong coupling. The dissertation introduces and expands the knowledge about high-order phase coupling in networks of oscillators. Mathematical calculations consider the Stuart-Landau oscillator. A novel phase estimation scheme for direct observations of an oscillator dynamics is introduced based on numerics. A numerical study of high-order phase coupling applies a Fourier fit to the Stuart-Landau and the van der Pol oscillator. The numerical approach is finally tested on observation-based phase estimates of the Morris-Lecar neuron. A popular approach for the construction of phases from signals is based on phase demodulation by means of the Hilbert transform. Generally, observations of oscillations contain a small and generic variation of their amplitude. The work presents a way to quantify how much variations of the signal amplitude spoil a phase demodulation procedure. In the ideal case of purely phase-modulated signals, amplitude modulations vanish; however, the Hilbert transform produces artificial variations of the reconstructed amplitude even in this case.
The work proposes a novel procedure called Iterative Hilbert Transform Embedding to obtain an optimal demodulation of signals. The text presents numerous examples and tests of application for the method, covering multicomponent signals, observables of highly stable limit cycle oscillations and noisy phase dynamics. The numerical results are supported by a spectral theory of convergence for weak phase modulations.
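The amplitude artifact of plain Hilbert demodulation can be reproduced in a few lines: for a purely phase-modulated unit-amplitude signal, the reconstruction should ideally return a constant amplitude, yet small artificial variations appear. This is a generic illustration with arbitrary carrier and modulation frequencies, not the thesis' implementation of Iterative Hilbert Transform Embeddings:

```python
import numpy as np
from scipy.signal import hilbert

# Purely phase-modulated test signal with constant (unit) true amplitude.
t = np.linspace(0, 100, 20000, endpoint=False)
true_phase = 2 * np.pi * 0.3 * t + 0.5 * np.sin(2 * np.pi * 0.1 * t)
signal = np.cos(true_phase)

# Analytic signal via the Hilbert transform: its angle is the reconstructed
# (wrapped) phase, its modulus the reconstructed amplitude.
analytic = hilbert(signal)
phase = np.unwrap(np.angle(analytic))
amplitude = np.abs(analytic)

# Although the true amplitude is exactly 1, the reconstruction shows
# small artificial variations; the phase itself is recovered closely.
print(np.std(amplitude))
```

The iterative embedding proposed in the thesis is designed to remove precisely this residual amplitude variation by repeatedly demodulating with the current phase estimate.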
Proteins of halophilic organisms that accumulate molar concentrations of KCl in their cytoplasm have a much higher content of acidic amino acids than proteins of mesophilic organisms. It has been proposed that this excess is necessary to keep proteins hydrated in an environment with low water activity: either via direct interactions between water and the carboxylate groups of acidic amino acids, or via cooperative interactions between acidic amino acids and hydrated cations, which would stabilize the folded protein. In the course of this Ph.D. study, we investigated these possibilities using atomistic molecular dynamics simulations and classical force fields. High-quality parameters describing the interaction between K+ and the carboxylate groups of acidic amino acids are indispensable for this study. We first evaluated the quality of the default parameters for these ions within the widely used AMBER ff14SB force field for proteins and found that they perform poorly. We propose new parameters, which reproduce solution activity derivatives of potassium acetate solutions up to 2 mol/kg and the distances between potassium ions and carboxylate groups observed in X-ray structures of proteins. To understand the role of acidic amino acids in protein hydration, we investigated this aspect for five halophilic proteins in comparison with five mesophilic ones. Our results do not support the necessity of acidic amino acids for keeping folded proteins hydrated. Proteins with a larger fraction of acidic amino acids do indeed have higher hydration levels. However, the hydration level of each protein is identical at low (b_KCl = 0.15 mol/kg) and high (b_KCl = 2 mol/kg) KCl concentration. It has also been proposed that cooperative interactions of acidic amino acids with nearby hydrated cations stabilize the folded protein and slow down its solvation shell; according to this theory, the cations would be preferentially excluded from the unfolded structure.
We investigate this possibility through extensive free-energy calculations. We find that cooperative interactions between neighboring acidic amino acids exist and are mediated by the ions in solution, but that they are present in both the folded and unfolded structures of halophilic proteins. The translational dynamics of the solvation shell is barely distinguishable between halophilic and mesophilic proteins; such a cooperative effect therefore does not result in unusually slow solvent dynamics, as has been suggested.
Rheology describes the flow of matter under the influence of stress and, with respect to solids, investigates how they deform when subjected to stress. As the deformation of the Earth's outer layers, the lithosphere and the crust, is a major focus of rheological studies, rheology in the geosciences describes how strain evolves in rocks of variable composition and temperature under tectonic stresses. It is here that deformation processes shape the form of ocean basins and mountain belts, which ultimately result from the complex interplay between lithospheric plate motion and the susceptibility of rocks to plate-tectonic forces. A rigorous study of the strength of the lithosphere and of deformation phenomena thus requires in-depth studies of the rheological characteristics of the materials involved and of the temporal framework of deformation processes.
This dissertation aims at analyzing the influence of the physical configuration of the lithosphere on the present-day thermal field and the overall rheological characteristics of the lithosphere to better understand variable expressions in the formation of passive continental margins and the behavior of strike-slip fault zones. The main methodological approach chosen is to estimate the present-day thermal field and the strength of the lithosphere by 3-D numerical modeling. The distribution of rock properties is provided by 3-D structural models, which are used as the basis for the thermal and rheological modeling. The structural models are based on geophysical and geological data integration, additionally constrained by 3-D density modeling. More specifically, to decipher the thermal and rheological characteristics of the lithosphere in both oceanic and continental domains, sedimentary basins in the Sea of Marmara (continental transform setting), the SW African passive margin (old oceanic crust), and the Norwegian passive margin (young oceanic crust) were selected for this study.
The Sea of Marmara, in northwestern Turkey, is located where the dextral North Anatolian Fault zone (NAFZ) accommodates the westward escape of the Anatolian Plate toward the Aegean. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the lateral crustal heterogeneities is presented for the first time in this study. Here, I use different gravity datasets and exploit the general non-uniqueness of potential field modeling to propose three possible end-member scenarios of the crustal configuration. The models suggest that pronounced gravitational anomalies in the basin originate from significant density heterogeneities within the crust. The rheological modeling reveals that the associated variations in lithospheric strength control the mechanical segmentation of the NAFZ. Importantly, a strong crust that is mechanically coupled to the upper mantle spatially correlates with aseismic patches where the fault bends and changes its strike in response to the presence of high-density lower crustal bodies. Between the bends, mechanically weaker crustal domains that are decoupled from the mantle are characterized by creep.
For the passive margins of SW Africa and Norway, two previously published 3-D conductive and lithospheric-scale thermal models were analyzed. These 3-D models differentiate various sedimentary, crustal, and mantle units and integrate different geophysical data, such as seismic observations and the gravity field. Here, the rheological modeling suggests that the present-day lithospheric strength across the oceanic domain is ultimately affected by the age and past thermal and tectonic processes as well as the depth of the thermal lithosphere-asthenosphere boundary, while the configuration of the crystalline crust dominantly controls the rheological behavior of the lithosphere beneath the continental domains of both passive margins.
The thermal and rheological models show that variations in lithospheric strength are fundamentally influenced by the temperature distribution within the lithosphere. Moreover, as the composition of the lithosphere significantly influences the present-day thermal field, it also affects the rheological characteristics of the lithosphere. Overall, my studies add to our understanding of regional tectonic deformation processes and the long-term behavior of sedimentary basins; they confirm other analyses that have pointed out that crustal heterogeneities in the continents result in diverse lithospheric thermal characteristics, which in turn result in more complex and variable rheological behavior compared to oceanic domains with their thinner, more homogeneous crust.
The prevalence of diseases associated with misfolded proteins increases with age. When cellular defense mechanisms become limited, misfolded proteins form aggregates and may also develop more stable cross-β structures, ultimately forming amyloid aggregates. Amyloid aggregates are associated with neurodegenerative diseases such as Alzheimer's disease and Huntington's disease. The formation of amyloid deposits, their toxicity, and cellular defense mechanisms have been studied intensively. However, surprisingly little is known about the effects of protein aggregates on cellular signal transduction. It is also not understood whether the presence of aggregation-prone but still soluble proteins affects signal transduction.
In this study, still-soluble aggregation-prone HttExon1Q74 and its amyloid aggregates were used to analyze the effect of amyloid aggregates on the internalization and receptor activation of G protein-coupled receptors (GPCRs), the largest protein family of mammalian cell surface receptors involved in signal transduction. Aggregated HttExon1Q74, but not its soluble form, inhibited the ligand-induced clathrin-mediated endocytosis (CME) of various GPCRs. Most likely, this inhibitory effect is based on a terminal sequestration to the aggregates of the chaperone HSC70, which is necessary for CME. Using the vasopressin V1a receptor (V1aR) and the corticotropin-releasing factor receptor 1 (CRF1R) as models, it could be shown that the presence of HttExon1Q74 aggregates and the inhibition of ligand-induced CME lead to an accumulation of desensitized receptors at the plasma membrane. In turn, this disrupts the Gq-mediated Ca2+ signaling of the V1aR and the Gs-mediated cAMP signaling of the CRF1R, respectively. In contrast to HttExon1Q74 amyloid aggregates, soluble HttExon1Q74 as well as amorphous aggregates did not inhibit GPCR internalization and signaling, demonstrating that cellular signal transduction mechanisms are specifically impaired in response to the formation of amyloid aggregates.
In addition, preliminary experiments showed that HttExon1Q74 aggregates provoke an increase in the membrane expression of a protein from a structurally and functionally unrelated membrane protein family, namely the serotonin transporter SERT. As SERT is the main pharmacological target in the treatment of depression, this could shed light on this commonly occurring comorbidity in neurodegenerative diseases, in particular in early disease states.
Alix Giraud-Willer examines the justification of fixed mandatory minimum sentences on a comparative-law basis. Such minimums considerably restrict the judge's discretion in sentencing. Absolute penalties, an extreme form of fixed minimum sentences, in principle even rule out judicial discretion entirely. While German criminal law provides for fixed minimum sentences, including absolute penalties, French law has meanwhile moved away from fixed (elevated) minimum sentences. The author examines the interplay between the statutory fixing of severe penalties, the reactions of sentencing practice, and the legislative relaxation of penal threats in both legal systems. By looking at two jurisdictions, she offers explanations for certain features of current sanctions law as well as food for thought for its reform.
Weight stigma, and internalized weight stigma in particular, is associated with negative consequences for the physical and mental health of children and adolescents. Since the evidence in this age range is still insufficient, the aim of this dissertation was to examine contributing factors and consequences of weight-related stigmatization and internalized weight stigma in children and adolescents. The analyses were based on two large samples recruited at schools as part of the prospective PIER study. The first publication draws on a sample of children and adolescents aged 9 to 19 years (49.2% female) and examined the prospective bidirectional relationship between experienced weight stigmatization and weight status using a latent structural equation model across three measurement points. The other two publications draw on a sample of children aged 6 to 11 years (51.1% female). The second publication used hierarchical regression to analyze which intrapersonal risk factors prospectively predict internalized weight stigma. The third publication used ROC curves to examine the level at which internalized weight stigma is associated with an increased risk of psychosocial problems and disordered eating. The first publication showed that a higher weight status is associated with greater subsequent weight stigmatization and, conversely, that weight stigmatization also predicts later weight status. The second publication identified weight status, weight-related teasing, depressive symptoms, body dissatisfaction, importance of one's own figure, female gender, and lower parental educational attainment as predictors of internalized weight stigma.
The third publication demonstrated that even a low level of internalized weight stigma is associated with an increased risk of disordered eating and with further psychosocial problems. Overall, both experienced and internalized weight stigma proved to be relevant constructs in children and adolescents across all weight groups, forming a complex interplay over the course of development. It became clear that it is essential to take bidirectional mechanisms into account. This dissertation provides first starting points for the design of prevention and intervention measures to avert unfavorable developmental trajectories resulting from weight stigmatization and internalized weight stigma.
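An ROC-based cutoff analysis of the kind described above can be sketched as follows. The scores, outcome labels, and the choice of Youden's J as the selection criterion are illustrative assumptions, not taken from the study:

```python
# Minimal sketch of a ROC-based cutoff analysis: given internalized-weight-stigma
# scores and a binary outcome (e.g., disordered eating present = 1), find the
# cutoff that maximizes Youden's J = TPR - FPR. All data are invented.

def roc_points(scores, labels):
    """Return (threshold, tpr, fpr) for every candidate cutoff."""
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((t, tp / positives, fp / negatives))
    return points

def best_cutoff(scores, labels):
    """Cutoff maximizing Youden's J statistic (TPR - FPR)."""
    return max(roc_points(scores, labels), key=lambda p: p[1] - p[2])[0]

# Invented example: higher stigma scores tend to co-occur with the outcome.
scores = [1, 2, 2, 3, 4, 5, 5, 6, 7, 8]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
print(best_cutoff(scores, labels))  # prints 4
```

In practice, such analyses also report sensitivity, specificity, and the area under the curve at the chosen cutoff.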
Inequalities in health are a prevalent feature of societies, and as societies we condemn inequalities that are rooted in immutable circumstances such as gender, race, and parental background. Consequently, policy makers are interested in measuring and understanding the causes of health inequalities rooted in circumstances. However, identifying causal estimates of these relationships is highly ambitious, for reasons such as the presence of confounders or measurement error in the data. This thesis contributes to this ambitious endeavour by addressing these challenges in four chapters.
In the first chapter, I use 25 years of rich health information to describe three features of intergenerational health mobility in Germany. First, I describe the joint permanent health distribution of parents and their children: a ten-percentile increase in parental permanent health is associated with a 2.3-percentile increase in their child's health. Second, a percentile-point increase in permanent health rank is associated with a 0.8% to 1.4% increase in permanent income for children and parents, respectively. Non-linearities in the association between permanent health and income create incentives to escape the bottom of the permanent health distribution. Third, upward mobility in permanent health varies with parental socio-economic status.
In the second chapter, we estimate the effect of maternal schooling on children's mental health in adulthood. Using the Socio-Economic Panel and a mental health measure based on the SF-12 questionnaire, we exploit a compulsory schooling law reform to identify the causal effect of maternal schooling on children's mental health. While the theoretical predictions are ambiguous, we do not find that a mother's schooling affects her children's mental health. However, we find a positive effect on children's physical health, operating mainly through physical functioning. In addition, despite the absence of a reduced-form effect on mental health, we find evidence that the number of friends moderates the relationship between maternal schooling and children's mental health.
In the third Chapter, against a background of increasing violence against non-natives, we estimate the effect of hate crime on refugees’ mental health in Germany. For this purpose, we combine two datasets: administrative records on xenophobic crime against refugee shelters by the Federal Criminal Office and the IAB-BAMF-SOEP Survey of Refugees. We apply a regression discontinuity design in time to estimate the effect of interest. Our results indicate that hate crime has a substantial negative effect on several mental health indicators, including the Mental Component Summary score and the Patient Health Questionnaire-4 score. The effects are stronger for refugees with closer geographic proximity to the focal hate crime and refugees with low country-specific human capital. While the estimated effect is only transitory, we argue that negative mental health shocks during the critical period after arrival have important long-term consequences.
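A regression discontinuity design in time compares outcomes observed just before and just after the focal event while controlling for a smooth trend in the running variable. The following minimal sketch uses invented data and a plain OLS fit; the variable names and the linear specification are illustrative assumptions, not the study's actual estimator:

```python
# Sketch of a regression-discontinuity-in-time estimate (invented data): regress
# the outcome on a treatment dummy (observation after the focal event) plus a
# linear trend in the running variable (days relative to the event). OLS is
# solved via the normal equations with Gaussian elimination, stdlib only.

def ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination with partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):              # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Invented data: days relative to the event; outcome drops by ~2 points after it.
days = [-5, -4, -3, -2, -1, 1, 2, 3, 4, 5]
mental_health = [10.5, 10.4, 10.3, 10.2, 10.1, 7.9, 7.8, 7.7, 7.6, 7.5]
X = [[1.0, 1.0 if d > 0 else 0.0, float(d)] for d in days]  # intercept, jump, trend
beta = ols(X, mental_health)
print(round(beta[1], 2))  # estimated discontinuity at the cutoff: prints -2.0
```

The coefficient on the treatment dummy is the estimated jump at the event date; applied studies would additionally restrict the bandwidth around the cutoff and cluster standard errors.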
In the last chapter of this thesis, we investigate how the economic consequences of the pandemic and the government-mandated measures to contain its spread affect the self-employed, particularly women, in Germany. For our analysis, we use representative real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are 35% more likely to experience income losses than their male counterparts. We do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be affected by government-imposed restrictions, e.g., the regulation of opening hours. We conclude that future policy measures intended to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
Anerkennung und Macht
(2021)
In the present study, I pursued the aim of making an independent substantive contribution to a debate directed against Honneth's critical theory of society. In this debate, Honneth is criticized on the grounds that his critical theory of society, contrary to its own systematic objective, fails to critically interrogate all phenomena of social domination in modern liberal-democratic societies. For social recognition, which Honneth treats as the key concept for this critical interrogation, and in which social domination is linked to social disrespect (understood as a lack of social recognition), can, according to the critics, itself in fact serve as a medium for establishing social subjection. This happens in processes of identity formation in which social recognition grants individuals, as recognized subjects, certain identity possibilities and in doing so simultaneously excludes other identity possibilities, thereby constraining that identity and, in that respect, exercising domination. This is a form of social domination that is established through social recognition itself. According to the objection, Honneth does not consider that social recognition can have such a negative effect on the individuals it recognizes. This raises the questions of whether social recognition in processes of identity formation is always accompanied by social domination and how this type of social domination can be criticized. Honneth most recently answered these questions in a personal conversation with Allen and Cooke (two participants in the debate against him).
There, he takes the view, shared by both interlocutors, that the operation of restricting identity possibilities is not in itself an operation that, as is otherwise claimed in the debate against his critical theory of society, amounts to social domination. This view rests on the idea that social recognition proves to establish domination in that practical context only on the condition that it violates immanent principles that define substantively critical standards.
My contribution to this debate against Honneth consists, on the one hand, in showing that both this view and this idea are argumentatively deficient and, on the other hand, in carrying out the project of remedying this argumentative deficiency myself. Against that view, I argue that the three authors do not explain in their conversation why social recognition does not exercise domination when it restricts the identity possibilities of the individuals it recognizes; rather, this restriction does in fact constitute domination over these individuals, and the debate against Honneth builds largely on precisely this fact. Against that idea, I have posed and answered five critical questions that relate not only to the idea itself but also to further, closely related ideas raised by the three authors.
The business problem of inefficient processes, imprecise process analyses and simulations, and non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this work proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the modeling language designed for it and its mathematical formulation, and which is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulation (NPS), and Neuronal Process Optimization (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Previous behavioral studies showed that perceptual changes in infancy can be observed in multiple patterns, namely decline (e.g., Mattock et al., 2008; Yeung et al., 2013), maintenance (e.g., Chen & Kager, 2016) and U-shaped development (Liu & Kager, 2014).
This dissertation contributes further to the understanding of the developmental trajectory of phonological acquisition in infancy. The dissertation addresses the questions of how the perceptual sensitivity of lexical tones and vowels changes in infancy and how different experimental procedures contribute to our understanding. We used three experimental procedures to investigate German-learning infants’ discrimination abilities. In Studies 1 and 3 (Chapters 5 and 7) we used behavioral methods (habituation and familiarization procedures) and in Study 2 (Chapter 6) we measured neural correlates.
Study 1 showed a U-shaped developmental pattern: 6- and 18-month-olds discriminated a lexical tone contrast, but 9-month-olds did not. In addition, we found an effect of experimental procedure: infants discriminated the tone contrast at 6 months in a habituation procedure but not in a familiarization procedure. In Study 2, we observed mismatch responses (MMRs) to a non-native tone contrast and a native-like vowel contrast in 6- and 9-month-olds. In 6-month-olds, both contrasts elicited positive MMRs. At 9 months, the vowel contrast elicited an adult-like negative MMR, while the tone contrast elicited a positive MMR. Study 3 demonstrated a change in perceptual sensitivity to a vowel contrast between 6 and 9 months. In contrast to the 6-month-old infants, the 9-month-old infants discriminated the tested vowel contrast asymmetrically.
We suggest that the shifts in perceptual sensitivity between 6 and 9 months are functional rather than perceptual. In the case of lexical tone discrimination, infants may have already learned by 9 months of age that pitch is not relevant at the lexical level in German, since the infants in Study 1 showed no perceptual sensitivity to the contrast tested. Nevertheless, the brain responded to the contrast, especially since pitch differences are also part of the German intonation system (Gussenhoven, 2004). The role of the intonation system in pitch discrimination could be supported by the recovery of behavioral discrimination at 18 months of age, as well as behavioral and neural discrimination in German-speaking adults.
History of Forgetfulness
(2021)
Introduction
Older patients with heart valve disease are increasingly treated with transcatheter aortic valve implantation (TAVI) or the MitraClip® procedure. As a result, the population of very elderly patients in cardiac rehabilitation is growing steadily. The functional health of these patients is influenced by frequently occurring so-called geriatric syndromes such as multimorbidity, malnutrition, frailty, or falls. Limited mobility and malnutrition in particular are important predictors of patient prognosis after TAVI.
Established procedures for assessing the physical capacity of cardiac rehabilitation patients are exercise ergometry and the 6-minute walk test. However, almost half of very elderly patients are unable to perform exercise ergometry. To date, cardiac rehabilitation has not included a differentiated assessment of functional status with regard to mobility, strength, and balance that would allow the geriatric syndromes to be evaluated individually. Moreover, no assessments of nutritional status are used.
The aim of the present work was therefore to determine the functional and nutritional status of older patients in cardiac rehabilitation using suitable assessments.
Methods
Between October 2018 and June 2019, patients aged 75 years or older participated in the study after TAVI, atrioventricular intervention with the MitraClip® procedure (AVI), or percutaneous coronary intervention (PCI). At the beginning of cardiac rehabilitation, sociodemographic data, echocardiographic parameters (e.g., left and right ventricular ejection fraction, heart rhythm), and comorbidities (e.g., diabetes mellitus, renal insufficiency, orthopedic conditions) were collected to describe the patient population. In addition, the patients' frailty was assessed with the index of Stortecky et al., which comprises the components cognition, mobility, nutrition, and activities of daily living.
The 6-minute walk test was used to determine the patients' physical capacity. Mobility was assessed with the Timed Up and Go test, gait speed with the Gait Speed Test, and grip strength with the Hand Grip Test.
To objectify balance, a force plate (unipedal and bipedal stance with eyes open and closed) was tried out, which had not previously been used with older rehabilitation patients.
Nutritional status was assessed with the Mini Nutritional Assessment-Short Form and with nutrition-related laboratory parameters (hemoglobin, serum albumin, total protein concentration).
We evaluated the suitability of the assessments using the following criteria: feasibility (performable in ≥ 95% of patients), safety (< 5% falls or other adverse events), and the Pearson correlations between the functional tests and the gold-standard 6-minute walk test, as well as between the laboratory parameters and the Mini Nutritional Assessment-Short Form.
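The Pearson-correlation criterion for validating a functional test against the 6-minute walk test can be sketched minimally as follows; the gait-speed and walk-distance values are invented for illustration, not taken from the study:

```python
# Minimal sketch: Pearson correlation between a candidate functional test
# (gait speed) and the gold-standard 6-minute walk distance. Invented data.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented: gait speed (m/s) vs. 6-minute walk distance (m)
gait_speed = [0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3]
walk_dist  = [210, 280, 310, 350, 380, 430, 460]
print(round(pearson_r(gait_speed, walk_dist), 3))
```

A high positive r, as reported below for the Gait Speed Test, indicates that the shorter test tracks the gold standard closely.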
Results
A total of 124 patients (82 ± 4 years, 48% women, 5 ± 2 comorbidities, 9 ± 3 medications) after TAVI (n = 59), AVI (n = 21), and PCI (n = 44) were consecutively enrolled in the study.
About two-thirds of the overall population were classified as frail, with a mean score of 2.9 ± 1.4. Nearly half of the patients showed limited physical capacity, reflected in a reduced 6-minute walk distance (48% < 350 m), as well as limited mobility in the Timed Up and Go test (55% > 10 s). The mean walk distance was 339 ± 131 m and the mean Timed Up and Go time was 11.4 ± 6.3 s. Furthermore, a quarter of the patients showed reduced gait speed (< 0.8 m/s) and about 35% of them showed reduced grip strength (women/men < 16/27 kg). On average, a speed of 1.0 ± 0.2 m/s was achieved in the Gait Speed Test and a grip strength of 24 ± 9 kg in the Hand Grip Test. A risk of malnutrition was found in 38% of the patients (< 12 points), with a mean score of 11.8 ± 2.2 in the Mini Nutritional Assessment-Short Form.
There were no statistically significant differences between the subpopulations in the results of the functional assessments. Regarding nutritional status, however, patients after AVI had a statistically significantly lower Mini Nutritional Assessment-Short Form score (10.3 ± 3.0 points) than patients after TAVI (12.0 ± 1.8 points) and PCI (12.1 ± 2.1 points); about 57% of the patients after AVI, 38% after TAVI, and 50% after PCI showed a risk of malnutrition.
With the exception of the force plate tests, all assessments were feasible and safe. While 86% of the patients were able to perform the bipedal stance with eyes closed on the force plate, almost reaching the 95% threshold, the unipedal stance, feasible in only 12% of the measurements, fell far short of it.
The Gait Speed Test (r = 0.79), the Timed Up and Go test (r = 0.68), and the Hand Grip Test (r = 0.33) correlated significantly with the 6-minute walk test; hemoglobin (r = 0.20) and albumin (r = 0.24) correlated with the Mini Nutritional Assessment-Short Form.
Conclusion
Beyond their multimorbidity and polypharmacy, the patients studied showed above all limited mobility and a risk of malnutrition, with the AVI subpopulation particularly affected.
To meet the needs of very elderly rehabilitation patients after catheter-based intervention, individualized treatment of the specific deficits is required, with particular consideration of comorbidities and geriatric cofactors. Owing to its multidisciplinary approach, cardiac rehabilitation already fulfills the prerequisites for treating very elderly patients according to their needs; however, assessments to identify the patients' individual deficits are lacking.
The Gait Speed Test, the Timed Up and Go test, and the Hand Grip Test should therefore be implemented in the routine clinical practice of cardiac rehabilitation in order to assess the physical function and capacity of older patients in detail. Combining these assessments with the Mini Nutritional Assessment-Short Form makes it possible to identify the patients' individual functional and nutritional needs during rehabilitation and, with suitable measures, to mitigate the further development of geriatric syndromes.
Rehabilitationspädagogik
(2021)
Rehabilitationspädagogik (rehabilitation pedagogy) is a younger, independent hybrid discipline within the human sciences. In line with Book Nine of the German Social Code (SGB IX), its theory building starts from the longer-term consequences of an illness or a biological impairment. Conceptually, it is oriented, for example, toward the UN Convention on the Rights of Persons with Disabilities (UN-CRPD) and the International Classification of Functioning, Disability and Health (ICF), and further toward the concepts of K.-F. Wessel's human ontogenetics, in particular the whole human being, the hierarchy of competencies, sensitive phases, and sovereignty.
Rehabilitation pedagogy is part of comprehensive health rehabilitation and a daughter discipline of general pedagogy. The guiding goal of the rehabilitation-pedagogical process is to support a person's full participation in their individual spheres of life through rehabilitation-pedagogical means, methods, and forms of organization.
Using hermeneutic methods, the dissertation engages critically and constructively with the GDR rehabilitation pedagogy of K.-P. Becker and his collective of authors. It presents an up-to-date, continuing theory of rehabilitation pedagogy that takes account of the UN-CRPD, the ICF, and SGB IX, and offers a new perspective on rehabilitation pedagogy from both a historical and a contemporary viewpoint.
Iron-sulfur (Fe-S) clusters are essential enzyme cofactors. The most common and stable clusters found in nature are [2Fe-2S] and [4Fe-4S]. They are involved in crucial biological processes like respiration, gene regulation, protein translation, replication, and DNA repair in prokaryotes and eukaryotes. In Escherichia coli, Fe-S clusters are essential for molybdenum cofactor (Moco) biosynthesis, a ubiquitous and highly conserved pathway. The first step of Moco biosynthesis is catalyzed by the MoaA protein, which produces cyclic pyranopterin monophosphate (cPMP) from 5'GTP. MoaA is a [4Fe-4S] cluster-containing radical S-adenosyl-L-methionine (SAM) enzyme. The focus of this study was to investigate Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions, using E. coli as a model organism. Nitrate and TMAO respiration usually occur under anaerobic conditions, where oxygen is depleted. Under these conditions, E. coli uses nitrate and TMAO as terminal electron acceptors. Previous studies revealed that Fe-S cluster insertion is performed by Fe-S cluster carrier proteins. In E. coli, these proteins have been identified as A-type carrier (ATC) proteins through phylogenomic and genetic studies. So far, three of them have been characterized in detail in E. coli, namely IscA, SufA, and ErpA. This study shows that ErpA and IscA are involved in Fe-S cluster insertion into MoaA under nitrate and TMAO respiratory conditions. ErpA and IscA can partially replace each other in their role of providing [4Fe-4S] clusters for MoaA. SufA is not able to replace the functions of IscA or ErpA under nitrate respiratory conditions.
Nitrate reductase is a molybdoenzyme that coordinates Moco and Fe-S clusters. Under nitrate respiratory conditions, the expression of nitrate reductase is significantly increased in E. coli. Nitrate reductase is encoded by the narGHJI genes, whose expression is regulated by the transcriptional regulator fumarate and nitrate reduction (FNR). The activation of FNR under conditions of nitrate respiration requires one [4Fe-4S] cluster. In this part of the study, we analyzed the insertion of Fe-S clusters into FNR for the expression of the narGHJI genes in E. coli. The results indicate that ErpA is essential for the FNR-dependent expression of the narGHJI genes, a role that can be partially replaced by IscA and SufA when they are produced in sufficient amounts under the conditions tested. This observation suggests that ErpA indirectly regulates nitrate reductase expression by inserting Fe-S clusters into FNR.
Most molybdoenzymes are complex multi-subunit, multi-cofactor enzymes that coordinate Fe-S clusters, which function as electron transfer chains for catalysis. In E. coli, periplasmic aldehyde oxidoreductase (PaoABC) is a heterotrimeric molybdoenzyme that consists of flavin, two [2Fe-2S] clusters, one [4Fe-4S] cluster, and Moco. In the last part of this study, we investigated the insertion of Fe-S clusters into PaoABC. The results show that SufA and ErpA are involved in inserting the [4Fe-4S] and [2Fe-2S] clusters into PaoABC, respectively, under aerobic respiratory conditions.
The optical properties of chromophores, especially organic dyes and optically active inorganic molecules, are determined by their chemical structures, the surrounding media, and their excited-state behavior. The classical optical go-to techniques for spectroscopic investigations are absorption and luminescence spectroscopy. While both are powerful and easy-to-apply spectroscopic methods, the limited time resolution of luminescence spectroscopy and its reliance on luminescent properties can make its application complex or even impossible in certain cases. This can be the case when the investigated molecules no longer luminesce due to quenching effects, or when they were never luminescent in the first place. In those cases, transient absorption spectroscopy is an excellent and much more sophisticated technique for investigating such systems. This pump-probe laser-spectroscopic method is well suited for mechanistic investigations of luminescence quenching phenomena and photoreactions, owing to its extremely high time resolution in the femto- and picosecond ranges, in which many intermediate or transient species of a reaction can be identified and their kinetic evolution observed. Furthermore, it does not rely on the samples being luminescent, because the sample is actively probed after excitation. In this work it is shown that transient absorption spectroscopy made it possible to identify the luminescence quenching mechanisms, and thus the luminescence quantum yield losses, of the organic dye classes O4-DBD and S4-DBD and of the pyridylanthracenes. The population of their triplet states could be identified as the mechanism competing with their luminescence. While the good luminophores of the O4-DBD class showed only minor losses, the luminescence of the S4-DBD dyes was almost entirely quenched by this process. For the pyridylanthracenes, this phenomenon is present in both the protonated and unprotonated forms and moderately affects the luminescence quantum yield.
Moreover, the majority of the quenching losses in the protonated forms are caused by additional non-radiative processes introduced by protonation of the pyridyl rings. Furthermore, transient absorption spectroscopy can be applied to investigate the quenching of uranyl(VI) luminescence by chloride and bromide. The reduction of the halides by excited uranyl(VI) leads to the formation of dihalide radical anions X2•−. This excited-state redox process is thus identified as the quenching mechanism for both halides; being diffusion-limited, it can be suppressed by cryogenically freezing the samples or by observing these interactions in media with a lower dielectric constant, such as ACN and acetone.
The mitochondrial chaperone complex HSP60/HSP10 facilitates mitochondrial protein homeostasis by folding more than 300 mitochondrial matrix proteins. It has been shown previously that HSP60 is downregulated in the brains of type 2 diabetic (T2D) mice and patients,
causing mitochondrial dysfunction and insulin resistance. As HSP60 is also decreased in peripheral tissues of T2D animals, this thesis investigated the effect of an overall reduction of HSP60 on the development of obesity and associated co-morbidities.
To this end, both female and male C57Bl/6N control (i.e. without further alterations in their genome, Ctrl) and heterozygous whole-body Hsp60 knock-out (Hsp60+/-) mice, which exhibit a 50 % reduction of HSP60 in all tissues, were fed a normal chow diet (NCD) or a high-fat diet (HFD, 60 % of calories from fat) for 16 weeks and were subjected to extensive metabolic phenotyping, including indirect calorimetry, NMR spectroscopy, insulin, glucose, and pyruvate tolerance tests, vena cava insulin injections, as well as histological and molecular analyses.
Interestingly, NCD feeding did not result in any striking phenotype, apart from a mild increase in energy expenditure in Hsp60+/- mice. Exposure to a HFD, however, revealed increased body weight due to higher muscle mass in female Hsp60+/- mice, with a simultaneous decrease in energy expenditure. Additionally, these mice displayed decreased fasting glycemia. Conversely, male Hsp60+/- mice showed lower body weight gain than controls due to decreased fat mass, and an increased energy expenditure that was, strikingly, independent of lean mass. Further, only male Hsp60+/- mice displayed improved HOMA-IR and Matsuda
insulin sensitivity indices.
Despite the opposite phenotypes with regard to body weight development, Hsp60+/- mice of both sexes show a significantly higher cell number, as well as a reduction in adipocyte size, in the subcutaneous and gonadal white adipose tissue (sc/gWAT). Curiously, this adipocyte hyperplasia, usually associated with positive aspects of WAT function, is disconnected from metabolic improvements, as the gWAT of male Hsp60+/- mice shows mitochondrial dysfunction, oxidative stress, and insulin resistance. Transcriptomic analysis of gWAT shows an upregulation of genes involved in macroautophagy. Consistently, expression of microtubule-associated protein 1A/1B light chain 3B (LC3), a protein marker of autophagy, and directly measured lysosomal activity are increased in the gWAT of male Hsp60+/- mice.
In summary, this thesis revealed a novel gene-nutrient interaction. The reduction of the crucial chaperone HSP60 did not have large effects in mice fed a NCD, but impacted metabolism during diet-induced obesity (DIO) in a sex-specific manner, where, despite opposing body weight and
body composition phenotypes, both female and male Hsp60+/- mice show signs of protection from high fat diet-induced systemic insulin resistance.
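The HOMA-IR and Matsuda indices mentioned above are computed from fasting and OGTT glucose/insulin values via standard formulas. The sketch below uses the conventional definitions; the units and example values are assumptions for illustration, not data from this thesis:

```python
import math

def homa_ir(g0_mg_dl: float, i0_uU_ml: float) -> float:
    # Homeostatic Model Assessment of Insulin Resistance (standard formula):
    # fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405
    return g0_mg_dl * i0_uU_ml / 405.0

def matsuda(g0: float, i0: float, g_mean: float, i_mean: float) -> float:
    # Matsuda whole-body insulin sensitivity index from an OGTT:
    # 10000 / sqrt(G0 * I0 * Gmean * Imean),
    # glucose in mg/dL, insulin in uU/mL (fasting and OGTT means)
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical example: fasting glucose 90 mg/dL, fasting insulin 9 uU/mL
print(homa_ir(90.0, 9.0))            # 2.0
print(matsuda(90.0, 9.0, 120.0, 40.0))
```

Higher HOMA-IR values indicate more insulin resistance, while a higher Matsuda index indicates better insulin sensitivity, which is why the two are typically reported together.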
Negotiations between buyers and suppliers directly influence a company’s costs, revenue, and consequently its profits. The outcome of these negotiations depends heavily on the power position of the companies as well as of the negotiators. Across three empirical articles, the author demonstrates how one’s own power position can be identified and improved, and subsequently used to maximize profits in negotiations between sellers and buyers. In the first paper, the sources underlying buyer and supplier power are identified and weighted. The results show the impact of each individual source on buyer and supplier power. The number of suppliers available for one product is by far the most important source of power for both sides. The results indicate that a higher number of suppliers leads to a better power position for the buyer and simultaneously to an inferior power position for a single supplier. The second paper examines the impact of the number of suppliers on the outcome of buyer-seller negotiations, taking into account the innovation level of the products purchased. The results of the second study, which are based on real negotiation data from a German car manufacturer, indicate that the number of available suppliers has a stronger impact on the negotiation outcome for innovative products than for functional, less innovative ones. The third paper analyzes how the ability to take the counterpart’s perspective (perspective-taking ability) influences the negotiation outcome. This relationship is examined for different power positions. The results indicate that a negotiator’s high perspective-taking ability leads to a less favorable negotiation outcome compared to low perspective-taking ability. Simultaneously, high perspective-taking ability causes a more positive perception of the conducted negotiation than low perspective-taking ability.
This contradictory effect of perspective-taking ability carries the risk that buyers and suppliers assess an unfavorable outcome as positive. Finally, the results of the papers are summarized and discussed. The dissertation concludes with implications for practice, limitations of the work, and approaches for future research.
The self-assembly of amphiphilic polymers in aqueous systems is important for a plethora of applications, in particular in the field of cosmetics and detergents. When thermoresponsive blocks are introduced, the aggregation behavior of these polymers can be controlled by changing the temperature. While long confined to simple diblock copolymer systems, the complexity, and thus the versatility, of such smart systems can be greatly enlarged once designed monomers, specific block sizes, different architectures, or additional functional groups such as hydrophobic stickers are implemented. In this work, the structure-property relationship of such thermoresponsive amphiphilic block copolymers was investigated by varying their structure systematically. The block copolymers were generally composed of a permanently hydrophobic sticker group, a permanently hydrophilic block, and a thermoresponsive block exhibiting lower critical solution temperature (LCST) behavior. While the hydrophilic block consisted of N,N-dimethylacrylamide (DMAm), different monomers were used for the thermoresponsive block, such as N-n-propylacrylamide (NPAm), N-isopropylacrylamide (NiPAm), N,N-diethylacrylamide (DEAm), N,N-bis(2-methoxyethyl)acrylamide (bMOEAm), or N-acryloylpyrrolidine (NAP), with reported LCSTs of 25, 32, 33, 42, and 56 °C, respectively. The block copolymers were synthesized by successive reversible addition-fragmentation chain transfer (RAFT) polymerization. For the polymers with the basic linear, the twinned hydrophobic, and the symmetrical quasi-miktoarm architectures, this yielded well-defined block sizes and end groups as well as narrow molar mass distributions (Đ ≤ 1.3). More complex architectures, such as the twinned thermoresponsive and the non-symmetrical quasi-miktoarm ones, were achieved by combining RAFT polymerization with a second technique, namely atom transfer radical polymerization (ATRP) or single unit monomer insertion (SUMI), respectively.
The obtained block copolymers showed well-defined block sizes, but due to the complexity of these reaction paths, the dispersities were generally higher (Đ ≤ 1.8) and some end groups were lost.
The thermoresponsive behavior of the block copolymers was investigated by turbidimetry and dynamic light scattering (DLS). Below the phase transition temperature, the polymers were soluble in water and small micellar structures were visible. Above the phase transition temperature, however, the aggregation behavior depended strongly on the architecture and the chemical structure of the thermoresponsive block. Thermoresponsive blocks comprising PNAP and PbMOEAm with DPn = 40 showed no cloud point (CP), since their already high LCSTs were further increased by the attached hydrophilic block. Depending on the architecture as well as on the block size, block copolymers with PNiPAm, PDEAm, and PNPAm showed different CPs. Large aggregates were visible for block copolymers with PNiPAm and PDEAm above their CP. For PNPAm-containing block copolymers, the phase transition was very sensitive to the architecture, resulting in either small or large aggregates.
In addition, fluorescence studies were performed using PDMAm and PNiPAm homo- and block copolymers with linear architecture, functionalized with complementary fluorescent dyes introduced at the opposite chain ends. The thermoresponsive behavior was studied in pure aqueous solution as well as in an oil-in-water (o/w) microemulsion. The findings indicate that the block copolymer behaves as a polymeric surfactant at low temperatures, with one relatively small hydrophobic end group and an extended hydrophilic chain forming ‘hairy micelles’, similar to the other synthesized architectures. Above the phase transition temperature of the PNiPAm block, however, the copolymer behaves as an associative telechelic polymer with two non-symmetrical hydrophobic end groups, which do not mix. Thus, instead of a network of bridged ‘flower micelles’, large dynamic aggregates are formed. These are connected alternately by the original micellar cores and by clusters of the collapsed PNiPAm blocks. This type of bridged micelle is even more favored in the o/w microemulsion than in pure aqueous solution.
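The dispersity Đ quoted above is the ratio of the weight-average to the number-average molar mass, Mw/Mn, as obtained e.g. from GPC traces. A minimal sketch of this standard calculation, with hypothetical example values:

```python
def dispersity(masses, counts):
    """Compute Mn, Mw and dispersity D = Mw/Mn from a discrete
    molar mass distribution (masses Mi in g/mol, counts Ni)."""
    total_n = sum(counts)
    total_nm = sum(n * m for n, m in zip(counts, masses))
    # Number-average molar mass: Mn = sum(Ni*Mi) / sum(Ni)
    mn = total_nm / total_n
    # Weight-average molar mass: Mw = sum(Ni*Mi^2) / sum(Ni*Mi)
    mw = sum(n * m * m for n, m in zip(counts, masses)) / total_nm
    return mn, mw, mw / mn

# Hypothetical bimodal example: equal numbers of 100 and 200 g/mol chains
mn, mw, d = dispersity([100.0, 200.0], [1, 1])
print(mn, mw, d)  # Mn = 150.0, Mw ~ 166.67, D ~ 1.11
```

A perfectly uniform polymer gives Đ = 1; the narrow distributions reported above (Đ ≤ 1.3) are characteristic of well-controlled RAFT polymerizations.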
In the last decades, notable progress has been made in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons these solvers are so fast is that their internals exploit structural properties of instances. This thesis deals with the well-studied structural property treewidth, which measures how close an instance is to being a tree. In fact, many problems are solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), which
allows us to precisely monitor the treewidth when reducing from one problem to another problem. This new reduction type will be the basis for a long-open lower bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that exploit treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined recursively, and it uses Sat solvers for solving subproblems. The resulting solver turns out to be quite competitive for two canonical counting problems related to Sat.
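The phrase "solvable in polynomial time when parameterized by treewidth" refers to dynamic programming over a tree decomposition. As a minimal, generic illustration of that principle, not the solver developed in this thesis, the sketch below counts independent sets on a graph of treewidth 1, i.e. a tree, by combining child tables bottom-up; the function name and input representation are hypothetical:

```python
import sys

def count_independent_sets(adj, root=0):
    """Count the independent sets of a tree.
    adj: adjacency list, adj[v] = list of neighbours of v."""
    sys.setrecursionlimit(10000)

    def dp(v, parent):
        # inc / exc: number of independent sets of v's subtree
        # with v included / excluded, computed from the children's tables.
        inc, exc = 1, 1
        for w in adj[v]:
            if w == parent:
                continue
            wi, we = dp(w, v)
            inc *= we        # v included: each child must be excluded
            exc *= wi + we   # v excluded: each child is free
        return inc, exc

    inc, exc = dp(root, -1)
    return inc + exc

# Path 0-1-2: the independent sets are {}, {0}, {1}, {2}, {0,2}
print(count_independent_sets([[1], [0, 2], [1]]))  # 5
```

For general graphs, the same table-combining scheme runs over the bags of a tree decomposition, with running time exponential only in the treewidth, not in the instance size, which is what makes treewidth-aware solvers competitive on structured instances.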
The starting point of the dissertation is the question of why there are relatively few female auditors (Wirtschaftsprüferinnen) in Germany. According to the membership statistics of the Wirtschaftsprüferkammer (Chamber of Public Accountants) as of January 1, 2020, the proportion of women in the profession is around 17 %. The relevant literature shows that at the career entry level, the gender ratio in the segment of the ten largest audit firms is fairly balanced. However, the proportion of women at the "manager" level, for which a passed professional examination is usually a prerequisite, is already considerably lower and decreases with each further step in the hierarchy. The objective of the dissertation was therefore specified as analyzing the factors that may contribute to the declining relative representation of women in the segment of Germany's ten largest audit firms from the manager level onward (i.e., usually from the threshold of certified auditors). The analysis therefore focuses on the level of experienced audit assistants (seniors), in order to examine in detail this threshold immediately below the manager level.
In addition to evaluating findings from international auditing research, an empirical study was conducted among the seniors of six of the ten largest audit firms in Germany. The empirical results were evaluated by means of descriptive data analysis and examined to determine for which of the previously defined aspects significant gender-specific differences can be observed. For selected aspects, it was also analyzed whether there are differences between female/male seniors with and without children. Overall, gender-specific differences and differences between seniors with and without children were found for numerous aspects. It also becomes apparent that, in addition to the professional situation, individual characteristics and the private environment are important. Within the professional situation, both the perception of the current professional situation and, among other things, the seniors' expectations regarding a possible future manager position, the auditing examination, and further career prospects play a role.
Foresight in networks
(2021)
The goal of this dissertation is to contribute to the corporate foresight research field by investigating capabilities, practices, and challenges particularly in the context of interorganizational settings and networked organizations informed by the theoretical perspectives of the relational view and dynamic capabilities.
Firms face an increasingly complex environment and highly complex product and service landscapes that often require multiple organizations to collaborate on innovations and offerings. Public-private partnerships targeted at supporting this have been introduced by policy-makers in the recent past. One example of such a partnership is the European Institute of Innovation and Technology (EIT) with its multiple Knowledge and Innovation Communities (KICs). The EIT was initiated by the European Commission in 2008 with the ambition of addressing grand societal challenges, driving the innovativeness of European companies, and supporting systemic change. The resulting network organizations are managed similarly to corporations, with managers, boards, and firm-like governance structures. EIT Digital, one of the EIT KICs, is a central case of this work.
Research in this dissertation was based on the expectation that corporate foresight activities will increasingly be embedded in such interorganizational settings and a) can draw on such settings for their own benefit and b) may contribute to shared visions, trust building, and planning in these network organizations. In this dissertation, the EIT Digital (formerly EIT ICT Labs) is a central case, supplemented with insights from three additional cases. I draw on the rich theoretical understanding of the resource-based view, dynamic capabilities, and particularly the relational view to move the discussion in the field of corporate foresight, defined as foresight in organizations in contrast to foresight with a macro-economic perspective, towards a relational understanding. Further, I use and revisit Rohrbeck's Maturity Model for the Future Orientation of Firms as a conceptual frame for corporate foresight in interorganizational settings. The analyses, available as four individual publications complemented by one additional chapter, are designed as exploratory case studies based on multiple data sources, including an interview series with 49 persons, two surveys (N=54, n=20), three supplementary interviews, access to key documents and presentations, and observation through participation in meetings and activities of the EIT Digital. This research setting allowed me to contribute to corporate foresight research and practice by 1) integrating relational constructs primarily drawn from the relational view and dynamic capabilities research into the corporate foresight research stream, 2) exploring and understanding the capabilities required for corporate foresight in interorganizational and networked organizations, 3) discussing and extending the Maturity Model for network organizations, and 4) supporting individual organizations in tying their foresight systems effectively to networked foresight systems.
The dissertation pursues the fundamental research question of how the Liberal Democratic Party of Germany (LDPD) filled the role ascribed to it in everyday political life at the local level, how it related to the GDR system, and what room for maneuver existed and was used. Its local party work from the building of the Berlin Wall into the 1980s has so far remained largely unexamined by research, as interest has focused on the ruling SED or on the LDPD's rebellious tendencies in the 1940s and late 1980s. The present study has taken a first step toward examining the liberal party at the district and local level and helps to close these gaps. Using the case studies of Gotha, the city of Erfurt, and Eisenach, the dissertation sheds light on the internal party organization, the behavior and motivations of the members, and, drawing on network-theoretical approaches, the interconnections of the local party functionaries who intervened in municipal work on the ground. Informational and situational reports as well as correspondence and organizational documents provided insight into self-images, activity, topics, and aspects of communication. What becomes clear are the strict control mechanisms within the party as well as the tension between clear support for SED policy and individually self-willed behavior.
Using the analytical category of "Eigen-Sinn" (self-will) as a form of the multi-layered appropriation of structures of rule, as distinct from the concepts of opposition and resistance, it is shown that, while LDPD members in the districts studied took liberties in voicing criticism and largely determined the degree of their activity themselves, they did not touch the fundamental questions of the system. The actors inhabited many different lifeworlds, depending on field of activity, motivation, and environment, which led to different tactics and forms of Eigen-Sinn among rank-and-file members and local functionaries. Through their municipal involvement, however, the Liberal Democrats in the communities attended to the most pressing supply problems and, by actively recruiting their members for work programs and competitions, ensured the LDPD's participation in remedying the worst deficiencies in public spaces. In doing so, they contributed to dampening general discontent and indirectly strengthened the GDR system. In return, the SED granted them limited and clearly defined room for maneuver. Since most active Liberal Democrats were professionally anchored in the economic sphere, a great deal of practical knowledge developed, with which the LDPD associations studied intervened quite confidently in municipal processes within the latitude they were granted. They thus played an important role in stabilizing the system over the long period between the building and the fall of the Wall.
The mixture of distancing, acceptance, contradiction, and obedience makes the party base and the active party functionaries at the lower level a very fascinating field of study, one that is far from exhausted.
The dissertation focuses on synchronic and diachronic variation in the use of the French causal conjunction parce que and on its interaction with the extralinguistic variables of age and socio-professional category. Building on previous macro-diachronic studies, which provide evidence that the conjunction has undergone and continues to undergo a process of pragmaticalization, a study corpus of 56 interviews was extracted from the diachronically distinct corpora ESLO1, ESLO2, and LangAge. This study corpus served as the basis for panel studies and trend studies designed to verify the pragmaticalization of parce que from a micro-diachronic point of view. In addition to the diachronic perspective, a synchronic perspective was adopted in order to attribute the variation in the use of the conjunction to a diachronic phenomenon such as age grading or apparent time. Based on the theory of construction grammar, constructions containing parce que were annotated bottom-up and categorized into five degrees of pragmaticality (pra0–pra4). These were then quantified and analyzed as a function of the (male) speakers' year of birth and socio-professional category using several R models such as ctrees, trees, lm, hclust, and kmeans.
The frequency development of the degrees of pragmaticality confirmed the pragmaticalization hypothesis within a micro-diachronic framework. Moreover, a quantitative decline in the use of constructions at the non- or less pragmaticalized pole (pra0, pra1) could be observed, while uses of higher degrees of pragmaticalization (pra2–pra4) remained comparatively stable over 40 years.
Although no significant change emerged for pra2, its development among middle-aged speakers, as well as the synchronic pattern as a function of age (or year of birth) and socio-professional category, nevertheless pointed toward an underlying diachronic variation. This could be interpreted as a phenomenon of age grading catalyzed by the social transformations of the 1960s and 1970s. No clear tendency could be determined for the uses situated closer to the pragmatic pole (pra3 and pra4).
The results challenge diachronic concepts such as age grading and apparent time by questioning the simplicity of the underlying mechanisms as well as the common methods of identifying them.
Fibroblast growth factor 21 (FGF21) is known as a pivotal regulator of glucose and lipid metabolism. As such, it is considered beneficial and has even been labelled a longevity hormone. Nevertheless, recent observational studies have shown that FGF21 is increased at higher age, with possible negative effects such as loss of lean and bone mass as well as decreased survival. Hepatic FGF21 secretion can be induced by various nutritional stimuli such as starvation, high carbohydrate and fat intake, and protein deficiency. So far, it is still unclear whether the FGF21 response to different macronutrients is altered at older age. An altered response could help explain the higher FGF21 concentrations found at older age. In this publication-based doctoral dissertation, a cross-sectional study as well as a dietary challenge were conducted to investigate the influence of nutrition on FGF21 concentrations and response in older age. In the cross-sectional study, FGF21 concentrations were assessed in older patients with and without cachexia anorexia syndrome and compared to an older community-dwelling control group. Cachexia anorexia syndrome is a multifactorial syndrome frequently occurring in old age or in the context of an underlying disease. It is characterized by severe involuntary weight loss, loss of appetite (anorexia), and reduced food intake, and therefore represents a state of severe nutrient deficiency, in some aspects similar to starvation. The highest FGF21 concentrations were found in patients with cachexia anorexia syndrome. Moreover, FGF21 was positively correlated with weight loss and loss of appetite. In addition, cachexia anorexia syndrome itself was associated with FGF21 independent of sex, age, and body mass index.
As cachectic patients presumably exhibit protein malnutrition and FGF21 has been proposed as a marker of protein insufficiency, the higher levels of FGF21 in patients with cachexia anorexia syndrome might be partly explained by insufficient protein intake. In order to investigate the acute response of FGF21 to different nutritional stimuli, a dietary challenge with a parallel-group design was conducted. Here, healthy older (65-85 years) and younger (18-35 years) adults were randomized to one of four test meals: a dextrose drink, a high-carbohydrate, a high-fat, or a high-protein meal. Over the course of four hours, postprandial FGF21 concentrations (dynamics) were assessed and the FGF21 response (incremental area under the curve) to each test meal was examined. In a sub-group of older and younger women, the adiponectin response was also investigated, as adiponectin is a known mediator of FGF21 effects on glucose and lipid metabolism. The dietary meal challenge revealed that dextrose and high carbohydrate intake resulted in higher FGF21 concentrations after four hours in older adults. This was partly explained by higher postprandial glucose concentrations in the older group. For high fat ingestion, no age differences were found. For the first time, an acute FGF21 response to high protein intake was shown. Here, protein ingestion resulted in lower FGF21 concentrations in younger compared to older adults. Furthermore, sufficient protein intake on the previous day, according to age-dependent recommendations, was associated with lower FGF21 concentrations in both age groups. The higher FGF21 response to dextrose ingestion resulted in a higher adiponectin response in older women, independent of fat mass, insulin resistance, triglyceride concentrations, inflammation, and oxidative stress. Following the high-fat meal, adiponectin concentrations declined in older women. Adiponectin response was not affected by meal composition in younger women.
In summary, this thesis showed a positive association of FGF21 and cachexia anorexia syndrome with concomitant anorexia in older patients. Regarding the acute FGF21 response, a higher response following dextrose and carbohydrate ingestion was found in older compared with younger subjects. This might be attributed to a higher glucose response in older age. Furthermore, it was shown that the higher FGF21 response after dextrose ingestion possibly contributes to a higher adiponectin response in older women, independent of potential metabolic and inflammatory confounders. Acute protein ingestion resulted in a significant decrease in FGF21 concentrations. Moreover, protein intake of the previous day was inversely associated with fasting FGF21 concentrations. This might explain why FGF21 concentrations are higher in cachexia anorexia syndrome. These results therefore support the role of FGF21 as a sensor of protein restriction.
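The incremental area under the curve (iAUC) used above to quantify the postprandial FGF21 response is commonly computed with the trapezoidal rule on baseline-subtracted concentrations. A minimal sketch; the sample values are hypothetical, and variants that truncate negative areas also exist:

```python
def incremental_auc(times, concentrations):
    """Trapezoidal incremental AUC: area relative to the baseline
    (t = 0) concentration. times and concentrations are parallel lists."""
    baseline = concentrations[0]
    delta = [c - baseline for c in concentrations]
    auc = 0.0
    for i in range(1, len(times)):
        # Trapezoid between consecutive sampling points
        auc += (delta[i - 1] + delta[i]) / 2.0 * (times[i] - times[i - 1])
    return auc

# Hypothetical postprandial curve: baseline 100, peak 120 at t = 1 h
print(incremental_auc([0.0, 1.0, 2.0], [100.0, 120.0, 100.0]))  # 20.0
```

Because the baseline is subtracted, the iAUC isolates the meal-induced rise from inter-individual differences in fasting concentrations, which is why it is the standard summary measure in such meal challenges.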
This thesis deals with the synthesis of disulfides, the thiol-disulfide metathesis reaction as a means of functionalizing polymers, and the synthesis of polydisulfides. The first part of the thesis investigates the aminolysis of RAFT polymers and the dependence of polymer-polymer disulfide formation on molar mass. By recording reaction kinetics via gel permeation chromatography (GPC), it was found that the longer the polymer chains, the less disulfide polymer coupling occurs. RAFT polymers are often used to modify the RAFT polymer end group after polymerization or to functionalize it in a chemical reaction. Here, the aminolysis can be carried out in the presence of short-chain disulfides, such as cystine, in order to completely suppress the formation of polymer-polymer disulfides and to obtain an end-group-functionalized polymer. In this reaction, the polymer thiolate formed during aminolysis attacks the short-chain disulfides, and functionalized polymers are formed. A polyethylene glycol disulfide was employed to obtain an amphiphilic block copolymer. Polystyrene (PS) was used as the RAFT polymer, and the formation of polystyrene-polyethylene glycol copolymers could be demonstrated. The amphiphilic polymer forms vesicles in aqueous media. The surface of the vesicles could be refunctionalized by means of thiol-disulfide metathesis. The aminolysis of PS RAFT polymers with a polylactide disulfide or a poly(benzyl glutamate) disulfide yielded polystyrene-block-polyester and polystyrene-block-poly(amino acid) copolymers. The second part of the thesis focuses on the synthesis of polydisulfides and their thermal properties. Various alkyl dithiols were synthesized and polymerized using hydrogen peroxide and triethylamine.
It could be shown that the polymers are semi-crystalline and that the melting point and crystallinity of the polymers increase with increasing alkyl chain length between the disulfide bonds. This system also allows polymer chain extension after polymerization. The degradability of the polydisulfides was demonstrated by the use of thiols under basic conditions.