This paper addresses semantic/pragmatic variability of tag questions in German and makes three main contributions. First, we document the prevalence and variety of question tags in German across three different types of conversational corpora. Second, by annotating question tags according to their syntactic and semantic context, discourse function, and pragmatic effect, we demonstrate the existing overlap and differences between the individual tag variants. Finally, we distinguish several groups of question tags by identifying the factors that influence the speakers’ choices of tags in the conversational context, such as clause type, function, speaker/hearer knowledge, as well as conversation type and medium. These factors provide the limits of variability by constraining certain question tags in German against occurring in specific contexts or with individual functions.
Similar Yet Different
(2020)
The importance of intrinsically disordered late embryogenesis abundant (LEA) proteins in the tolerance to abiotic stresses involving cellular dehydration is undisputed. While structural transitions of LEA proteins in response to changes in water availability are commonly observed and several molecular functions have been suggested, a systematic, comprehensive and comparative study of possible underlying sequence-structure-function relationships is still lacking. We performed molecular dynamics (MD) simulations as well as spectroscopic and light scattering experiments to characterize six members of two distinct, lowly homologous clades of LEA_4 family proteins from Arabidopsis thaliana. We compared structural and functional characteristics to elucidate to what degree structure and function are encoded in LEA protein sequences and complemented these findings with physicochemical properties identified in a systematic bioinformatics study of the entire Arabidopsis thaliana LEA_4 family. Our results demonstrate that although the six experimentally characterized LEA_4 proteins have similar structural and functional characteristics, differences concerning their folding propensity and membrane stabilization capacity during a freeze/thaw cycle are obvious. These differences cannot be easily attributed to sequence conservation, simple physicochemical characteristics or the abundance of sequence motifs. Moreover, the folding propensity does not appear to be correlated with membrane stabilization capacity. Therefore, the refinement of LEA_4 structural and functional properties is likely encoded in specific patterns of their physicochemical characteristics.
Radical reactions have found many applications in carbohydrate chemistry, especially in the construction of carbon–carbon bonds. The formation of carbon–heteroatom bonds has been less intensively studied. This mini-review will summarize the efforts to add heteroatom radicals to unsaturated carbohydrates like endo-glycals. Starting from early examples, developed more than 50 years ago, the importance of such reactions for carbohydrate chemistry and recent applications will be discussed. After a short introduction, the mini-review is divided into sub-chapters according to the heteroatoms halogen, nitrogen, phosphorus, and sulfur. The mechanisms of radical generation by chemical or photochemical processes and the subsequent reactions of the radicals at the 1-position will be discussed. This mini-review cannot cover all aspects of heteroatom-centered radicals in carbohydrate chemistry, but it should provide an overview of the various strategies and future perspectives.
Plants located adjacent to agricultural fields are important for maintaining biodiversity in semi-natural landscapes. To avoid undesired impacts on these plants due to herbicide application on the arable fields, regulatory risk assessments are conducted prior to registration to ensure proposed uses of plant protection products do not present an unacceptable risk. The current risk assessment approach for these non-target terrestrial plants (NTTPs) examines impacts at the individual level as a surrogate approach for protecting the plant community, owing to the inherent difficulties of directly assessing population- or community-level impacts. However, modelling approaches are suitable higher-tier tools to upscale individual-level effects to the community level. IBC-grass is a sophisticated plant community model, which has already been applied in several studies. However, as console application software, it was not deemed sufficiently user-friendly for risk managers and assessors to operate conveniently without prior expertise in ecological models. Here, we present a user-friendly, open-source graphical user interface (GUI) for the application of IBC-grass in regulatory herbicide risk assessment. It facilitates the use of the plant community model for predicting long-term impacts of herbicide applications on NTTP communities. The GUI offers two options to integrate herbicide impacts: (1) dose responses based on current standard experiments (according to testing guidelines) and (2) based on specific effect intensities. Both options represent suitable higher-tier options for future risk assessments of NTTPs as well as for research on the ecological relevance of effects.
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
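In its simplest form, the quality-weighted bias estimate described above reduces to a weighted mean of the matched SR−GR reflectivity differences. The sketch below is ours (function and variable names are illustrative; the actual procedure of Crisologo et al., 2018 additionally handles the SR/GR matching geometry and the quality model itself):

```python
# Quality-weighted estimate of a ground radar's (GR) calibration bias from
# matched spaceborne radar (SR) observations.  Each match contributes its
# reflectivity difference, weighted by a quality index in [0, 1] that
# penalizes, e.g., beam blockage or path-integrated attenuation.

def calibration_bias(sr_dbz, gr_dbz, quality):
    """Quality-weighted mean of SR - GR reflectivity differences (dB)."""
    num = sum(q * (sr - gr) for sr, gr, q in zip(sr_dbz, gr_dbz, quality))
    den = sum(quality)
    if den == 0:
        raise ValueError("all matches have zero quality")
    return num / den

# Example: the third match is heavily blocked/attenuated at the GR, so its
# large apparent difference barely influences the bias estimate.
sr = [30.0, 28.0, 35.0, 31.0]
gr = [27.0, 25.0, 20.0, 28.0]
q = [1.0, 0.9, 0.05, 0.8]
bias = calibration_bias(sr, gr, q)
```

Down-weighting the blocked match keeps the estimate close to the 3 dB offset suggested by the high-quality matches, instead of being dragged upward by the 15 dB outlier.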
German orthography systematically marks all nouns (even other nominalized word classes) by capitalizing their first letter. It is often claimed that readers benefit from the syntactic and semantic information conveyed by the uppercase letter, which makes the processing of sentences easier (e.g., Bock et al., 1985, 1989). In order to test this hypothesis, we asked 54 German readers to read single sentences that were systematically manipulated at a target word (N). In the experimental condition (EXP), we used semantic priming (in the following example: sick → cold) in order to build up a strong expectation of a noun, which was actually an attribute for the following noun (N+1) (translated to English, e.g., “The sick writer had a cold (N) nose (N+1) …”). The sentences in the control condition were built analogously, but word N was purposefully altered (keeping word length and frequency constant) to make its interpretation as a noun extremely unlikely (e.g., “The sick writer had a blue (N) nose (N+1) …”). In both conditions, the sentences were presented either following German standard orthography (Cap) or in lowercase spelling (NoCap). The capitalized nouns in the EXP/Cap condition should then prevent garden-path parsing, as capital letters can be recognized parafoveally. However, in the EXP/NoCap condition, we expected a garden-path effect on word N+1 affecting first-pass fixations and the number of regressions, as the reader realizes that word N is instead an adjective. As the control condition does not include a garden path, we expected to find (small) effects of the violation of the orthographic rule in the CON/NoCap condition, but no garden-path effect. As a global result, it can be stated that reading sentences in which nouns are not marked by a majuscule slows a native German reader down significantly, but from an absolute point of view, the effect is small.
Compared with other manipulations (e.g., transpositions or substitutions), a lowercase letter still represents the correct allograph in the correct position without affecting phonology. Furthermore, most German readers do have experience with other alphabetic writing systems that lack consistent noun capitalization, and in (private) digital communication lowercase nouns are quite common. Although our garden-path sentences did not show the desired effect, we found an indication of grammatical pre-processing enabled by the majuscule in the regularly spelled sentences: in the case of high noun frequency, we located parafovea-on-fovea effects post hoc, i.e., longer fixation durations, on the attributive adjective (word N). These benefits of capitalization could only be detected under specific circumstances. In other cases, we conclude that longer reading durations are mainly the result of a disturbance in readers' habituation when the expected capitalization is missing.
Commentary
(2020)
Recent studies have suggested that musical rhythm perception ability can affect the phonological system. The most prevalent causal account for developmental dyslexia is the phonological deficit hypothesis. As rhythm is a subpart of phonology, we hypothesized that reading deficits in dyslexia are associated with rhythm processing in speech and in music. In a rhythmic grouping task, adults with diagnosed dyslexia and age-matched controls listened to speech streams with syllables alternating in intensity, duration, or neither, and indicated whether they perceived a strong-weak or weak-strong rhythm pattern. Additionally, their reading and musical rhythm abilities were measured. Results showed that adults with dyslexia had lower musical rhythm abilities than adults without dyslexia. Moreover, lower musical rhythm ability was associated with lower reading ability in dyslexia. However, speech grouping by adults with dyslexia was not impaired when musical rhythm perception ability was controlled: like adults without dyslexia, they showed consistent preferences. Nevertheless, rhythmic grouping was predicted by musical rhythm perception ability, irrespective of dyslexia. The results suggest associations among musical rhythm perception ability, speech rhythm perception, and reading ability. This highlights the importance of considering individual variability to better understand dyslexia and raises the possibility that musical rhythm perception ability is a key to phonological and reading acquisition.
Background
Multi-component cardiac rehabilitation (CR) is performed to achieve an improved prognosis, superior health-related quality of life (HRQL) and occupational resumption through the management of cardiovascular risk factors, as well as improvement of physical performance and patients’ subjective health. Out of a multitude of variables gathered at CR admission and discharge, we aimed to identify predictors of returning to work (RTW) and HRQL 6 months after CR.
Design
Prospective observational multi-centre study, enrolment in CR between 05/2017 and 05/2018.
Method
Besides general data (e.g. age, sex, diagnoses), parameters of risk factor management (e.g. smoking, hypertension), physical performance (e.g. maximum exercise capacity, endurance training load, 6-min walking distance) and patient-reported outcome measures (e.g. depression, anxiety, HRQL, subjective well-being, somatic and mental health, pain, lifestyle change motivation, general self-efficacy, pension desire and self-assessment of the occupational prognosis using several questionnaires) were documented at CR admission and discharge. These variables (at both measurement times and as changes during CR) were analysed using multiple linear regression models regarding their predictive value for RTW status and HRQL (SF-12) six months after CR.
Results
Out of 1262 patients (54±7 years, 77% men), 864 patients (69%) returned to work. Predictors of failed RTW were primarily the desire to receive a pension (OR = 0.33, 95% CI: 0.22–0.50) and a negative self-assessed occupational prognosis (OR = 0.34, 95% CI: 0.24–0.48) at CR discharge, acute coronary syndrome (OR = 0.64, 95% CI: 0.47–0.88) and comorbid heart failure (OR = 0.51, 95% CI: 0.30–0.87). High educational level, stress at work and physical and mental HRQL were associated with successful RTW. HRQL was determined predominantly by patient-reported outcome measures (e.g. pension desire, self-assessed health prognosis, anxiety, physical/mental HRQL/health, stress, well-being and self-efficacy) rather than by clinical parameters or physical performance.
Conclusion
Patient-reported outcome measures predominantly influenced return to work and HRQL in patients with heart disease. Therefore, the multi-component CR approach focussing on psychosocial support is crucial for subjective health prognosis and occupational resumption.
Soils in Germany are commonly low in selenium; consequently, a sufficient dietary supply is not always ensured. Whether the supply is adequate is estimated from the optimal effect range of biomarkers, which often reflects the physiological requirement. Preceding epidemiological studies indicate that low selenium serum concentrations could be related to cardiovascular diseases. Risk factors for cardiovascular diseases include, inter alia, physical inactivity, overweight, and disadvantageous eating habits. In order to assess whether these risk factors can be modulated, a cardio-protective diet comprising fixed menu plans combined with physical exercise was applied in the German MoKaRi (modulation of cardiovascular risk factors) intervention study. We analyzed serum samples of the MoKaRi cohort (51 participants) for total selenium, GPx activity, and selenoprotein P at different timepoints of the study (0, 10, 20, 40 weeks) to explore the suitability of these selenium-associated markers as indicators of selenium status. Overall, the time-dependent fluctuations in serum selenium concentration suggest a successful change in nutritional and lifestyle behavior. Compared to baseline, a pronounced increase in GPx activity and selenoprotein P was observed, while serum selenium decreased in participants with initially adequate serum selenium content. SELENOP concentration showed a moderate positive monotonic correlation (r = 0.467, p < 0.0001) to total Se concentration, while only a weak linear relationship was observed for GPx activity versus total Se concentration (r = 0.186, p = 0.021). Evidently, other factors apart from the available Se pool must have an impact on the GPx activity, leading to the conclusion that, without having identified these factors, GPx activity should not be used as a status marker for Se.
Purpose: Psychosocial variables are known risk factors for the development and chronification of low back pain (LBP). Psychosocial stress is one of these risk factors. Therefore, this study aims to identify the most important types of stress predicting LBP. Self-efficacy was included as a potential protective factor related to both stress and pain.
Participants and Methods: This prospective observational study assessed n = 1071 subjects with low back pain over 2 years. Psychosocial stress was evaluated in a broad manner using instruments assessing perceived stress, stress experiences in work and social contexts, vital exhaustion and life-event stress. Further, self-efficacy and pain (characteristic pain intensity and disability) were assessed. Using least absolute shrinkage and selection operator (LASSO) regression, important predictors of characteristic pain intensity and pain-related disability at 1-year and 2-year follow-up were analyzed.
Results: The final sample for the statistical procedure consisted of 588 subjects (age: 39.2 (± 13.4) years; baseline pain intensity: 27.8 (± 18.4); disability: 14.3 (± 17.9)). In the 1-year follow-up, the stress types “tendency to worry”, “social isolation”, “work discontent” as well as vital exhaustion and negative life events were identified as risk factors for both pain intensity and pain-related disability. Within the 2-year follow-up, LASSO models identified the stress types “tendency to worry”, “social isolation”, “social conflicts”, and “perceived long-term stress” as potential risk factors for both pain intensity and disability. Furthermore, “self-efficacy” (“internality”, “self-concept”) and “social externality” play a role in reducing pain-related disability.
Conclusion: Stress experiences in social and work-related contexts were identified as important risk factors for LBP 1 or 2 years in the future, even in subjects with low initial pain levels. Self-efficacy turned out to be a protective factor for pain development, especially in the long-term follow-up. Results suggest a differentiation of stress types in addressing psychosocial factors in research, prevention and therapy approaches.
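The variable selection behind these results can be illustrated with a bare-bones LASSO solver: cyclic coordinate descent with soft-thresholding is the standard algorithm underlying most LASSO implementations. The following is a minimal sketch on synthetic data (all names, parameters and data are ours, not the study's actual analysis pipeline):

```python
import random

def soft_threshold(rho, lam):
    """Soft-thresholding operator, the closed-form LASSO coordinate update."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=100):
    """LASSO via cyclic coordinate descent (no intercept, toy-sized data)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Synthetic data: only the first two of five predictors influence y.
random.seed(0)
n, p = 100, 5
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[1] + random.gauss(0, 0.5) for row in X]
beta = lasso_cd(X, y, lam=5.0)
```

With a sufficiently large penalty, the coefficients of the three irrelevant predictors are shrunk to (near) zero while the two informative ones survive — the same mechanism by which LASSO selects the important stress types from a broad set of candidate variables.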
As an essential trace element, copper plays a pivotal role in physiological body functions. In fact, dysregulated copper homeostasis has been clearly linked to neurological disorders including Wilson and Alzheimer’s disease. Such neurodegenerative diseases are associated with progressive loss of neurons and thus impaired brain functions. However, the underlying mechanisms are not fully understood. Characterization of the element species and their subcellular localization is of great importance to uncover cellular mechanisms. Recent research activities focus on the question of how copper contributes to the pathological findings. Cellular bioimaging of copper is an essential key to accomplish this objective. Besides information on the spatial distribution and chemical properties of copper, other essential trace elements can be localized in parallel. Highly sensitive and high spatial resolution techniques such as LA-ICP-MS, TEM-EDS, S-XRF and NanoSIMS are required for elemental mapping on subcellular level. This review summarizes state-of-the-art techniques in the field of bioimaging. Their strengths and limitations will be discussed with particular focus on potential applications for the elucidation of copper-related diseases. Based on such investigations, further information on cellular processes and mechanisms can be derived under physiological and pathological conditions. Bioimaging studies might enable the clarification of the role of copper in the context of neurodegenerative diseases and provide an important basis to develop therapeutic strategies for reduction or even prevention of copper-related disorders and their pathological consequences.
With rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system - especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms, and their implementation, application scenarios, or verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
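The k-induction principle behind this approach can be illustrated in a drastically simplified setting: an explicit finite-state system instead of symbolic analysis of graph transformation rules. The sketch below is ours (the thesis works symbolically with graph constraints and never enumerates states):

```python
def k_inductive(states, init, step, prop, k):
    """Check whether prop is a k-inductive invariant of a finite system.

    Base case: prop holds on the first k states of every run from init.
    Inductive step: every path of k consecutive prop-satisfying states
    can only be extended to successors that satisfy prop again.
    """
    # Base case: explore runs up to depth k from the initial states
    # (explicit enumeration is fine for this toy example only).
    frontier = list(init)
    for _ in range(k):
        if any(not prop(s) for s in frontier):
            return False
        frontier = [t for s in frontier for t in step(s)]

    # Enumerate all paths of `length` consecutive prop-satisfying states.
    def paths(length):
        if length == 1:
            return [[s] for s in states if prop(s)]
        return [p + [t] for p in paths(length - 1)
                for t in step(p[-1]) if prop(t)]

    # Inductive step over every such path of length k.
    return all(prop(t) for p in paths(k) for t in step(p[-1]))

# Toy system: a counter over Z/8 that steps by +2; only even states are
# reachable from 0.  The constraint "state != 1" holds on every reachable
# state but is not 1-inductive (7 satisfies it, its successor 1 does not).
states = list(range(8))
step = lambda x: [(x + 2) % 8]
init = [0]
prop = lambda x: x != 1
```

Here the constraint only becomes inductive at k = 4, once the considered path is long enough to rule out the unreachable odd states - mirroring how longer paths of rule applications accumulate context in the symbolic setting.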
The Jewish family has been the subject of much admiration and analysis, criticism and myth-making, not just but especially in modern times. As a field of inquiry, its place is at the intersection – or in the shadow – of the great topics in Jewish Studies and its contributing disciplines. Among them are the modernization and privatization of Judaism and Jewish life; integration and distinctiveness of Jews as individuals and as a group; gender roles and education. These and related questions have been the focus of modern Jewish family research, which took shape as a discipline in the 1910s.
This issue of PaRDeS traces the origins of academic Jewish family research and takes stock of its development over a century, with its ruptures that have added to the importance of familial roots and continuities. A special section retrieves the founder of the field, Arthur Czellitzer (1871–1943), his biography and work from oblivion and places him in the context of early 20th-century science and Jewish life.
The articles on current questions of Jewish family history reflect the topic’s potential for shedding new light on key questions in Jewish Studies past and present. Their thematic range – from 13th-century Yiddish Arthurian romances via family-based business practices in 19th-century Hungary and Germany, to concepts of Jewish parenthood in Imperial Russia – illustrates the broad interest in Jewish family research as a paradigm for early modern and modern Jewish Studies.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were put into question. The authors initiated a discussion on a subject that had been considered settled for a long time, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis tries to put a focus on certain aspects of liquid wetting which so far have been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has not been of further interest. Analogously, the limits of these models have not been described and explored. Thus, the question arises: what will happen when the size of surface structures is reduced to the size of the contacting liquid molecules themselves? Are common methods still valid, or can deviations from macroscopic behavior be observed?
This thesis aims to provide a starting point regarding these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with alkyl chains of differing lengths. Surface topographies are created which rely on size differences of several Ångströms and exhibit surprising wetting behavior depending on the choice of the individual precursor system. Thus, contact angles are measured experimentally which deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models and confirm that sub-nm surface topographies affect wetting. Moreover, the experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. This behavior has already been described for scattering experiments taking into account capillary waves on the liquid surface induced by temperature, and it had been predicted earlier by theoretical calculations.
However, the investigation of model surfaces requires the provision of suitable precursor molecules that are not commercially available, and thereby opens a door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. For this, general protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase deposition process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. Thus, the influence of varying deposition conditions on the properties of the final coating is examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. Experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact degree of hydrolysis. However, at increased amounts of species that are able to form cross-linked molecules through condensation reactions, the films deteriorate in quality. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
During the last decade, intracellular actin waves have attracted much attention due to their essential role in various cellular functions, ranging from motility to cytokinesis. Experimental methods have advanced significantly and can capture the dynamics of actin waves over a large range of spatio-temporal scales. However, the corresponding coarse-grained theory mostly avoids the full complexity of this multi-scale phenomenon. In this perspective, we focus on a minimal continuum model of activator–inhibitor type and highlight the qualitative role of mass conservation, which is typically overlooked. Specifically, our interest is to connect the mathematical mechanisms of pattern formation in the presence of a large-scale mode, which arises from mass conservation, with distinct behaviors of actin waves.
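The mass-conservation ingredient can be made concrete with a minimal two-component reaction–diffusion sketch in which the reaction term only exchanges material between activator and inhibitor, so the total mass acts as the conserved large-scale mode. The kinetics and parameters below are illustrative, not the specific model of the perspective:

```python
import math

# 1D two-component reaction-diffusion system with mass conservation:
#   u_t = Du * u_xx + f(u, v),   v_t = Dv * v_xx - f(u, v)
# The reaction f only converts v into u and back, so with periodic
# boundaries the total mass integral(u + v) dx is conserved.

def f(u, v):
    return u * u * v - u            # illustrative exchange kinetics

def laplacian(w, dx):
    n = len(w)
    return [(w[(i - 1) % n] - 2 * w[i] + w[(i + 1) % n]) / dx ** 2
            for i in range(n)]

def simulate(u, v, Du, Dv, dx, dt, steps):
    """Explicit Euler integration with periodic boundary conditions."""
    for _ in range(steps):
        lu, lv = laplacian(u, dx), laplacian(v, dx)
        du = [Du * lu[i] + f(u[i], v[i]) for i in range(len(u))]
        dv = [Dv * lv[i] - f(u[i], v[i]) for i in range(len(v))]
        u = [u[i] + dt * du[i] for i in range(len(u))]
        v = [v[i] + dt * dv[i] for i in range(len(v))]
    return u, v

n, dx, dt = 64, 1.0, 0.05
u0 = [1.0 + 0.1 * math.sin(2 * math.pi * i / n) for i in range(n)]
v0 = [1.0 for _ in range(n)]
mass0 = sum(u0) + sum(v0)
u, v = simulate(u0, v0, Du=0.1, Dv=1.0, dx=dx, dt=dt, steps=2000)
mass = sum(u) + sum(v)
```

Because the periodic discrete Laplacian sums to zero and f enters u and v with opposite signs, the scheme conserves Σ(u+v) up to floating-point rounding; the conserved total then acts as a global constraint that couples distant parts of the pattern.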
The Ornstein–Uhlenbeck process is a stationary and ergodic Gaussian process that is fully determined by its covariance function and mean. We show here that the generic definitions of the ensemble- and time-averaged mean squared displacements fail to capture these properties consistently, leading to a spurious ergodicity breaking. We propose to remedy this failure by redefining the mean squared displacements such that they reflect unambiguously the statistical properties of any stochastic process. In particular, we study the effect of the initial condition in the Ornstein–Uhlenbeck process and its fractional extension. For the fractional Ornstein–Uhlenbeck process, representing typical experimental situations in crowded environments such as living biological cells, we show that the stationarity of the process delicately depends on the initial condition.
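For the ordinary (integer-order) Ornstein–Uhlenbeck process, the role of the initial condition in the ensemble-averaged MSD can be checked numerically. The following Euler–Maruyama sketch (our own illustrative parameters, not from the paper) contrasts stationary with fixed initial conditions:

```python
import random, math

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
#   dx = -theta * x dt + sqrt(2 D) dW
# and the ensemble-averaged MSD <(x(t) - x(0))^2>.  For stationary
# initial conditions the long-time MSD saturates at 2 D / theta; for the
# fixed initial condition x(0) = 0 it saturates at only D / theta.

def msd_ou(theta, D, x0_sampler, n_traj, n_steps, dt, rng):
    acc = 0.0
    for _ in range(n_traj):
        x0 = x0_sampler(rng)
        x = x0
        for _ in range(n_steps):
            x += -theta * x * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
        acc += (x - x0) ** 2
    return acc / n_traj          # MSD at t = n_steps * dt

rng = random.Random(42)
theta, D, dt, n_steps, n_traj = 1.0, 1.0, 0.01, 500, 2000   # t = 5 >> 1/theta

sigma_st = math.sqrt(D / theta)                  # stationary standard deviation
msd_stat = msd_ou(theta, D, lambda r: r.gauss(0, sigma_st),
                  n_traj, n_steps, dt, rng)
msd_zero = msd_ou(theta, D, lambda r: 0.0,
                  n_traj, n_steps, dt, rng)
```

The factor-of-two gap between the two long-time plateaus comes entirely from how x(0) is drawn, illustrating the initial-condition sensitivity that the abstract refers to.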
The decision to exercise is not only bound to rational considerations but also to automatic affective processes. The affective–reflective theory of physical inactivity and exercise (ART) proposes a theoretical framework for explaining how the automatic affective process (type‑1 process) influences exercise behavior, i.e., through the automatic activation of exercise-related associations and a subsequent affective valuation of exercise. This study aimed to empirically test this assumption of the ART with data from 69 study participants. A single-measurement study, including within-subject experimental variation, was conducted. Automatic associations with exercise were first measured with a single-target implicit association test. The somato-affective core of the participants’ automatic valuation of exercise-related pictures was then assessed via heart rate variability (HRV) analysis, and the affective valence of the valuation was tested with a facial expression (FE; smile and frown) task. Exercise behavior was assessed via self-report. Multiple regression (path) analysis revealed that automatic associations predicted HRV reactivity (β = −0.24, p = .044); the sign of the correlation between automatic associations and the smile FE score was in the expected direction but remained nonsignificant (β = −0.21, p = .078). HRV reactivity predicted self-reported exercise behavior (β = −0.28, p = .013) (the same pattern of results was achieved for the frown FE score). The HRV-related results illustrate the potential role of automatic negative affective reactions to the thought of exercise as a restraining force in exercise motivation. For a better empirical distinction between the two ART type‑1 process components, automatic associations and the affective valuation should perhaps be measured separately in the future. The results support the notion that automatic and affective processes should be regarded as essential aspects of the motivation to exercise.
Sedimentary ancient DNA has been proposed as a key methodology for reconstructing biodiversity over time. Yet, despite the concentration of Earth’s biodiversity in the tropics, this method has rarely been applied in this region. Moreover, the taphonomy of sedimentary DNA, especially in tropical environments, is poorly understood. This study elucidates challenges and opportunities of sedimentary ancient DNA approaches for reconstructing tropical biodiversity. We present shotgun-sequenced metagenomic profiles and DNA degradation patterns from multiple sediment cores from Mubwindi Swamp, located in Bwindi Impenetrable Forest (Uganda), one of the most diverse forests in Africa. We describe the taxonomic composition of the sediments covering the past 2200 years and compare the sedimentary DNA data with a comprehensive set of environmental and sedimentological parameters to unravel the conditions of DNA degradation. Consistent with the preservation of authentic ancient DNA in tropical swamp sediments, DNA concentration and mean fragment length declined exponentially with age and depth, while terminal deamination increased with age. DNA preservation patterns cannot be explained by any environmental parameter alone, but age seems to be the primary driver of DNA degradation in the swamp. Besides degradation, the presence of living microbial communities in the sediment also affects DNA quantity. Critically, 92.3% of our metagenomic data, out of a total of 81.8 million unique, merged reads, cannot be taxonomically identified due to the absence of genomic references in public databases. Of the remaining 7.7%, most of the data (93.0%) derive from Bacteria and Archaea, whereas only 0–5.8% are from Metazoa and 0–6.9% from Viridiplantae, in part due to unbalanced taxa representation in the reference data. The plant DNA record at ordinal level agrees well with local pollen data but resolves less diversity.
Our animal DNA record reveals the presence of 41 native taxa (16 orders), including Afrotheria, Carnivora, and Ruminantia, at Bwindi during the past 2200 years. Overall, we observe no decline in taxonomic richness with increasing age, suggesting that several-thousand-year-old information on past biodiversity can be retrieved from tropical sediments. However, comprehensive genomic surveys of tropical biota need prioritization for sedimentary DNA to be a viable methodology for future tropical biodiversity studies.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest. One is interested in computing the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements. This is done using Bayes’ rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they require certain strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation and treating one part of it with an EnKF and the other part with a particle filter.
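The likelihood-splitting idea presupposes a standard EnKF analysis step. The following sketch shows a minimal stochastic EnKF update with perturbed observations, the kind of step against which a particle-filter part would be combined; it is an illustration under simplifying assumptions, not the hybrid scheme developed in the thesis, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(ensemble, y_obs, H, R):
    """One stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, dim_x), H: linear observation operator,
    R: observation error covariance. A minimal sketch."""
    n, _ = ensemble.shape
    x_mean = ensemble.mean(axis=0)
    X = ensemble - x_mean                     # state anomalies
    Y = X @ H.T                               # observation-space anomalies
    P_xy = X.T @ Y / (n - 1)                  # state-observation cross covariance
    P_yy = Y.T @ Y / (n - 1) + R              # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)            # Kalman gain
    # perturb the observation for each member (stochastic EnKF variant)
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# toy example: 2-d state, only the first component is observed
ens = rng.normal(0.0, 1.0, size=(100, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
analysed = enkf_analysis(ens, np.array([2.0]), H, R)
```

After the update, the ensemble mean of the observed component is pulled towards the observation, with the pull strength set by the ratio of ensemble spread to observation error.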
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from that. One of the main areas of application for Data Assimilation techniques is predicting the development of oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of that kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem is solved that penalises the imbalance. This method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add further difficulties.
Despite the increasing number of species invasions, the factors driving invasiveness are still under debate. This is particularly the case for “invisible” invasions by aquatic microbial species. Since in many cases only a few individuals or propagules enter a new habitat, their genetic variation is low and might limit their invasion success, a phenomenon known as a genetic bottleneck. Thus, a key question is how genetic identity and diversity of invading species influence their invasion success and, subsequently, affect the resident community. We conducted invader-addition experiments using genetically different strains of the globally invasive, aquatic cyanobacterium Raphidiopsis raciborskii (formerly: Cylindrospermopsis raciborskii) to determine the role of invader identity and genetic diversity (strain richness) at four levels of herbivory. We tested the invasion success of solitary single-strain invasions against the invader genetic diversity, which was experimentally increased up to ten strains (multi-strain populations). By using amplicon sequencing we determined the strain-specific invasion success in the multi-strain treatments and compared it with the success of these strains in the single-strain treatments. Furthermore, we tested invasion success under different herbivore pressures. We showed that high grazing pressure by a generalist herbivore prevented invasion, whereas a specialist herbivore enabled coexistence of consumer and invader. We found a weak effect of diversity on invasion success only under highly competitive conditions. When invasions were successful, the magnitude of this success was strain-specific and consistent among invasions performed with single-strain or multi-strain populations. A strain-specific effect was also observed on the resident phytoplankton community composition, highlighting the strong role of invader genetic identity.
Our results point to a strong effect of the genetic identity on the invasion success under low predation pressure. The genetic diversity of the invader population, however, had little effect on invasion success in our study, in contrast to most previous findings. Instead, it is the interaction between the consumer abundance and type together with the strain identity of the invader that defined invasion success. This study underlines the importance of strain choice in invasion research and in ecological studies in general.
Orogenic peridotites represent portions of upper subcontinental mantle now incorporated in mountain belts. They often contain layers, lenses and irregular bodies of pyroxenite and eclogite. The origin of this heterogeneity and the nature of these layers are still debated, but they likely involve processes such as transient melts coming from the crust or the mantle and segregating in magma conduits, crust-mantle interaction, upwelling of the asthenosphere, and metasomatism. All these processes occur in the lithospheric mantle and are often related to the subduction of crustal rocks to mantle depths. In fact, during subduction, fluids and melts are released from the slab and can interact with the overlying mantle, making the study of deep melts in this environment crucial to understanding mantle heterogeneity and crust-mantle interaction. The aim of this thesis is precisely to better constrain how such processes take place by directly studying the melt trapped as primary inclusions in pyroxenites and eclogites. The Bohemian Massif, the crystalline core of the Variscan belt, is targeted for these purposes because it contains orogenic peridotites with layers of pyroxenite and eclogite, as well as other mafic rocks enclosed in felsic high-pressure and ultra-high-pressure crustal rocks. Within this Massif, mafic rocks from two areas have been selected: the garnet clinopyroxenite in the orogenic peridotite of the Granulitgebirge and the ultra-high-pressure eclogite in the diamond-bearing gneisses of the Erzgebirge. In both areas, primary melt inclusions were recognized in the garnet, ranging in size from 2 to 25 µm and with different degrees of crystallization, from glassy to polycrystalline.
They have been investigated with micro-Raman spectroscopy and EDS mapping; the mineral assemblage comprises kumdykolite, phlogopite, quartz, kokchetavite, a phase with a main Raman peak at 430 cm⁻¹, a phase with a main Raman peak at 412 cm⁻¹, white mica and calcite, with some variability in relative abundance depending on the case study. In the Granulitgebirge, osumilite and pyroxene are also present, whereas calcite is one of the main phases in the Erzgebirge. The presence of glass and the mineral assemblage in the nanogranitoids suggest that they were former droplets of melt trapped in the garnet while it was growing. Glassy inclusions and re-homogenized nanogranitoids show a silicate melt that is granitic, hydrous, high in alkalis and weakly peraluminous. In both case studies the melt is also enriched in Cs, Pb, Rb, U, Th, Li and B, suggesting the involvement of a crustal component, i.e. white mica (the main carrier of Cs, Pb, Rb, Li and B), and a fluid (Cs, Th and U) in the melt-producing reaction. The whole rock in both cases mainly consists of garnet and clinopyroxene with, in the Erzgebirge samples, the additional presence of quartz both in the matrix and as a polycrystalline inclusion in the garnet. The latter is interpreted as a quartz pseudomorph after coesite and occurs in the same microstructural position as the melt inclusions. Both rock types show a crustal and subduction-zone signature with garnet and clinopyroxene in equilibrium. Melt was likely present during the metamorphic peak of the rock, as it occurs in garnet.
Our data suggest that the process most likely responsible for the formation of the investigated rocks in both areas is a metasomatic reaction between a melt produced in the crust and mafic layers formerly located in the mantle wedge, in the case of the Granulitgebirge, and in the subducted continental crust itself, in the Erzgebirge. Thus, metasomatism in the first case took place in the mantle overlying the slab, whereas in the second case it took place in continental crust that already contained mafic layers before subduction. Moreover, the presence of former coesite in the same microstructural position as the melt inclusions in the Erzgebirge garnets suggests that metasomatism took place at ultra-high-pressure conditions.
Summarizing, in this thesis we provide new insights into the geodynamic evolution of the Bohemian Massif based on the study of melt inclusions in garnet in two different mafic rock types, combining the direct microstructural and geochemical investigation of the inclusions with whole-rock and mineral geochemistry. We report, for the first time, data directly extracted from natural rocks on the melt responsible for the metasomatism of several areas of the Bohemian Massif. Besides the two locations investigated here, both belonging to the Saxothuringian Zone, a signature similar to that of the investigated melt is clearly visible in pyroxenite and peridotite of the T-7 borehole (also in the Saxothuringian Zone) and in the durbachite suite located in the Moldanubian Zone.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
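For a concrete sense of the single-column statistics listed above, the following sketch computes a few of them for one column of values. The function name and the handling of nulls are illustrative choices, not part of any particular profiling system.

```python
from collections import Counter

def profile_column(values):
    """Compute basic single-column profiling statistics (illustrative sketch).
    None is treated as the null marker."""
    non_null = [v for v in values if v is not None]
    counts = Counter(non_null)
    return {
        "num_nulls": len(values) - len(non_null),
        "cardinality": len(counts),              # number of distinct non-null values
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "most_common": counts.most_common(1)[0] if counts else None,
    }

stats = profile_column([3, 1, 4, 1, None, 5, 9, None, 1])
# stats["cardinality"] == 5, stats["num_nulls"] == 2
```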
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility of creating parallel or distributed versions of these algorithms to cope with the growing size of modern datasets.
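As an illustration of how such estimators trade accuracy for memory, here is a Flajolet–Martin style sketch of probabilistic cardinality estimation, one of the classical techniques in this family; it is a rough illustration and does not reproduce any of the twelve algorithms studied in the thesis.

```python
import hashlib

def _hash(value, seed):
    # deterministic 64-bit hash per (seed, value) pair
    h = hashlib.blake2b(f"{seed}:{value}".encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big")

def fm_estimate(values, n_sketches=64):
    """Flajolet-Martin style cardinality estimate: per hash function, track the
    maximum number of trailing zero bits seen, then average and invert.
    A rough sketch (not HyperLogLog); memory is n_sketches small counters."""
    max_r = [0] * n_sketches
    for v in values:
        for s in range(n_sketches):
            x = _hash(v, s)
            # count trailing zero bits of the hash value
            r = (x & -x).bit_length() - 1 if x else 64
            if r > max_r[s]:
                max_r[s] = r
    mean_r = sum(max_r) / n_sketches
    return (2 ** mean_r) / 0.77351   # 0.77351 is the standard FM correction factor
```

Because only the maxima are kept, duplicates do not change the estimate, which is exactly why such sketches scale to datasets far larger than memory.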
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally demonstrate the phenomenon of ghost and fake FDs caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
The present paper is concerned with the phenomenon of reporting on the speakers’ thinking when both the reporting and the reported clauses originate in one and the same speaker, i.e. the performative uses of the verbs sp. creer and pt. achar (‘think’). The data are retrieved from the CdE-NOW and CdP-NOW. Adopting both a quantitative and a qualitative perspective, I concentrate on reporting on thinking with and without the overt expression of the subject pronouns sp. yo and pt. eu. In doing so, the constructions (yo) creo (que) and (eu) acho (que) as well as parenthetic and right-peripheral creo yo and acho eu are studied. According to the corpus data and compared to other possible constructions with creo and acho, creo que and acho que represent the most frequent constructions if searching for the ‘node’ creo or acho, that is, if the non-use of the subject pronoun exceeds its explicit expression.
This study presents the evaluation of a computer-based learning program for children with developmental dyscalculia and focuses on factors affecting individual responsiveness. The adaptive training program Calcularis 2.0 has been developed according to current neuro-cognitive theory of numerical cognition. It aims to automatize number representations, supports the formation of and access to the mental number line, and trains arithmetic operations as well as arithmetic fact knowledge in expanding number ranges. Sixty-seven children with developmental dyscalculia from second to fifth grade (mean age 8.96 years) were randomly assigned to one of two groups (Calcularis group, waiting control group). Training duration comprised a minimum of 42 training sessions of 20 minutes each within a maximum period of 13 weeks. Compared to the waiting control group, children in the Calcularis group demonstrated a higher benefit in arithmetic operations and number line estimation. These improvements were shown to be stable after a 3-month post-training interval. In addition, this study examines which predictors accounted for training improvements. Results indicate that this self-directed training was especially beneficial for children with low math anxiety scores and without an additional reading and/or spelling disorder. In conclusion, Calcularis 2.0 supports children with developmental dyscalculia in improving their arithmetical abilities and their mental number line representation. However, it is relevant to further adapt the setting to the individual circumstances.
Although a relatively large number of studies on acquired language impairments have tested the case of derivational morphology, none of these have specifically investigated whether there are differences in how prefixed and suffixed derived words are impaired. Based on linguistic and psycholinguistic considerations on prefixed and suffixed derived words, differences in how these two types of derivations are processed, and consequently impaired, are predicted. In the present study, we investigated the errors produced in reading aloud simple, prefixed, and suffixed words by three German individuals with agrammatic aphasia (NN, LG, SA). We found that, while NN and LG produced similar numbers of errors with prefixed and suffixed words, SA showed a selective impairment for prefixed words. Furthermore, NN and SA produced more errors specifically involving the affix with prefixed words than with suffixed words. We discuss our findings in terms of relative position of stem and affix in prefixed and suffixed words, as well as in terms of specific properties of prefixes and suffixes.
Objective: To determine immediate performance measures for short-term, multicomponent cardiac rehabilitation (CR) in clinical routine in patients of working age, taking into account cardiovascular risk factors, physical performance, social medicine, and subjective health parameters, and to explore the underlying dimensionality.
Design: Prospective observational multicenter register study in 12 rehabilitation centers throughout Germany.
Setting: Comprehensive 3-week CR.
Background
The aim of the study was to identify predictors of the allocation of patients after transcatheter aortic valve implantation (TAVI) to geriatric (GR) or cardiac rehabilitation (CR) and to describe this new patient group based on a differentiated characterization.
Methods
From 10/2013 to 07/2015, 344 patients with an elective TAVI were consecutively enrolled in this prospective multicentric cohort study. Before intervention, sociodemographic parameters, echocardiographic data, comorbidities, 6-min walk distance (6MWD), quality of life and frailty (score indexing activities of daily living [ADL], cognition, nutrition and mobility) were documented. Out of these, predictors for assignment to CR or GR after TAVI were identified using a multivariable regression model.
Results
After TAVI, 249 patients (80.7 ± 5.1 years, 59.0% female) underwent CR (n = 198) or GR (n = 51). GR patients were older, less physically active, and more often had an assigned care level and peripheral artery disease, as well as a lower left ventricular ejection fraction. The groups also differed in 6MWD. Furthermore, individual components of frailty revealed prognostic impact: higher values in instrumental ADL reduced the probability of referral to GR (OR: 0.49, p < 0.001), while impaired mobility was positively associated with referral to GR (OR: 3.97, p = 0.046). Clinical parameters such as stroke (OR for GR: 0.19, p = 0.038) and the EuroSCORE (OR for GR: 1.04, p = 0.026) were also predictive.
Conclusion
Patients of advanced age referred to CR or GR after TAVI differ in several parameters and appear to be distinct patient groups with specific needs, e.g. regarding activities of daily living and mobility. Thus, our data support the suitability of both the CR and GR settings.
Climate change heavily impacts smallholder farming worldwide. Cross-scale vulnerability assessment has a high potential to identify nested measures for reducing the vulnerability of smallholder farmers. Despite their high practical value, there are currently only limited examples of cross-scale assessments. The presented study aims to assess the vulnerability of smallholder farmers in the Northeast of Brazil across three scales: regional, farm and field scale. In doing so, it builds on existing vulnerability indices and compares results between indices at the same scale and across scales. In total, six independent indices are tested, two at each scale. The calculated indices include social, economic and ecological indicators, based on municipal statistics, meteorological data, farm interviews and soil analyses. Subsequently, indices and overlapping indicators are normalized for intra- and cross-scale comparison. The results show considerable differences between indices across and within scales. They point to different activities for reducing the vulnerability of smallholder farmers. Major shortcomings arise from the conceptual differences between the indices. We therefore recommend the development of hierarchical indices, which are adapted to local conditions and contain more overlapping indicators for a better understanding of the nested vulnerabilities of smallholder farmers.
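The cross-scale comparison described above requires bringing indicators measured in different units onto a common scale. A simple min-max normalisation, sketched below, is one common choice for this; it is illustrative only, as the study's exact normalisation scheme is not detailed in the abstract.

```python
def min_max_normalize(values):
    """Min-max normalisation to [0, 1], a common way to make indicators
    measured in different units comparable across scales (illustrative)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # a constant indicator carries no signal
    return [(v - lo) / (hi - lo) for v in values]

# e.g. normalising a farm-scale indicator before cross-scale comparison
min_max_normalize([2.0, 5.0, 8.0])   # -> [0.0, 0.5, 1.0]
```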
Lately, the integration of upconverting nanoparticles (UCNP) into industrial, biomedical and scientific applications has accelerated, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the field of medicine and bioimaging due to advantages such as deeper tissue penetration, reduced optical background, the possibility of multicolor imaging, and lower toxicity compared to many known luminophores. However, some questions remain unanswered, regarding not only the fundamental photophysical processes but also the interaction of the UCNP with other luminescent reporters frequently used for bioimaging and with biological media. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), and dynamic light scattering (DLS), and discussed in relation to spectroscopic results obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this wide temperature range allowed for a deeper understanding of the photophysical processes and their dependence on structural changes of the UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
In the final part, two surface modification strategies were applied to make the UCNP water-soluble, which simultaneously allowed them to be linked via a non-toxic, copper-free click reaction to liposomes serving as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by the lesser-known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary and one dye-labelled secondary antibody were performed on MDCK-II cells.
Remembering German-Australian Colonial Entanglements emphatically promotes a critical and nuanced understanding of the complex entanglement of German colonial actors and activities within Australian colonial institutions and different imperial ideologies. Case studies ranging from the German reception of James Cook’s voyages through to the legacies of 19th- and 20th-century settler colonialism foreground the highly ambiguous roles played by explorers, missionaries, intellectuals and other individuals, as well as by objects and things that travelled between worlds – ancestral human remains, rare animal skins, songs, and even military tanks. The chapters examine the complex relationship between science, religion, art and exploitation, displacement and annihilation.
Social comparison processes and the social position within a school class already play a major role in performance evaluation as early as in elementary school. The influence of contrast and assimilation effects on self-evaluation of performance as well as task interest has been widely researched in observational studies under the labels big-fish-little-pond and basking-in-reflected-glory effect. This study examined the influence of similar contrast and assimilation effects in an experimental paradigm. Fifth and sixth grade students (n = 230) completed a computer-based learning task during which they received social comparative feedback based on 2 × 2 experimentally manipulated feedback conditions: social position (high vs. low) and peer performance (high vs. low). Results show a more positive development of task interest and self-evaluation of performance in both the high social position and the high peer performance condition. When applied to the school setting, results of this study suggest that students who already perform well in comparison to their peer group are also the ones who profit most from social comparative feedback, given that they are the ones who usually receive the corresponding positive performance feedback.
This paper evaluates the construction of the rights of human rights defenders within international law and its shortcomings in protecting women. Human rights defenders have historically been defined on the basis of their actions as defenders. However, as Marxist-feminist scholar Silvia Federici contends, women are inherently politicised and, moreover, face obstacles to political action which are invisible to and untouchable by the law. Labour rights set an example of handling such a disadvantaged political position by placing vital importance on workers’ right to association and collective action. The paper closes with the suggestion that transposing this construction of rights to women would better protect women as human rights defenders while emphasising their capacity for self-determination in their political actions.
The Research Data Policy of the University of Potsdam was ratified by the Senate on September 25, 2019, and published in the Amtliche Bekanntmachungen (“Official Notices”) on September 30, 2019. It applies to all researchers and research support staff.
The Recommendations for the Handling of Research Data at the University of Potsdam specify and complement the Research Data Policy of the University of Potsdam. They are aimed at all researchers and research support staff and were adopted by the Senate’s Commission for Research and Junior Academics (FNK) on October 9, 2019.
This record provides a non-official translation of both documents from the German original.
Over the last decades, the Arctic regions of the Earth have warmed at a rate 2–3 times faster than the global average, a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities of the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" of the amplified recent warming during winter. In this highly dynamical region, both oceanic and atmospheric large-scale transports of heat and moisture interact with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, pan-Svalbard gradients in the surface air temperature (SAT) and sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, which cover the 1980–2016 period. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter of the order of 2–3 K per decade drives excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of the SIE variability in fjords on western Svalbard, while meridional winds and associated ice drift may regionally explain 20–50% of the SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic backward air trajectories for Ny-Ålesund reveals a shift in the source regions of lower-tropospheric air over time for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin, and less frequently of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modulating clouds and longwave downward radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the above-described changes around Svalbard for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined in the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents/Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. In winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°–80°N, representing a unique feature within the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
Spiked gold nanotriangles
(2020)
We show the formation of metallic spikes on the surface of gold nanotriangles (AuNTs) by using the same reduction process which has been used for the synthesis of gold nanostars. We confirm that silver nitrate operates as a shape-directing agent in combination with ascorbic acid as the reducing agent and investigate the mechanism by dissecting the contribution of each component, i.e., anionic surfactant dioctyl sodium sulfosuccinate (AOT), ascorbic acid (AA), and AgNO3. Molecular dynamics (MD) simulations show that AA attaches to the AOT bilayer of nanotriangles, and covers the surface of gold clusters, which is of special relevance for the spike formation process at the AuNT surface. The surface modification goes hand in hand with a change of the optical properties. The increased thickness of the triangles and a sizeable fraction of silver atoms covering the spikes lead to a blue-shift of the intense near infrared absorption of the AuNTs. The sponge-like spiky surface increases both the surface enhanced Raman scattering (SERS) cross section of the particles and the photo-catalytic activity in comparison with the unmodified triangles, which is exemplified by the plasmon-driven dimerization of 4-nitrothiophenol (4-NTP) to 4,4'-dimercaptoazobenzene (DMAB).
Magnetite-containing aerogels were synthesized by freeze-drying olive oil/silicone oil-based Janus emulsion gels containing gelatin and sodium carboxymethylcellulose (NaCMC). The magnetite nanoparticles dispersed in olive oil are processed into the gel and remain in the macroporous aerogel after removing the oil components. The coexistence of macropores from the Janus droplets and mesopores from freeze-drying of the hydrogels, in combination with the magnetic properties, offers a special hierarchical pore structure, which is of relevance for smart supercapacitors, biosensors, and spilled-oil sorption and separation. The morphology of the final structure was investigated as a function of the initial composition. More hydrophobic aerogels with magnetic responsiveness were synthesized by bisacrylamide-crosslinking of the hydrogel. The crosslinked aerogels can be successfully used in magnetically responsive clean-up experiments of the cationic dye methylene blue.
Cleft exhaustivity
(2020)
In this dissertation, a series of experimental studies is presented which demonstrates that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity, in which -- following Pollard & Yasavul 2016 -- the exhaustive inference is derived from an interaction with another layer of meaning, namely, the existence presupposition encoded in clefts.
In this paper, we present the convergence rate analysis of the modified Landweber method under a logarithmic source condition for nonlinear ill-posed problems. The regularization parameter is chosen according to the discrepancy principle. Reconstructions of the shape of an unknown domain for an inverse potential problem, obtained with the modified Landweber method, are exhibited.
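The interplay of Landweber iteration and discrepancy-principle stopping described above can be sketched on a linear toy problem. This is a minimal illustration under stated assumptions (a Hilbert-matrix forward operator and synthetic noise, both chosen here for illustration), not the paper's nonlinear setting, where `A.T` would be replaced by the adjoint of the Fréchet derivative F'(x_k)*:

```python
import numpy as np

# Toy *linear* ill-posed problem A x = y (the paper treats nonlinear F(x) = y).
rng = np.random.default_rng(0)
n = 20
A = np.array([[1.0 / (i + j + 1.0) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.ones(n)
y = A @ x_true
noise = 1e-2 * rng.standard_normal(n)
y_delta = y + noise                       # noisy data
delta = np.linalg.norm(noise)             # noise level ||y_delta - y||

# Landweber iteration x_{k+1} = x_k + omega * A^T (y_delta - A x_k),
# stopped by the discrepancy principle ||y_delta - A x_k|| <= tau * delta.
omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1 / ||A||^2
tau = 2.0                                 # tau > 1, value assumed
x = np.zeros(n)
for k in range(50_000):
    residual = y_delta - A @ x
    if np.linalg.norm(residual) <= tau * delta:
        break                             # early stopping acts as regularization
    x = x + omega * (A.T @ residual)
```

Stopping early keeps the data misfit at the noise level instead of fitting the noise, which is what makes the iteration count act as the regularization parameter.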
Differentially-charged liposomes interact with alphaherpesviruses and interfere with virus entry
(2020)
Exposure of phosphatidylserine (PS) in the outer leaflet of the plasma membrane is induced by infection with several members of the Alphaherpesvirinae subfamily. There is evidence that PS is used by the equine herpesvirus type 1 (EHV-1) during entry, but the exact role of PS and other phospholipids in the entry process remains unknown. Here, we investigated the interaction of differently charged phospholipids with virus particles and determined their influence on infection. Our data show that liposomes containing negatively charged PS or positively charged DOTAP (N-[1-(2,3-Dioleoyloxy)propyl]-N,N,N-trimethylammonium) inhibited EHV-1 infection, while neutral phosphatidylcholine (PC) had no effect. Inhibition of infection with PS was transient, decreased with time, and was dose dependent. Our findings indicate that both cationic and anionic phospholipids can interact with the virus and reduce infectivity, while, presumably, acting through different mechanisms. Charged phospholipids were found to have antiviral effects and may be used to inhibit EHV-1 infection.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2000 geomagnetic models were calculated. All models are based on the same data set but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to the agreement of their dipole coefficients with gufm1, more realistic uncertainty estimates were derived than those provided by the authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge differences in data count, accuracy and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by treating each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model arhimag1k is obtained by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reversed flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the location of today's South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment remains nearly constant between 1600 and 1840 AD.
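The ensemble idea behind the 501 models can be sketched in a few lines. This is a hypothetical one-dimensional stand-in (a polynomial fit replaces the spherical-harmonic inversion, and all numbers are assumed), meant only to show the perturb-fit-average scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.linspace(0.0, 1.0, 50)                 # observation epochs (normalized)
design = np.vander(t, 4)                      # cubic polynomial as a stand-in basis
coeffs_true = np.array([0.5, -1.0, 0.3, 2.0])
data = design @ coeffs_true
sigma = 0.1 * np.ones_like(data)              # reported per-datum uncertainties

# Each datum is a Gaussian random variable: draw a realization, fit a model,
# and repeat; the final model is the mean of the ensemble coefficients.
ensemble = []
for _ in range(501):                          # 501 realizations, as in the thesis
    perturbed = data + sigma * rng.standard_normal(data.shape)
    c, *_ = np.linalg.lstsq(design, perturbed, rcond=None)
    ensemble.append(c)

coeffs_mean = np.mean(ensemble, axis=0)       # the "mean model"
coeffs_std = np.std(ensemble, axis=0)         # spread = coefficient uncertainty
```

The coefficient spread across the ensemble is what turns per-datum uncertainties into per-coefficient uncertainties of the final model.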
In the second part of the thesis, four new paleointensity values from four different lava flows on the island of Fogo (Cape Verde) are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
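The recursive scheme and the smoothing artifact it produces can be illustrated with a toy predictor. The function below is a hypothetical stand-in for RainNet (a simple box blur, not the actual network); only the recursion, where each prediction becomes the next input, reflects the approach described in the abstract:

```python
import numpy as np

def predict_5min(frame):
    """Hypothetical stand-in for RainNet: a 3x3 box blur of the latest frame.
    The real model maps recent radar frames to the next 5 min frame; what
    matters here is only the recursive application, not the predictor."""
    padded = np.pad(frame, 1, mode="edge")
    return sum(padded[i:i + 32, j:j + 32] for i in range(3) for j in range(3)) / 9.0

frame = np.random.default_rng(0).random((32, 32))   # synthetic "radar" field

nowcasts = []
for step in range(12):             # 12 x 5 min = 60 min lead time
    frame = predict_5min(frame)    # recursion: prediction becomes the input
    nowcasts.append(frame)
```

Because each pass smooths the field again, the variance of the nowcast shrinks with lead time, mirroring the loss of small-scale spectral power that the abstract attributes to the recursive application rather than to the model itself.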
Electrochemical methods offer the simple characterization of the synthesis of molecularly imprinted polymers (MIPs) and the readouts of target binding. The binding of electroinactive analytes can be detected indirectly by their modulating effect on the diffusional permeability of a redox marker through thin MIP films. However, this process generates an overall signal, which may include nonspecific interactions with the nonimprinted surface and adsorption at the electrode surface in addition to (specific) binding to the cavities. Redox-active low-molecular-weight targets and metalloproteins enable a more specific direct quantification of their binding to MIPs by measuring the faradaic current. The in situ characterization of enzymes, MIP-based mimics of redox enzymes or enzyme-labeled targets, is based on the indication of an electroactive product. This approach allows the determination of both the activity of the bio(mimetic) catalyst and of the substrate concentration.
The other-race effect (ORE) can be described as difficulties in discriminating between faces of ethnicities other than one’s own, and can already be observed at approximately 9 months of age. Recent studies also showed that infants visually explore same- and other-race faces differently. However, it is still unclear whether infants’ looking behavior for same- and other-race faces is related to their face discrimination abilities. To investigate this question we conducted a habituation–dishabituation experiment to examine Caucasian 9-month-old infants’ gaze behavior, and their discrimination of same- and other-race faces, using eye-tracking measurements. We found that infants looked longer at the eyes of same-race faces over the course of habituation, as compared to other-race faces. After habituation, infants demonstrated a clear other-race effect by successfully discriminating between same-race faces, but not other-race faces. Importantly, the infants’ ability to discriminate between same-race faces significantly correlated with their fixation time towards the eyes of same-race faces during habituation. Thus, our findings suggest that for infants old enough to begin exhibiting the ORE, gaze behavior during habituation is related to their ability to differentiate among same-race faces, compared to other-race faces.
Literacy acquisition is one of the primary goals of school education, and usually it takes place in the national language of the respective country. At the same time, schools accommodate pupils with different home languages who might or might not be fluent in the national language and who start from other linguistic backgrounds in their acquisition of literacy. While it is safe to say that schools with a monolingual policy are not prepared to deal with the factual multilingualism in their classrooms in a systematic way, bilingual pupils have to deal with it nonetheless.
The interdisciplinary and comparative research project “Literacy Acquisition in Schools in the Context of Migration and Multilingualism” (LAS) investigated the practical processes of literacy acquisition in two countries, Germany and Turkey, where the monolingual orientation of schools is as much a reality as are the multilingual backgrounds of many of their pupils. The basic assumption was that pupils cope with the ways they are engaged by the school – both socially and academically – based on their cultural and linguistic repertoires acquired biographically, providing them with more or less productive options regarding the acquisition of literacy skills. By comparing the literacy development of bilingual children with that of their monolingual classmates throughout one school year in the first and the seventh grade in Germany and Turkey, respectively, we found that the restricting potential of multilingualism lies with the schools rather than with the pupils. While the individual bilingual pupil almost naturally uses his/her home language as a resource for literacy acquisition in the school language, schools still tend to regard the multilingual backgrounds of their pupils as irrelevant or even as an impediment to adequate schooling. We argue that by ignoring or even suppressing the specific linguistic potentials of bilingualism, bilingual pupils are put at a structural disadvantage.
This research report is the slightly revised but full version of the final study project report from 2011, which was until now not available as a quotable publication. While several years have passed since the primary research was finalized, the addressed issues have lost none of their relevance. The report is accompanied by numerous publications in the frame of the LAS project, as well as by a web page (https://www.uni-potsdam.de/de/daf/projekte/las), which also contains the presentations from the final LAS conference, including valuable discussions of the report by renowned experts in the field.
Botulinum neurotoxins (BoNTs) are potent neurotoxins produced by bacteria, which inhibit neurotransmitter release, specifically in their physiological target known as motor neurons (MNs). For the potency assessment of BoNTs produced for treatment in traditional and aesthetic medicine, the mouse lethality assay is still used by the majority of manufacturers, which is ethically questionable in terms of the 3Rs principle. In this study, MNs were differentiated from human induced pluripotent stem cells based on three published protocols. The resulting cell populations were analyzed for their MN yield and their suitability for the potency assessment of BoNTs. MNs produce specific gangliosides and synaptic proteins, which are bound by BoNTs in order to be taken up by receptor-mediated endocytosis, which is followed by cleavage of specific soluble N-ethylmaleimide-sensitive-factor attachment receptor (SNARE) proteins required for neurotransmitter release. The presence of receptors and substrates for all BoNT serotypes was demonstrated in MNs generated in vitro. In particular, the MN differentiation protocol based on Du et al. yielded high numbers of MNs in a short amount of time with high expression of BoNT receptors and targets. The resulting cells are more sensitive to BoNT/A1 than the commonly used neuroblastoma cell line SiMa. MNs are, therefore, an ideal tool for being combined with already established detection methods.
Large emissions
(2020)
Pinned Gibbs processes
(2020)
All you can feed
(2020)
The laboratory mouse is the most commonly used mammalian research model in biomedical research. Usually these animals are maintained in germ-free, gnotobiotic, or specific-pathogen-free facilities. In these facilities, skilled staff take care of the animals, and scientists usually do not pay much attention to the formulation and quality of the diets the animals receive during normal breeding and keeping. However, mice have specific nutritional requirements that must be met to guarantee their potential to grow, reproduce and respond to pathogens or diverse environmental stress situations evoked by handling and experimental interventions. Nowadays, mouse diets for research purposes are commercially manufactured in an industrial process, in which the safety of food products is addressed through the analysis and control of all biological and chemical materials used for the different diet formulations. Similar to human food, mouse diets must be prepared under good sanitary conditions and truthfully labeled to provide information on all ingredients. This is mandatory to guarantee reproducibility of animal studies. In this review, we summarize some information on mouse research diets and general aspects of mouse nutrition, including nutrient requirements of mice, leading manufacturers of diets, origin of nutrient compounds, and processing of feedstuffs for mice, including dietary coloring, autoclaving and irradiation. Furthermore, we provide some critical views on the potential pitfalls that might result from faulty comparisons of grain-based diets with purified diets in the production of research data, resulting from confounding nutritional factors.
Interplay of Dietary Fatty Acids and Cholesterol Impacts Brain Mitochondria and Insulin Action
(2020)
Overconsumption of high-fat and cholesterol-containing diets is detrimental for metabolism and mitochondrial function, causes inflammatory responses and impairs insulin action in peripheral tissues. Dietary fatty acids can enter the brain to convey the nutritional status, but also to influence neuronal homeostasis. Yet, it is unclear whether cholesterol-containing high-fat diets (HFDs) with different combinations of fatty acids exert metabolic stress and impact mitochondrial function in the brain. To investigate whether cholesterol in combination with different fatty acids impacts neuronal metabolism and mitochondrial function, C57BL/6J mice received different cholesterol-containing diets with either high concentrations of long-chain saturated fatty acids or soybean oil-derived poly-unsaturated fatty acids. In addition, CLU183 neurons were stimulated with combinations of palmitate, linoleic acid and cholesterol to assess their effects on metabolic stress, mitochondrial function and insulin action. The dietary interventions resulted in a molecular signature of metabolic stress in the hypothalamus with decreased expression of occludin and subunits of mitochondrial electron chain complexes, elevated protein carbonylation, as well as c-Jun N-terminal kinase (JNK) activation. Palmitate caused mitochondrial dysfunction, oxidative stress, insulin and insulin-like growth factor-1 (IGF-1) resistance, while cholesterol and linoleic acid did not cause functional alterations. Finally, we identified the insulin receptor as a novel negative regulator of metabolic-stress-induced JNK activation.
The goal of this thesis was to thoroughly investigate the behavior of multimode fibres to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, modal distribution of light or non-perfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict the fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from RSoft. It is used to calculate all the propagation modes and effective refractive indices of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translation, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
To date, either fully coherent or fully incoherent propagation can be simulated. Partial coherence was not addressed in this work. Another limitation is that the simulations work exclusively for the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized. The first fibre had a circular cross-section, and the second one had an octagonal cross-section. The utilized test-bench was originally developed for the prototype fibres of the 4MOST fibre feed characterization. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. Through the appropriate configuration, the acquisition of the near- and/or far-field was feasible.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from the modal interference could be analyzed by using the power spectral density analysis. Measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
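The contrast between coherent and incoherent propagation described above can be illustrated with a toy mode superposition. This is a minimal sketch under assumed simplifications (sine functions as stand-ins for the true fibre modes, a 1-D transverse coordinate), not the thesis' Eigenmode Expansion code:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 512)              # normalized transverse coordinate
modes = [np.sin((m + 1) * np.pi * x) for m in range(30)]  # toy mode shapes

# Coherent propagation: complex fields add, random modal phases interfere
# and produce a high-contrast speckle pattern in the near field.
field = sum(mode * np.exp(1j * rng.uniform(0, 2 * np.pi)) for mode in modes)
intensity_coh = np.abs(field) ** 2

# Incoherent propagation: intensities add, interference terms average out
# and the profile is smooth.
intensity_inc = sum(mode ** 2 for mode in modes)

# Speckle contrast (std/mean) quantifies the modal noise in each case.
contrast_coh = intensity_coh.std() / intensity_coh.mean()
contrast_inc = intensity_inc.std() / intensity_inc.mean()

# The spatial frequencies of the modal interference appear in the power
# spectral density of the near-field intensity.
psd = np.abs(np.fft.rfft(intensity_coh - intensity_coh.mean())) ** 2
```

The large speckle contrast of the coherent sum, and its collapse in the incoherent sum, is the same effect the measurements with induced modal scrambling probe.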
By using a Voigt analysis of the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into an angular divergence. The simulations yield the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived which can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Given the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was possible through the BMBF Grant 05A14BA1 which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).
Using quantile regression methods, this paper analyses the gender wage gap across the wage distribution and over time (1990–2014), while controlling for changing sample selection into full-time employment. Our findings show that the selection-corrected gender wage gap is much larger than the one observed in the data, which is mainly due to the large positive selection of women into full-time employment. However, we show that selection-corrected wages of male and female workers at the lower half of the distribution have moderately converged over time. The reason for this development has been changes in the composition of the male full-time workforce over time, which, despite the rather constant male full-time employment rate, have given rise to a small but rising selection bias in observed male wages. In the upper half of the wage distribution, however, neither the observed nor the selection-corrected gender wage gap has narrowed over time.
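The distributional perspective taken above can be sketched descriptively on synthetic data. This toy (all numbers assumed, no covariates, no selection correction) only shows what it means to evaluate the gap at several quantiles of the wage distribution rather than at the mean, which is the core of a quantile-regression analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical log-wage distributions; location/scale values are assumptions.
log_wage_men = 2.8 + 0.50 * rng.standard_normal(10_000)
log_wage_women = 2.6 + 0.45 * rng.standard_normal(10_000)

# The log-wage gap at several points of the distribution.
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
gap = {q: np.quantile(log_wage_men, q) - np.quantile(log_wage_women, q)
       for q in quantiles}
```

With these assumed parameters the gap widens towards the top of the distribution, the kind of heterogeneity a mean-based comparison would hide.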
It is well known that the inverted Collatz sequence can be represented as a graph or a tree. Similarly, it is acknowledged that in order to prove the Collatz conjecture, one must demonstrate that this tree covers all (odd) natural numbers. A structured reachability analysis is hitherto not available. This paper investigates the problem from a graph theory perspective. We define a tree that consists of nodes labeled with Collatz sequence numbers. This tree will be transformed into a sub-tree that only contains odd-labeled nodes. The analysis of this tree will provide new insights into the structure of Collatz sequences. The findings are of special interest with regard to possible cycles within a sequence. Next, we describe the conditions which must be fulfilled by a cycle. Finally, we demonstrate how these conditions could be used to prove that the only possible cycle within a Collatz sequence is the trivial cycle, starting with the number 1, as conjectured by Lothar Collatz.
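The odd-labeled sub-tree can be sketched with the standard inverse Collatz map (this is a generic construction, not necessarily the paper's exact definition): the children of an odd node n are the odd numbers m with (3m + 1) / 2^k = n for some k ≥ 1, i.e. m = (n·2^k − 1) / 3 whenever that is an odd integer.

```python
def odd_children(n, limit, max_k=20):
    """Odd predecessors of n under the Collatz map, capped at `limit`.
    Nodes divisible by 3 have no children, since n * 2^k - 1 is then
    never divisible by 3."""
    children = []
    for k in range(1, max_k + 1):
        num = n * 2 ** k - 1
        if num % 3 == 0:
            m = num // 3
            if m % 2 == 1 and m != n and m <= limit:  # m != n skips the 1 -> 1 loop
                children.append(m)
    return children

# Build the odd-labeled tree rooted at 1 up to a label limit.
limit = 100
tree = {1: odd_children(1, limit)}
frontier = list(tree[1])
while frontier:
    n = frontier.pop()
    if n not in tree:
        tree[n] = odd_children(n, limit)
        frontier.extend(tree[n])
```

Every node in the resulting dictionary reaches its parent by one application of "3n + 1, then divide out all factors of 2", so the tree is exactly the inverted odd Collatz dynamics restricted to labels below the limit.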
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
This dissertation aims to deliver a transcendental interpretation of Immanuel Kant's Kritik der Urteilskraft, considering both its coherence with other critical works and the internal coherence of the work itself. This interpretation is called transcendental insofar as special emphasis is placed on the newly introduced cognitive power, namely the reflective power of judgement, guided by the a priori principle of purposiveness. In this way the seeming manifold of themes, varying from judgements of taste through culture to teleological judgements about natural purposes, are discussed exclusively in regard of their dependence on this faculty and its transcendental principle. In contrast, in contemporary scholarship the book is often treated as a fragmented work consisting of different independent parts, while my focus lies on the continuity constituted primarily by the activity of the power of judgement.
Going back to certain central yet silently presupposed concepts adopted from previous critical works, the main contribution of this study is to integrate the KU into the overarching critical project. More specifically, I argue that the need for this presupposition by the reflective power of judgement follows from the peculiar character of our sense-dependent discursive mind. Because we are sense-dependent discursive minds, we do not and cannot have immediate insight into all of nature's features. The particular constitution of our mind rather demands conceptually informed representations which refer to objects only mediately.
Having said that, the principle of purposiveness, namely the presupposition that nature is organized in concert with the particular constitution of our mind, is a necessary condition for the possibility of reflection on nature's empirical features. Reflection refers on my account to a process of selecting features in order to allow a classification, including reflection on the method, means and selection criteria. Rather than directly contributing to cognition, like the categories, reflective judgements thus express our ignorance when it comes to the motivation behind nature's design, and this is most forcefully expressed by judgements of taste and teleological judgements about organized matter. In this way, reflection, regardless of whether it is manifested in concept acquisition, scientific systematization, judgements of taste or judgements about organized matter, relies on a principle of the power of judgement which is revealed and justified in this transcendental inquiry.
The development of bioinspired self-assembling materials, such as hydrogels, with promising applications in cell culture, tissue engineering and drug delivery is a current focus in material science. Biogenic or bioinspired proteins and peptides are frequently used as versatile building blocks for extracellular matrix (ECM) mimicking hydrogels. However, precisely controlling and reversibly tuning the properties of these building blocks and the resulting hydrogels remains challenging. Precise control over the viscoelastic properties and self-healing abilities of hydrogels are key factors for developing intelligent materials to investigate cell matrix interactions. Thus, there is a need to develop building blocks that are self-healing, tunable and self-reporting. This thesis aims at the development of α-helical peptide building blocks, called coiled coils (CCs), which integrate these desired properties. Self-healing is a direct result of the fast self-assembly of these building blocks when used as material cross-links. Tunability is realized by means of reversible histidine (His)-metal coordination bonds. Lastly, by implementing a fluorescent readout that indicates the CC assembly state, self-reporting hydrogels are obtained.
Coiled coils are abundant protein folding motifs in Nature, which often have mechanical function, such as in myosin or fibrin. Coiled coils are superhelices made up of two or more α-helices wound around each other. The assembly of CCs is based on their repetitive sequence of seven amino acids, so-called heptads (abcdefg). Hydrophobic amino acids in the a and d position of each heptad form the core of the CC, while charged amino acids in the e and g position form ionic interactions. The solvent-exposed positions b, c and f are excellent targets for modifications since they are more variable. His-metal coordination bonds are strong, yet reversible interactions formed between the amino acid histidine and transition metal ions (e.g. Ni2+, Cu2+ or Zn2+). His-metal coordination bonds essentially contribute to the mechanical stability of various high-performance proteinaceous materials, such as spider fangs, Nereis worm jaws and mussel byssal threads. Therefore, I bioengineered reversible His-metal coordination sites into a well-characterized heterodimeric CC that served as tunable material cross-link. Specifically, I took two distinct approaches facilitating either intramolecular (Chapter 4.2) and/or intermolecular (Chapter 4.3) His-metal coordination.
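The heptad register described above can be illustrated with a short script that maps a sequence onto its abcdefg positions and reports the core (a/d) and solvent-exposed (b/c/f) residues; the example sequence and the assumption that the register starts at position a are purely illustrative, not taken from the thesis:

```python
# Map a coiled-coil sequence onto its heptad register (abcdefg) and report
# the hydrophobic core (a/d) and solvent-exposed (b/c/f) positions.
# Sequence and register start are hypothetical, for illustration only.

HEPTAD = "abcdefg"

def heptad_register(sequence, start="a"):
    """Return a list of (residue, heptad position) pairs."""
    offset = HEPTAD.index(start)
    return [(res, HEPTAD[(offset + i) % 7]) for i, res in enumerate(sequence)]

def positions(sequence, wanted, start="a"):
    """Residues falling on the given heptad positions, with 1-based indices."""
    return [(i + 1, res)
            for i, (res, pos) in enumerate(heptad_register(sequence, start))
            if pos in wanted]

if __name__ == "__main__":
    seq = "LKAIEQELKAIEQELKAIEQE"  # idealized, hypothetical heptad repeats
    print("core (a/d):", positions(seq, {"a", "d"}))
    print("exposed (b/c/f):", positions(seq, {"b", "c", "f"}))  # candidate His sites
```

Such a mapping makes explicit why the b, c and f positions are the natural targets for His insertion: they face the solvent rather than the dimer interface.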
Previous research suggested that force-induced CC unfolding in shear geometry starts from the points of force application. In order to tune the stability of a heterodimeric CC in shear geometry, I inserted His in the b and f position at the termini of force application (Chapter 4.2). The spacing of His is such that intra-CC His-metal coordination bonds can form to bridge one helical turn within the same helix, but also inter-CC coordination bonds are not generally excluded. Starting with Ni2+ ions, Raman spectroscopy showed that the CC maintained its helical structure and the His residues were able to coordinate Ni2+. Circular dichroism (CD) spectroscopy revealed that the melting temperature of the CC increased by 4 °C in the presence of Ni2+. Using atomic force microscope (AFM)-based single molecule force spectroscopy, the energy landscape parameters of the CC were characterized in the absence and the presence of Ni2+. His-Ni2+ coordination increased the rupture force by ~10 pN, accompanied by a decrease of the dissociation rate constant. To test if this stabilizing effect can be transferred from the single molecule level to the bulk viscoelastic material properties, the CC building block was used as a non-covalent cross-link for star-shaped poly(ethylene glycol) (star-PEG) hydrogels. Shear rheology revealed a 3-fold higher relaxation time in His-Ni2+ coordinating hydrogels compared to the hydrogel without metal ions. This stabilizing effect was fully reversible when using an excess of the metal chelator ethylenediaminetetraacetate (EDTA). The hydrogel properties were further investigated using different metal ions, i.e. Cu2+, Co2+ and Zn2+. Overall, these results suggest that Ni2+, Cu2+ and Co2+ primarily form intra-CC coordination bonds while Zn2+ also participates in inter-CC coordination bonds. This may be a direct result of its different coordination geometry.
Intermolecular His-metal coordination bonds in the terminal regions of the protein building blocks of mussel byssal threads are primarily formed by Zn2+ and were found to be intimately linked to higher-order assembly and self-healing of the thread. In the above example, the contributions of intra-CC and inter-CC His-Zn2+ coordination cannot be disentangled. In Chapter 4.3, I redesigned the CC to prohibit the formation of intra-CC His-Zn2+ coordination bonds, focusing only on inter-CC interactions. Specifically, I inserted His in the solvent-exposed f positions of the CC to focus on the effect of metal-induced higher-order assembly of CC cross-links. Raman and CD spectroscopy revealed that this CC building block forms α-helical Zn2+ cross-linked aggregates. Using this CC as a cross-link for star-PEG hydrogels, I showed that the material properties can be switched from viscoelastic in the absence of Zn2+ to elastic-like in the presence of Zn2+. Moreover, the relaxation time of the hydrogel was tunable over three orders of magnitude when using different Zn2+:His ratios. This tunability is attributed to a progressive transformation of single CC cross-links into His-Zn2+ cross-linked aggregates, with inter-CC His-Zn2+ coordination bonds serving as an additional cross-linking mode.
Rheological characterization of the hydrogels with inter-CC His-Zn2+ coordination raised the question of whether, under shear strain, only the His-Zn2+ coordination bonds between CCs rupture or also the CCs themselves. In general, the number of CC cross-links initially formed in the hydrogel as well as the number of CC cross-links breaking under force remains to be elucidated. In order to probe these questions more deeply and monitor the state of the CC cross-links when force is applied, a fluorescent reporter system based on Förster resonance energy transfer (FRET) was introduced into the CC (Chapter 4.4). For this purpose, the donor-acceptor pair carboxyfluorescein and tetramethylrhodamine was used. The resulting self-reporting CC showed a FRET efficiency of 77% in solution. Using this fluorescently labeled CC as a self-reporting, reversible cross-link in an otherwise covalently cross-linked star-PEG hydrogel enabled the detection of the FRET efficiency change under compression force. This proof-of-principle result sets the stage for implementing the fluorescently labeled CCs as molecular force sensors in non-covalently cross-linked hydrogels.
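The FRET efficiency quoted for the self-reporting CC can be related to the donor-acceptor distance through the standard Förster relation (a textbook formula, not specific to this thesis; $R_0$ denotes the Förster radius of the dye pair):

```latex
E \;=\; \frac{R_0^{6}}{R_0^{6} + r^{6}}
\qquad\Longleftrightarrow\qquad
r \;=\; R_0\left(\frac{1}{E} - 1\right)^{1/6}.
```

For $E = 0.77$ this gives $r \approx 0.82\,R_0$, i.e. a donor-acceptor distance slightly below the Förster radius; changes in $E$ under force therefore report directly on changes in this distance.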
In summary, this thesis highlights that rationally designed CCs are excellent reversibly tunable, self-healing and self-reporting hydrogel cross-links with high application potential in bioengineering and biomedicine. For the first time, I demonstrated that His-metal coordination-based stabilization can be transferred from the single CC level to the bulk material with clear viscoelastic consequences. Insertion of His at specific sequence positions was used to implement a second non-covalent cross-linking mode via intermolecular His-metal coordination. This metal-induced aggregation of the CCs enabled reversible tuning of the hydrogel properties from viscoelastic to elastic-like. As a proof of principle to establish self-reporting CCs as material cross-links, I labeled a CC with a FRET pair. The fluorescently labeled CC acts as a molecular force sensor, and preliminary results suggest that it enables the detection of hydrogel cross-link failure under compression force. In the future, fluorescently labeled CC force sensors will likely be used not only as intelligent cross-links to study the failure of hydrogels but also to investigate cell-matrix interactions in 3D down to the single-molecule level.
We consider the emerging dynamics of a separable continuous time random walk (CTRW) in the case when the random walker is biased by a velocity field in a uniformly growing domain. Concrete examples for such domains include growing biological cells or lipid vesicles, biofilms and tissues, but also macroscopic systems such as expanding aquifers during rainy periods, or the expanding Universe. The CTRW in this study can be subdiffusive, normal diffusive or superdiffusive, including the particular case of a Lévy flight. We first consider the case when the velocity field is absent. In the subdiffusive case, we reveal an interesting time dependence of the kurtosis of the particle probability density function. In particular, for a suitable parameter choice, we find that the propagator, which is fat tailed at short times, may cross over to a Gaussian-like propagator. We subsequently incorporate the effect of the velocity field and derive a bi-fractional diffusion-advection equation encoding the time evolution of the particle distribution. We apply this equation to study the mixing kinetics of two diffusing pulses, whose peaks move towards each other under the action of velocity fields acting in opposite directions. This deterministic motion of the peaks, together with the diffusive spreading of each pulse, tends to increase particle mixing, thereby counteracting the peak separation induced by the domain growth. As a result of this competition, different regimes of mixing arise. In the case of Lévy flights, apart from the non-mixing regime, one has two different mixing regimes in the long-time limit, depending on the exact parameter choice: in one of these regimes, mixing is mainly driven by diffusive spreading, while in the other mixing is controlled by the velocity fields acting on each pulse. Possible implications for encounter-controlled reactions in real systems are discussed.
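A minimal sketch of such a walk can be simulated directly; the exponential growth law a(t) = e^{Ht}, the Pareto waiting-time distribution and all parameter values below are illustrative assumptions and do not reproduce the paper's model choices:

```python
import math
import random

def ctrw_growing_domain(t_max, alpha=0.7, sigma=1.0, H=0.05, seed=0):
    """Sketch of a separable CTRW on a uniformly growing 1D domain.

    Waiting times are Pareto-distributed (exponent alpha < 1 yields
    subdiffusion); jumps are Gaussian with std sigma in *physical* space.
    Between jumps the walker is advected by the uniform growth
    a(t) = exp(H t), i.e. its comoving coordinate y = x / a(t) is frozen.
    Returns (final physical position, final comoving position).
    All parameter values are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    t, y = 0.0, 0.0                                # comoving coordinate
    while True:
        tau = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto waiting time, >= 1
        if t + tau > t_max:
            break
        t += tau
        a = math.exp(H * t)                        # growth factor at jump time
        y += rng.gauss(0.0, sigma) / a             # physical jump -> comoving
    return math.exp(H * t_max) * y, y

if __name__ == "__main__":
    finals = [ctrw_growing_domain(100.0, seed=s)[0] for s in range(200)]
    print("sample mean of final physical positions:", round(sum(finals) / len(finals), 3))
```

Adding a drift term to each jump would correspond to the velocity-field bias discussed in the text; the sketch above covers only the field-free case.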
Paid parental leave schemes have been shown to increase women’s employment rates but decrease their wages in the case of extended leave durations. In view of these potential trade-offs, many countries are discussing the optimal design of parental leave policies. We analyze the impact of a major parental leave reform on mothers’ long-term earnings. The 2007 German parental leave reform replaced a means-tested benefit with a more generous earnings-related benefit that is granted for a shorter period of time. Additionally, a “daddy quota” of two months was introduced. To identify the causal effect of this policy on the long-run earnings of mothers, we use a difference-in-difference approach that compares labor market outcomes of mothers who gave birth just before and right after the reform and nets out seasonal effects by including the year before. Using administrative social security data, we confirm previous findings and show that the average duration of employment interruptions increased for high-income mothers. Nevertheless, we find a positive long-run effect on earnings for mothers in this group. This effect cannot be explained by changes in working hours, observed characteristics, employer stability or fertility patterns. Descriptive evidence suggests that the stronger involvement of fathers, incentivized by the “daddy months”, could have facilitated mothers’ re-entry into the labor market and thereby increased earnings. For mothers with low prior-to-birth earnings, however, we do not find any beneficial labor market effects of this parental leave reform.
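The identification strategy described above can be sketched as a classic 2x2 difference-in-differences on group means: births just after versus just before the cutoff, with the same calendar window one year earlier netting out seasonal effects. All numbers below are synthetic and purely illustrative; nothing here reproduces the paper's data or estimates:

```python
# Minimal 2x2 difference-in-differences sketch in the spirit of the design
# described in the abstract. All earnings values are synthetic.

def did_estimate(y_treat_post, y_treat_pre, y_ctrl_post, y_ctrl_pre):
    """Classic 2x2 difference-in-differences estimator on group means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(y_treat_post) - mean(y_treat_pre)) - (mean(y_ctrl_post) - mean(y_ctrl_pre))

if __name__ == "__main__":
    # long-run earnings (arbitrary units) for four synthetic groups
    reform_year_after  = [20.5, 21.0, 19.8, 20.7]  # birth just after cutoff, reform year
    reform_year_before = [19.0, 18.7, 19.3, 19.1]  # birth just before cutoff, reform year
    prior_year_after   = [19.2, 19.6, 19.0, 19.4]  # same months, year before (seasonal control)
    prior_year_before  = [18.9, 19.1, 18.8, 19.2]
    print("DiD estimate:", round(did_estimate(reform_year_after, reform_year_before,
                                              prior_year_after, prior_year_before), 3))
```

The second difference removes any seasonal gap between the two birth windows that would exist even without the reform, which is exactly the role the year-before comparison plays in the paper's design.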
Sediment Transit Time and Floodplain Storage Dynamics in Alluvial Rivers Revealed by Meteoric 10Be
(2020)
Quantifying the time scales of sediment transport and storage through river systems is fundamental for understanding weathering processes, biogeochemical cycling, and improving watershed management, but measuring sediment transit time is challenging. Here we provide the first systematic test of measuring cosmogenic meteoric Beryllium-10 (10Bem) in the sediment load of a large alluvial river to quantify sediment transit times. We take advantage of a natural experiment in the Rio Bermejo, a lowland alluvial river traversing the east Andean foreland basin in northern Argentina. This river has no tributaries along its trunk channel for nearly 1,300 km downstream from the mountain front. We sampled suspended sediment depth profiles along the channel and measured the concentrations of 10Bem in the chemically extracted grain coatings. We calculated depth-integrated 10Bem concentrations using sediment flux data and found that 10Bem concentrations increase 230% from upstream to downstream, indicating a mean total sediment transit time of 8.4 ± 2.2 kyr. Bulk sediment budget-based estimates of channel belt and fan storage times suggest that the 10Bem tracer records mixing of old and young sediment reservoirs. On a reach scale, 10Bem transit times are shorter where the channel is braided and superelevated above the floodplain, and longer where the channel is incised and meandering, suggesting that transit time is controlled by channel morphodynamics. This is the first systematic application of 10Bem as a sediment transit time tracer and highlights the method's potential for inferring sediment routing and storage dynamics in large river systems.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and the chloroplast gene expression machinery, making most of them essential for the viability of the organism. The regulation of these genes is dominated by translational adjustments. The powerful technique of ribosome profiling was successfully used to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastidial translation and ribosomal pausing sites were addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and light. Chloroplast isolation was found unsuitable for the unbiased analysis of translation in the chloroplast but adequate to identify potential co-translational import. Affinity purification was performed for the small and large ribosomal subunits independently. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study that used enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts to study ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms developed various features to reduce or prevent photodamage. In the long term, photosynthetic complex contents are adjusted for the efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to the acclimation process remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts on a genome-wide scale. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold in the high light from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light exhibited increased translation only of psbA. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition were observed.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing from cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation or "re-membrance" in repatriation, through processes of reunion, empowerment, story-telling and belonging. Besides, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
Nanoporous carbon materials (NCMs) provide the "function" of high specific surface area and thus have large interface area for interactions with surrounding species, which is of particular importance in applications related to adsorption processes. The strength and mechanism of adsorption depend on the pore architecture of the NCMs. In addition, chemical functionalization can be used to induce changes of electron density and/or electron density distribution in the pore walls, thus further modifying the interactions between carbons and guest species. Typical approaches for functionalization of nanoporous materials with regular atomic construction like porous silica, metal-organic frameworks, or zeolites, cannot be applied to NCMs due to their less defined local atomic construction and abundant defects. Therefore, synthetic strategies that offer a higher degree of control over the process of functionalization are needed. Synthetic approaches for covalent functionalization of NCMs, that is, for the incorporation of heteroatoms into the carbon backbone, are critically reviewed with a special focus on strategies following the concept "from molecules to materials." Approaches for coordinative functionalization with metallic species, and the functionalization by nanocomposite formation between pristine carbon materials and heteroatom-containing carbons, are introduced as well. Particular focus is given to the influences of these functionalizations in adsorption-related applications.
The involvement of the two German states in Korea during the 1950s in the context of the Cold War
(2020)
This master's thesis analyzes the background of the involvement of the Federal Republic of Germany (FRG) and the German Democratic Republic (GDR) in Korea during the 1950s in the context of the Cold War. In both Korean states, the Democratic People’s Republic of Korea (DPRK) as well as the Republic of Korea (ROK), the so-called humanitarian aid that was provided to them in the form of medical and economic assistance to help surmount the hardship of the postwar period is remembered with great appreciation to this day. However, critical views on the German engagement in Korea are still relatively hard to find. In this paper, two exemplary cases are studied: the GDR’s city reconstruction project in the North Korean cities of Hamheung and Heungnam and the FRG’s medical assistance to the ROK by means of the West German Red Cross Hospital in Busan. By looking at primary sources like governmental documents, this thesis examines the geopolitical conditions and particular national interests that stood behind the German development and humanitarian aid for the Korean states at that time, thus shedding light on the political goals the two German states pursued, and the benefit they expected to derive from their engagement in Korea. Sources consulted include primary archival materials, secondary sources like monographs, journal articles, contemporary newspaper articles, and interviews with contemporary witnesses.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with a default setup is able to reproduce the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Reductions of the non-orographic and of the orographic gravity wave drag each lead to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January, although the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON realistically represents downward coupling. This coupling is intensified and more realistic in experiments with a reduced gravity wave drag, in particular with reduced non-orographic drag. The tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as the current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including the influence of tropical phenomena, such as the QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. The stratospheric reaction to ENSO events in ICON, on the other hand, is realistic: ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming that is discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to the understanding of the divergent conclusions between model and observational studies.
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on later phases of the process, in practice, experts spend more time in earlier phases, preparing data to make them consistent with the systems' requirements or to improve their models' accuracies. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches to perform specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entity exist in a database, for numerous reasons ranging from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select a wrong hotel, or parcel delivery, where a parcel can get delivered to the wrong address. Identifying the variety of possible data issues to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well-established and covers different aspects of both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place and apply the latter in datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. When applied, data preparation steps are selected only for attributes where there are positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold; first, we consider more domain-specific approaches to improve the quality of values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate per address attribute, e.g., city or country.
To facilitate duplicate detection in applications where gold standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered using state-of-the-art algorithms efficiently and without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then be applied on unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open source code to enable repeatability of our research results and application of our approaches to other datasets.
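The idea of keeping a preparation step only where it improves pair similarities for known duplicates can be sketched as follows; the preparation step (lower-casing plus special-character removal), the similarity measure (token Jaccard) and the example records are illustrative choices, not the ones used in the thesis:

```python
import re

# Sketch: a generic preparation step is kept only if it raises the mean
# similarity of known duplicate pairs. Measure and threshold are illustrative.

def prepare(value):
    """Generic preparation: lower-case and strip non-alphanumeric characters."""
    return re.sub(r"[^a-z0-9 ]", "", value.lower())

def jaccard(a, b):
    """Token-set Jaccard similarity of two strings."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def keep_preparation(duplicate_pairs):
    """Keep the step only if it raises mean similarity of known duplicates."""
    before = sum(jaccard(a, b) for a, b in duplicate_pairs) / len(duplicate_pairs)
    after = sum(jaccard(prepare(a), prepare(b)) for a, b in duplicate_pairs) / len(duplicate_pairs)
    return after > before

if __name__ == "__main__":
    pairs = [("Hotel Adlon, Berlin", "hotel adlon berlin"),
             ("Main St. 5", "main st 5")]
    print("apply generic preparation:", keep_preparation(pairs))
```

In the thesis' terminology this corresponds to the per-attribute selection of generic preparation steps based on their effect on pair similarities; address-specific steps and per-attribute similarity measures would slot into the same selection loop.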
In this paper the use of two different scaffolds in a seminar on the topic of heterocycles is discussed. The students first used both scaffolds (stepped supporting tools and a task navigator) on two tasks and could then choose, for one further task, the scaffold that suited them better. The scaffolds were evaluated in a mixed-methods study by means of questionnaires and a focus group interview. Both scaffolds were assessed as being helpful. However, students who thought they did not need different sorts of tips, as provided by the task navigator, chose the stepped supporting tools. All students reflected on their use of the scaffolds; their choice of one of the two is therefore well-founded. As the reasons for choosing a scaffold are very individual, both types of scaffolds will be provided in future seminars.
We study the experimentally measured ciprofloxacin antibiotic diffusion through a gel-like artificial sputum medium (ASM) mimicking physiological conditions typical for a cystic fibrosis layer, in which regions occupied by Pseudomonas aeruginosa bacteria are present. To quantify the antibiotic diffusion dynamics we employ a phenomenological model using a subdiffusion-absorption equation with a fractional time derivative. This effective equation describes molecular diffusion in a medium structured akin to Thompson's plum-pudding model; here the 'pudding' background represents the ASM and the 'plums' represent the bacterial biofilm. The pudding is a subdiffusion barrier for antibiotic molecules that can affect bacteria found in the plums. For the experimental study we use an interferometric method to determine the time evolution of the amount of antibiotic that has diffused through the biofilm. The theoretical model shows that this function is qualitatively different depending on whether or not absorption of the antibiotic in the biofilm occurs. We show that the process can be divided into three successive stages: (1) antibiotic subdiffusion alone with constant biofilm parameters, (2) subdiffusion and absorption of antibiotic molecules with variable biofilm transport parameters, (3) subdiffusion and absorption in the medium, but with the biofilm parameters constant again. Stage 2 is interpreted as the appearance of an intensive defence build-up of the bacteria against the action of the antibiotic, and in stage 3 the bacteria have likely been inactivated. The times at which the stages change are determined from the experimentally obtained temporal evolution of the amount of antibiotic that has diffused through the ASM with bacteria. Our analysis shows good agreement between experimental and theoretical results and is consistent with the biologically expected biofilm response.
We show that an experimental method to study the temporal evolution of the amount of a substance that has diffused through a biofilm is useful in studying the processes occurring in a biofilm. We also show that the complicated biological process of antibiotic diffusion in a biofilm can be described by a fractional subdiffusion-absorption equation with subdiffusion and absorption parameters that change over time.
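The fractional equation invoked in this abstract can be sketched as follows; this is one common textbook form of a subdiffusion-absorption equation with a Riemann-Liouville fractional time derivative, and the exact formulation and parameters used in the paper may differ:

```latex
\frac{\partial C(x,t)}{\partial t}
  = \frac{\partial^{1-\alpha}}{\partial t^{1-\alpha}}
    \left( D_\alpha \, \frac{\partial^2 C(x,t)}{\partial x^2}
           - \kappa \, C(x,t) \right),
  \qquad 0 < \alpha < 1,
```

where \(C\) is the antibiotic concentration, \(D_\alpha\) the subdiffusion coefficient, \(\kappa\) the absorption rate, and \(\alpha\) the subdiffusion exponent; setting \(\alpha = 1\) recovers ordinary diffusion with absorption.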
Gender-inclusive language has evolved into a much-debated topic in recent years, discussed across disciplines from theoretical linguistics to psycholinguistics, sociology, and economics – and by anyone who uses language.
Studies on German that primarily relied on questionnaires (reviewed in Braun et al. 2005), cloze tests (Klein 1988), and categorisation tasks with picture matching (Irmen & Köhncke 1996) disqualify generically used masculine forms as pseudo-generic – failing in their grammatically prescribed function to include referents of any Gender. Gender-balanced expressions (pair and split forms like Lehrer und Lehrerinnen) make explicit reference to female presence and participation, and thus promote a more equitable interpretation.
Online methods for investigating the processing of Gender-sensitive language are surprisingly rare in research on the phenomenon, except for reaction time measures (Irmen & Köhncke 1996, Irmen & Kaczmarek 2000) and eye-tracking in reading (Irmen & Schumann 2011).
In addition, Gender-neutral language (GNL) has not been the focus of most experiments, and when it was among the stimuli, results were inconclusive (De Backer & De Cuypere 2012) or found such alternatives to be ineffective (resembling masculine generics, Braun et al. 2005), even though guidelines on non-discriminatory language use commonly recommend them.
Gender-neutral (GN) expressions for personal reference in German include
• nominalised participles; nominalisations in general: Interessierte, Lehrende
• collective singulars: Publikum, Kollegium
• compounds (e.g., with a notion of “-person”): Ansprechpersonen, Lehrkräfte
• paraphrases that background a (gendered) subject: e.g., passives, relatives
In a visual world eye-tracking study, the comprehension of plural generics using masculine nouns and GN forms was tested for roles and occupations.
In complex stimulus scenarios, reference had to be established to referent images presented on a screen. At the end of each item, a question was asked in order to (re)identify the image that best matched the referents of the respective setting. Images depicted 1) a single person (protagonist), 2) an all-female group, 3) an all-male group, 4) a mixed-Gender group of female and male members. The group referents were introduced with either a) masculine nouns (die Lehrer), b) female-specific feminine nouns (die Lehrerinnen), or c) one of the three nominal GN variants listed above (die Lehrkräfte).
Results confirm the frequent male bias in masculine forms used as generics, that is, their male-specific interpretation. Furthermore, the stereotypicality of the nouns had an impact on responses. The GN alternatives, which are generally understood to aim at indefinite reference (“marked” for Gender-fair language), proved best suited to elicit mixed-Gender group interpretations. When reference was established with GN terms, an inclusive response was consistently elicited. This was indicated both by eye movements and by response proportions, though to different extents depending on the particular GN noun type. Concepts that abstract away from Gender in their linguistic forms (“neutralising” it) appear to be more inclusive, and are thus better candidates for generic reference than masculines.
Unlike today’s prevailing terrestrial features, the geologic past of Central Asia witnessed marine environments and conditions as well. A vast, shallow sea, known as proto-Paratethys, extended across Eurasia from the Mediterranean Tethys to the Tarim Basin in western China during Cretaceous to Paleogene times. This sea formed about 160 million years ago (during Jurassic times) when the waters of the Tethys Ocean flooded into Eurasia. It drastically retreated to the west and became isolated as the Paratethys during the Late Eocene-Oligocene (ca. 34 Ma).
Having well-constrained timing and paleogeography for the Cretaceous-Paleogene proto-Paratethys sea incursions in Central Asia is essential to properly understand and distinguish the controlling mechanisms and their link to Asian paleoenvironmental and paleoclimatic change. The Cretaceous-Paleogene tectonic evolution of the Pamir and Tibet and their far-field effects play a significant role in the sedimentological and structural evolution of the Central Asian basins and in the evolution of the proto-Paratethys sea fluctuations. Comparing the records of the sea incursions to the tectonic and eustatic events is of paramount importance for revealing the controlling mechanisms behind the sea incursions. However, due to inaccuracies in the dating of rocks (mostly continental rocks and marine rocks with benthic microfossils providing low-resolution biostratigraphic constraints) and conflicting results, there has been no consensus on the timing of the sea incursions, and the interpretation of their records has been in question. Here, we present a new chronostratigraphic framework based on biostratigraphy and magnetostratigraphy as well as a detailed paleoenvironmental analysis for the Cretaceous and Paleogene proto-Paratethys Sea incursions in the Tajik and Tarim basins in Central Asia. This enables us to identify the major drivers of marine fluctuations and their potential consequences for regional and global climate, particularly Asian aridification and global carbon cycle perturbations such as the Paleocene-Eocene Thermal Maximum (PETM). To estimate the paleogeographic evolution of the proto-Paratethys Sea, the refined age constraints and detailed paleoenvironmental interpretations are combined with successive paleogeographic maps. Regional coastlines and depositional environments during the Cretaceous-Paleogene sea advances and retreats were drawn based on the results of this thesis and integrated with existing literature to generate new paleogeographic maps.
Before its final westward retreat in the Eocene, a total of six Cretaceous and Paleogene major sea incursions have been distinguished from the sedimentary records of the Tajik and Tarim basins in Central Asia. All have been studied and documented here.
We identify the presence of marine conditions already in the Early Cretaceous in the western Tajik Basin, followed by the Cenomanian (ca. 100 Ma) and Santonian (ca. 86 Ma) major marine incursions far into the eastern Tajik and Tarim basins separated by a Turonian-Coniacian (ca. 92-86 Ma) regression. Basin-wide tectonic subsidence analyses imply that the Early Cretaceous invasion of the sea into the Tajik Basin is related to increased Pamir tectonism (at ca. 130 – 90 Ma) in a retro-arc basin setting inferred to be linked to collision and subduction. This tectonic event mainly governed the Cenomanian (ca. 100 Ma) sea incursion in conjunction with a coeval global eustatic high resulting in the maximum geographic extent of the sea. The following Turonian-Coniacian (ca. 92-86 Ma) major regression, driven by eustasy, coincides with a sharp slowdown in tectonic subsidence related to a regime change in Pamir tectonism from compression to extension. The Santonian (ca. 86 Ma) major sea incursion was more likely controlled dominantly by eustasy as also evidenced by the coeval fluctuations in the west Siberian Basin. During the early Maastrichtian, the global Late Cretaceous cooling is inferred from the disappearance of mollusk-rich limestones and the dominance of bryozoan-rich and echinoderm-rich limestones in the Tajik Basin documenting the first evidence for the Late Cretaceous cooling event in Central Asia.
Following the last Cretaceous sea incursion, a major regional restriction event, marked by the exceptionally thick (≤ 400 m) shelf evaporites is assigned a Danian-Selandian age (ca. 63-59 Ma). This is followed by the largest recorded proto-Paratethys sea incursion with a transgression estimated as early Thanetian (ca. 59-57 Ma) and a regression within the Ypresian (ca. 53-52 Ma). The transgression of the next incursion is now constrained as early Lutetian (ca. 47-46 Ma), whereas its regression is constrained as late Lutetian (ca. 41 Ma) and is associated with a drastic increase in both tectonic subsidence and basin infilling. The age of the final and least pronounced sea incursion restricted to the westernmost margin of the Tarim Basin is assigned as Bartonian–Priabonian (ca. 39.7-36.7 Ma). We interpret the long-term westward retreat of the proto-Paratethys Sea starting at ca. 41 Ma to be associated with far-field tectonic effects of the Indo-Asia collision and Pamir/Tibetan plateau uplift. Short-term eustatic sea level transgressions are superimposed on this long-term regression and seem coeval with the transgression events in the other northern Peri-Tethyan sedimentary provinces for the 1st and 2nd Paleogene sea incursions. However, the last Paleogene sea incursion is interpreted as related to tectonism. The transgressive and regressive intervals of the proto-Paratethys Sea correlate well with the reported humid and arid phases, respectively in the Qaidam and Xining basins, thus demonstrating the role of the proto-Paratethys Sea as an important moisture source for the Asian interior and its regression as a contributor to Asian aridification.
We lastly study the mechanics, relative contribution and preservation efficiency of ancient epicontinental seas as carbon sinks with new and existing data, using organic-rich (sapropel) deposits dated to the PETM from the extensive epicontinental proto-Paratethys and West Siberian seas. We estimate ca. 1390±230 Gt of organic C burial, a substantial amount compared to the previously estimated global total excess organic C burial (ca. 1700-2900 Gt), focused in the proto-Paratethys and West Siberian seas alone. We also speculate that enhanced organic carbon burial over much of the proto-Paratethys (and later Paratethys) basin (during the deposition of the Kuma Formation and the Maikop series, respectively) may have contributed substantially to the drawdown of atmospheric carbon dioxide before and during the EOT cooling and the glaciation of Antarctica. For past periods with smaller epicontinental seas, the effectiveness of this negative carbon cycle feedback was arguably diminished, and the same likely applies to the present day.
With his September 2015 speech “Breaking the tragedy of the horizon”, the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 shows a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. Therefore, the subsequent part of this dissertation is concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate impacts of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
While costs of climate action have been explored at great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches on how to integrate transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
Engineering biotechnological microorganisms to use methanol as a feedstock for bioproduction is a major goal for the synthetic metabolism community. Here, we aim to redesign the natural serine cycle for implementation in E. coli. We propose the homoserine cycle, relying on two promiscuous formaldehyde aldolase reactions, as a superior pathway design. The homoserine cycle is expected to outperform the serine cycle and its variants with respect to biomass yield, thermodynamic favorability, and integration with host endogenous metabolism. Even as compared to the RuMP cycle, the most efficient naturally occurring methanol assimilation route, the homoserine cycle is expected to support higher yields of a wide array of products. We test the in vivo feasibility of the homoserine cycle by constructing several E. coli gene deletion strains whose growth is coupled to the activity of different pathway segments. Using this approach, we demonstrate that all required promiscuous enzymes are active enough to enable growth of the auxotrophic strains. Our findings thus identify a novel metabolic solution that opens the way to an optimized methylotrophic platform.
In this study we examine the tonal organization of a series of recordings of liturgical chants, sung in 1966 by the Georgian master singer Artem Erkomaishvili. This dataset is the oldest corpus of Georgian chants from which the time-synchronous F0-trajectories for all three voices have been reliably determined (Müller et al. 2017). It is therefore of outstanding importance for the understanding of the tuning principles of traditional Georgian vocal music.
The aim of the present study is to use various computational methods to analyze what these recordings can contribute to the ongoing scientific dispute about traditional Georgian tuning systems. The starting point for the present analysis is the re-release of the original audio data together with estimated fundamental frequency (F0) trajectories for each of the three voices, beat annotations, and digital scores (Rosenzweig et al. 2020). We present synoptic models for the pitch and the harmonic interval distributions, which are the first such models for which the complete Erkomaishvili dataset was used. We show that these distributions can be expressed very compactly as Gaussian mixture models, anchored on discrete sets of pitch or interval values, respectively. As part of our study we demonstrate that these pitch values, which we refer to as scale pitches, and which are determined as the mean values of the Gaussian mixture elements, define the scale degrees of the melodic sound scales which form the skeleton of Artem Erkomaishvili’s intonation. The observation of consistent pitch bending of notes in melodic phrases, which appear in identical form in a group of chants, as well as the observation of harmonically driven intonation adjustments, which are clearly documented for all pure harmonic intervals, demonstrate that Artem Erkomaishvili intentionally deviates from the scale pitch skeleton quite freely. As a central result of our study, we show that this melodic freedom is always constrained by the attracting influence of the scale pitches. Deviations of the F0-values of individual note events from the scale pitches at one instant are compensated for in subsequent melodic steps. This suggests a deviation-compensation mechanism at the core of Artem Erkomaishvili’s melody generation, which clearly honors the scales but still allows for a large degree of melodic flexibility.
This model, which summarizes all partial aspects of our analysis, is consistent with the melodic scale models derived from the observed pitch distributions, as well as with the melodic and harmonic interval distributions. Beyond these tangible results, we believe that our work has general implications for the determination of tuning models from audio data, in particular for non-tempered music.
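As an illustration of the kind of mixture model described in this abstract, the sketch below fits a two-component Gaussian mixture to synthetic pitch data and reads off the component means as “scale pitches”. All numbers are invented for the example – nothing here is taken from the Erkomaishvili dataset – and `scikit-learn` is assumed to be available.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic "pitch" data in cents: two scale degrees at 0 and 200 cents,
# each sung with some intonation spread. Values are illustrative only.
rng = np.random.default_rng(0)
pitches = np.concatenate([
    rng.normal(0.0, 15.0, 500),    # notes around scale degree 1
    rng.normal(200.0, 15.0, 500),  # notes around scale degree 2
]).reshape(-1, 1)

# Fit a two-component Gaussian mixture; the component means play the
# role of the "scale pitches" described in the abstract.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pitches)
scale_pitches = np.sort(gmm.means_.ravel())
print(scale_pitches)
```

With real data the number of components would be chosen per scale, and the fitted means then serve as the skeleton against which note-level pitch deviations are measured.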
Formaldehyde is a highly reactive compound that participates in multiple spontaneous reactions, but these are mostly deleterious and damage cellular components. In contrast, the spontaneous condensation of formaldehyde with tetrahydrofolate (THF) has been proposed to contribute to the assimilation of this intermediate during growth on C1 carbon sources such as methanol. However, the in vivo rate of this condensation reaction is unknown and its possible contribution to growth remains elusive. Here, we used microbial platforms to assess the rate of this condensation in the cellular environment. We constructed Escherichia coli strains lacking the enzymes that naturally produce 5,10-methylene-THF. These strains were able to grow on minimal medium only when equipped with a sarcosine (N-methyl-glycine) oxidation pathway that sustained a high cellular concentration of formaldehyde, which spontaneously reacts with THF to produce 5,10-methylene-THF. We used flux balance analysis to derive the rate of the spontaneous condensation from the observed growth rate. According to this, we calculated that a microorganism obtaining its entire biomass via the spontaneous condensation of formaldehyde with THF would have a doubling time of more than three weeks. Hence, this spontaneous reaction is unlikely to serve as an effective route for formaldehyde assimilation.
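The step from an inferred growth rate to a doubling time in the abstract above is the standard exponential-growth relation t_d = ln(2)/μ. A minimal sketch follows; the rate value used is hypothetical, chosen only to illustrate how a small μ yields a doubling time on the order of three weeks:

```python
import math

def doubling_time_days(mu_per_hour: float) -> float:
    """Doubling time t_d = ln(2) / mu for exponential growth, in days."""
    return math.log(2.0) / mu_per_hour / 24.0

# A hypothetical growth rate of 0.0014 per hour corresponds to a
# doubling time of about 20.6 days, i.e. roughly three weeks.
print(round(doubling_time_days(0.0014), 1))
```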
Current contestations of the liberal international order stand in notable contrast with the earlier rise of international law during the post-cold war period. As Krieger and Liese argue, this situation calls for assessment of the type of change that is currently observed, i.e. norm change (Wandel) or a more fundamental transformation of international law – a metamorphosis (Verwandlung)? To address this question, this paper details the bi-focal approach to norms in order to reflect and take account of the complex interrelation between fact-based and value-based conceptions of norms. The paper is organised in three sections. The first section presents three axioms underlying the conceptual framework to study norm(ative) change which are visualised by a triangular operation to analyse this change in relation with practices and norms. The second section recalls three key interests that have guided IR norms research after the return to norms in the late 1980s. They include, first, allocating change in and through practice, second, identifying behavioural change with reference to norm- following, and third, identifying norm(ative) change with reference to discursive practice. The third section presents the two analytical tools of the conceptual frame, namely, the norm-typology and the cycle-grid model. It also indicates how to apply these tools with reference to illustrative case scenarios. The conclusion recalls the key elements of the conceptual framework for research on norm(ative) change in international relations in light of the challenge of establishing sustainable normativity in the global order.
The WTO’s Crisis
(2020)
The perception of the WTO is currently one of an organisation in crisis. Yet appraisals vary regarding its extent and seriousness: Is it merely a rough patch or are we standing on the edge of destruction? The article traces developments inside as well as outside the WTO in order to assess the magnitude of the crisis. It argues that while certain developments inside the organisation would, taken together, already warrant serious attention, it is only in combination with developments taking place outside the WTO that the two strands unfold their full crisis potential. The overall situation leaves the WTO in a difficult position, as it is currently unable to adapt to these challenges, while keeping calm and carrying on might similarly deepen the crisis. While states might improve and further develop their trade relations in bi- and plurilateral agreements, it is only the WTO that reflects and stands for the multilateral post-(cold-)war order.
The guarantee of judicial independence is undoubtedly one of the most important institutional design features of international courts and tribunals. An independence deficit can adversely impact a court’s authority, create a crisis of legitimacy, and undermine the very effectiveness of an international court or tribunal. It can hardly be denied that for an international court to be considered legitimate, a basic degree of independence is a must. An independent judiciary is a precondition to the fair and just resolution of legal disputes. In the context of interstate dispute settlement where the jurisdiction of courts is based on the principle of consent, in the absence of a basic degree of judicial independence, states may not be willing to submit to the jurisdiction of international courts. Comparing and contrasting the International Court of Justice and the Appellate Body of the World Trade Organisation, I assess whether those international judicial mechanisms possess the basic degree of independence required for a court to be able to maintain its credibility so that it can continue to perform its core function of adjudicating interstate disputes. With both those interstate adjudicative bodies constituting the two leading international courts in terms of participation and the sheer number of cases decided, much may be learned from comparing them. I argue there is a case for bolstering the independence of the ICJ; and without immediate reforms to the Appellate Body’s institutional design, its recent demise may become permanent. I conclude that if a basic degree of judicial independence cannot be guaranteed, it is preferable to let a court vanish for a while than to maintain a significantly deficient one.
For the United States the ‘international law of global security’ is, in a unique sense, synonymous with the entire project of constructing global legal order. Uniquely preponderant power enjoyed since the end of the Second World War has allowed US preferences to manifest not merely in specific rules and regimes, but in purposive development of the entire structure of global legal order to favour American security interests. Perceptions of a recent decline in this order now find expression in advocacy for a ‘liberal’ or ‘rules-based’ international order, as the claimed foundation for global prosperity and security. This working paper seeks to map out the parameters of US contributions to the global security order by uncovering the strategic and political foundations of its engagement with the international law of global security. The paper begins by reflecting on competing US conceptions of the relationship between national security and global order as they evolved across the twentieth century. The focus then turns to three significant trends defining the contemporary field. First are US attitudes toward multilateral institutions and global security, and the ongoing contest between beliefs that they are mutually reinforcing versus beliefs that US security and global institutions sit in zero-sum opposition. Second is the impact of the generational ‘War on Terror’, which has yielded more permissive interpretation and development of laws governing the global use of violence. The final trend is that towards competitive geopolitical interests restructuring international law, which are evident across diverse areas ranging from global economics, to cybersecurity, to the fragmentation of global order into spheres of influence. 
Looking ahead, a confluence of rising geopolitical competitors with divergent legal conceptions, and conflicted domestic support for the legitimacy and desirability of US global leadership, emerge as leading forces already reshaping the global security order.
Starch and Glycogen Analyses
(2020)
For complex carbohydrates, such as glycogen and starch, various analytical methods and techniques exist that allow a detailed characterization of these storage carbohydrates. In this article, we give a brief overview of the most frequently used methods, techniques, and results. Furthermore, we give insights into the isolation, purification, and fragmentation of both starch and glycogen. An overview of the different structural levels of the glucans is given and the corresponding analytical techniques are discussed. Moreover, future perspectives on the analytical needs and the challenges of currently developing scientific questions are included.
Gold at the nanoscale
(2020)
In this cumulative dissertation, I want to present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have evolved as promising components for light based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more.
This work exhibits the articles I authored or co-authored in my time as PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles, since optical excitation always generates heat. This heat can induce a change in the optical properties, but mechanical changes up to melting can also occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles’ breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles’ temperature. Particle melting was investigated with surface-enhanced Raman spectroscopy and SEM imaging, demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. In this way, readers without specialist knowledge also get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems’ permittivities are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
Economic impact of clinical decision support interventions based on electronic health records
(2020)
Background
Unnecessary healthcare utilization, non-adherence to current clinical guidelines, and insufficient personalized care are perpetual challenges and remain major potential cost drivers for healthcare systems around the world. Implementing decision support systems in clinical care promises to improve quality of care and thereby to reduce healthcare expenditure substantially. In this article, we evaluate the economic impact of clinical decision support (CDS) interventions based on electronic health records (EHR).
Methods
We searched for studies published after 2014 using MEDLINE, CENTRAL, WEB OF SCIENCE, EBSCO, and TUFTS CEA registry databases that encompass an economic evaluation or consider cost outcome measures of EHR based CDS interventions. Thereupon, we identified best practice application areas and categorized the investigated interventions according to an existing taxonomy of front-end CDS tools.
Results and discussion
Twenty-seven studies are investigated in this review. Of those, twenty-two studies indicate a reduction of healthcare expenditure after implementing an EHR based CDS system, especially in prevalent application areas such as unnecessary laboratory testing, duplicate order entry, efficient transfusion practice, or reduction of antibiotic prescriptions. On the contrary, order facilitators and undiscovered malfunctions proved to be threats and could introduce new cost drivers in healthcare. While high upfront and maintenance costs of CDS systems are a worldwide implementation barrier, most studies do not consider implementation cost. Finally, four included economic evaluation studies report mixed monetary outcome results and thus highlight the importance of further high-quality economic evaluations for these CDS systems.
Conclusion
Current research studies lack consideration of comparative cost-outcome metrics as well as detailed cost components in their analyses. Nonetheless, the positive economic impact of EHR based CDS interventions is highly promising, especially with regard to reducing waste in healthcare.
Perovskite solar cells have become one of the most studied systems in the quest for new, cheap, and efficient solar cell materials. Within a decade, device efficiencies have risen to >25% in single-junction devices and >29% in tandem devices on top of silicon. This rapid improvement was in many ways fortunate, as, e.g., the energy levels of commonly used halide perovskites are compatible with already existing materials from other photovoltaic technologies such as dye-sensitized or organic solar cells. Despite this rapid success, the fundamental working principles must be understood to allow concerted further improvements. This thesis focuses on a comprehensive understanding of recombination processes in functioning devices.
First, the impact of the energy level alignment between the perovskite and the fullerene-based electron transport layer is investigated. This controversial topic is comprehensively addressed, and recombination is mitigated by reducing the energy difference between the perovskite conduction band minimum and the LUMO of the fullerene. Additionally, an insulating blocking layer is introduced, which is even more effective in reducing this recombination without compromising carrier collection and thus efficiency. Despite the rapid efficiency development (certified efficiencies have broken through the 20% ceiling) and the thousands of researchers working on perovskite-based optoelectronic devices, reliable protocols on how to reach these efficiencies are lacking. Having established robust methods for >20% devices, while keeping track of possible pitfalls, a detailed description of the fabrication of perovskite solar cells at the highest efficiency level (>20%) is provided. The fabrication of low-temperature p-i-n structured devices is described, commenting on important factors such as practical experience, processing atmosphere and temperature, material purity, and solution age. Analogous to reliable fabrication methods, a method to identify recombination losses is needed to further improve efficiencies. Thus, absolute photoluminescence is identified as a direct way to quantify the quasi-Fermi level splitting (QFLS) of the perovskite absorber (1.21 eV) and the interfacial recombination losses imposed by the transport layers, which reduce it to ~1.1 eV. By implementing very thin interlayers at both the p- and n-interface (PFN-P2 and LiF, respectively), these losses are suppressed, enabling a VOC of up to 1.17 V. By optimizing the device dimensions and the bandgap, 20% devices with 1 cm² active area are demonstrated. Another important consideration is the solar cells' stability when subjected to field-relevant stressors during operation.
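The loss accounting quoted above (an absorber QFLS of 1.21 eV, transport layers reducing it to ~1.1 eV, and a VOC of up to 1.17 V after interlayer optimization) amounts to simple arithmetic; a minimal sketch, where the function name and the convention of comparing q·VOC directly with the QFLS are illustrative and not taken from the thesis:

```python
# Hedged sketch: quantify nonradiative interface losses as the difference
# between the quasi-Fermi level splitting (QFLS) of the absorber and q*VOC
# of the full device. One electron charge times 1 V equals 1 eV, so a VOC
# in volts can be compared with a QFLS in eV directly.
# The function name is illustrative, not from the thesis.

def interface_loss_eV(qfls_absorber_eV: float, voc_V: float) -> float:
    """Nonradiative loss attributable to interfaces/contacts, in eV."""
    return qfls_absorber_eV - voc_V

# Figures quoted in the abstract:
loss_transport_layers = interface_loss_eV(1.21, 1.10)  # QFLS loss ~0.11 eV
loss_with_interlayers = interface_loss_eV(1.21, 1.17)  # remaining ~0.04 eV
```

Under these assumptions, the thin PFN-P2 and LiF interlayers recover most of the ~0.11 eV that the bare transport-layer interfaces cost.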
In particular, these are heat, light, bias, or a combination thereof. Perovskite layers, especially those incorporating organic cations, have been shown to degrade when subjected to these stressors. Keeping in mind that several interlayers have been successfully used to mitigate recombination losses, a family of perfluorinated self-assembled monolayers (X-PFCn, where X denotes I/Br and n = 7-12) is introduced as interlayers at the n-interface. Indeed, they reduce interfacial recombination losses, enabling device efficiencies up to 21.3%. Even more importantly, they improve the stability of the devices: solar cells with IPFC10 are stable over 3000 h of storage in ambient air and withstand a harsh 250 h of maximum power point (MPP) tracking at 85 °C without appreciable efficiency losses. To advance further and improve device efficiencies, a sound understanding of the photophysics of a device is imperative. Many experimental observations in recent years have, however, drawn an inconclusive picture, often suffering from technical or physical impediments that disguise, e.g., capacitive discharge as recombination dynamics. To circumvent these obstacles, fully operational, highly efficient perovskite solar cells are investigated by a combination of multiple optical and optoelectronic probes, allowing a conclusive picture of the recombination dynamics in operation to be drawn. Supported by drift-diffusion simulations, the device recombination dynamics can be fully described by a combination of first-, second-, and third-order recombination, and JV curves as well as luminescence efficiencies over multiple illumination intensities are well described within the model. On this basis, steady-state carrier densities, effective recombination constants, densities of states, and effective masses are calculated, putting the devices at the brink of the radiative regime. Moreover, a comprehensive review of recombination in state-of-the-art devices is given, highlighting the importance of interfaces in nonradiative recombination.
Different strategies to assess these losses are discussed, before successful strategies to reduce interfacial recombination are emphasized and the necessary steps to further improve device efficiency and stability are pointed out. Overall, the main findings represent an advancement in understanding the loss mechanisms in highly efficient solar cells. Several reliable optoelectronic techniques are used, and interfacial losses are found to be of critical importance for both efficiency and stability. To address the interfaces, several interlayers are introduced that mitigate recombination losses and degradation.
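The rate model invoked above, a combination of first-, second-, and third-order recombination, can be sketched as follows; the rate constants below are illustrative placeholders, not fitted values from the thesis:

```python
# Minimal sketch of a total recombination rate with first-, second- and
# third-order terms, R(n) = k1*n + k2*n^2 + k3*n^3 (trap-assisted,
# radiative, and Auger recombination, respectively).
# Rate constants are illustrative placeholders, not fitted values.

K1 = 1e6     # 1/s     (first order, trap-assisted)
K2 = 5e-11   # cm^3/s  (second order, radiative)
K3 = 1e-28   # cm^6/s  (third order, Auger)

def recombination_rate(n: float) -> float:
    """Total recombination rate in cm^-3 s^-1 for carrier density n (cm^-3)."""
    return K1 * n + K2 * n**2 + K3 * n**3

def dominant_order(n: float) -> int:
    """Return which recombination order dominates at carrier density n."""
    terms = {1: K1 * n, 2: K2 * n**2, 3: K3 * n**3}
    return max(terms, key=terms.get)
```

With placeholder constants of this kind, the first-order term dominates at low carrier densities and the third-order term at high densities, which is the qualitative behaviour that intensity-dependent JV and luminescence measurements probe.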
The field of gamma-ray astronomy opened a new window into the non-thermal universe that allows studying the acceleration sites of cosmic rays and the role of cosmic rays in evolutionary processes in galaxies. The detection of almost one hundred Galactic very-high-energy (VHE; 0.1–100 TeV) gamma-ray sources in the Milky Way demonstrates that particle acceleration up to tens of TeV is a common phenomenon. Furthermore, the detection of VHE gamma rays from other galaxies has confirmed that cosmic rays are not exclusively accelerated in the Milky Way. The rapid development of gamma-ray astronomy in the past two decades has led to a transition from the detection and study of individual sources to source population studies. To answer the question of whether the VHE gamma-ray source population of the Milky Way is unique, observations of galaxies in which individual sources can be resolved are required. Such galaxies are the Magellanic Clouds, two satellite galaxies of the Milky Way, which have been surveyed by the H.E.S.S. experiment over the last decade. In this thesis, data from a total of 450 hours of H.E.S.S. observations towards the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are presented. During the analysis of the data sets, special emphasis is put on the evaluation of the systematic uncertainties of the experiment in order to ensure an unbiased flux estimation of the potential VHE gamma-ray sources of the Magellanic Clouds. A detailed analysis of the survey data revealed the detection of the gamma-ray binary LMC P3, the most powerful gamma-ray binary known so far, which is located in the LMC and thus increases the number of known VHE gamma-ray sources in the LMC to four. No other VHE gamma-ray source is detected in the Magellanic Clouds, and integral flux upper limits are estimated. These flux upper limits are used to perform a source population study based on known VHE source classes and existing multi-wavelength catalogues.
A comparison of the source populations of the Magellanic Clouds and the Milky Way revealed that no other source in the Magellanic Clouds is as bright as the most luminous VHE gamma-ray source in the LMC, the pulsar wind nebula N 157B, and that one-third of the source population of the Magellanic Clouds is less luminous than the other known VHE gamma-ray sources in the LMC. Only for a couple of sources are luminosity levels constrained that correspond to Galactic VHE sources more than one order of magnitude fainter than the sources detected in the LMC. Based on the flux upper limits, differences between the TeV source populations of the Magellanic Clouds and the Milky Way, as well as the importance of the source environments, are discussed.
Abiotic stresses cause oxidative damage in plants. Here, we demonstrate that foliar application of an extract from the seaweed Ascophyllum nodosum, SuperFifty (SF), largely prevents paraquat (PQ)-induced oxidative stress in Arabidopsis thaliana. While PQ-stressed plants develop necrotic lesions, plants pre-treated with SF (i.e., primed plants) were unaffected by PQ. Transcriptome analysis revealed induction of reactive oxygen species (ROS) marker genes, genes involved in ROS-induced programmed cell death, and autophagy-related genes after PQ treatment. These changes did not occur in PQ-stressed plants primed with SF. In contrast, upregulation of several carbohydrate metabolism genes, growth, and hormone signaling as well as antioxidant-related genes were specific to SF-primed plants. Metabolomic analyses revealed accumulation of the stress-protective metabolite maltose and the tricarboxylic acid cycle intermediates fumarate and malate in SF-primed plants. Lipidome analysis indicated that those lipids associated with oxidative stress-induced cell death and chloroplast degradation, such as triacylglycerols (TAGs), declined upon SF priming. Our study demonstrated that SF confers tolerance to PQ-induced oxidative stress in A. thaliana, an effect achieved by modulating a range of processes at the transcriptomic, metabolic, and lipid levels.