The semiarid northeast of Brazil is one of the most densely populated dryland regions in the world and recurrently affected by severe droughts. Thus, reliable seasonal forecasts of streamflow and reservoir storage are of high value for water managers. Such forecasts can be generated by applying either hydrological models representing underlying processes or statistical relationships exploiting correlations among meteorological and hydrological variables. This work evaluates and compares the performances of seasonal reservoir storage forecasts derived by a process-based hydrological model and a statistical approach.
Driven by observations, both models achieve similar simulation accuracies. In a hindcast experiment, however, the accuracy of estimating regional reservoir storages was considerably lower with the process-based hydrological model, whereas the resolution and reliability of drought event predictions were similar for both approaches. Further investigations into the deficiencies of the process-based model revealed a significant influence of antecedent wetness conditions and a higher sensitivity of model prediction performance to rainfall forecast quality.
Within the scope of this study, the statistical model proved to be the more straightforward approach for predictions of reservoir level and drought events at regionally and monthly aggregated scales. However, for forecasts at finer scales of space and time or for the investigation of underlying processes, the costly initialisation and application of a process-based model can be worthwhile. Furthermore, the application of innovative data products, such as remote sensing data, and operational model correction methods, like data assimilation, may allow for an enhanced exploitation of the advanced capabilities of process-based hydrological models.
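The "resolution" and "reliability" of the drought event predictions are standard components of probabilistic forecast verification; as an illustration (not the study's own code or data), the Murphy decomposition of the Brier score can be sketched in Python:

```python
def brier_components(probs, outcomes, n_bins=10):
    """Murphy decomposition of the Brier score for probabilistic event
    forecasts: BS = reliability - resolution + uncertainty.
    `probs` are forecast probabilities in [0, 1], `outcomes` are 0/1."""
    n = len(probs)
    base_rate = sum(outcomes) / n
    uncertainty = base_rate * (1 - base_rate)
    reliability = resolution = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, p in enumerate(probs)
               if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if not idx:
            continue
        p_mean = sum(probs[i] for i in idx) / len(idx)
        o_mean = sum(outcomes[i] for i in idx) / len(idx)
        reliability += len(idx) / n * (p_mean - o_mean) ** 2
        resolution += len(idx) / n * (o_mean - base_rate) ** 2
    return reliability, resolution, uncertainty

# A perfectly reliable, fully resolved forecast: reliability = 0,
# resolution equals the climatological uncertainty.
print(brier_components([0.0] * 5 + [1.0] * 5, [0] * 5 + [1] * 5))
# → (0.0, 0.25, 0.25)
```

A skilful forecast has a small reliability term (good calibration) and a resolution term close to the uncertainty term.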
The aim of this thesis is to demonstrate the potential of Latin American telenovelas as audiovisual media for teaching Spanish as a foreign language, and their suitability for fostering functional communicative competence, intercultural communicative competence, and media literacy. The focus of this study lies on Latin American Spanish and Latin American culture. To this end, selected scenes from the world's most popular Colombian telenovela, Yo soy Betty, la fea, are analyzed. Two scenes from working life and two scenes from private life are examined with regard to functional communicative competence. Furthermore, the cultural elements in this telenovela are described in the course of the analysis. Finally, the telenovela is examined with a focus on media literacy, that is, on the characteristics and conventions of the telenovela genre.
Solar wind observations show that geomagnetic storms are mainly driven by interplanetary coronal mass ejections (ICMEs) and corotating or stream interaction regions (C/SIRs). We present a binary classifier that assigns one of these drivers to 7,546 storms between 1930 and 2015 using ground-based geomagnetic field observations only. The input data consist of the long-term stable Hourly Magnetospheric Currents index alongside the corresponding midlatitude geomagnetic observatory time series. This data set provides comprehensive information on the global storm-time magnetic disturbance field, particularly its spatial variability, over eight solar cycles. For the first time, we use this information statistically for automated storm driver identification. Our supervised classification model significantly outperforms unskilled baseline models (78% accuracy, with 26% of ICMEs and 19% of C/SIRs misidentified) and delivers plausible driver occurrences with regard to storm intensity and solar cycle phase. Our results can readily be used to advance related studies fundamental to space weather research, for example, studies connecting galactic cosmic ray modulation and geomagnetic disturbances. They are fully reproducible by means of the underlying open-source software (Pick, 2019, http://doi.org/10.5880/GFZ.2.3.2019.003).
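Per-class misidentification rates such as the 26%/19% quoted above follow directly from a confusion matrix; a minimal sketch with illustrative labels (not the study's data or model):

```python
def driver_metrics(y_true, y_pred, classes=("ICME", "C/SIR")):
    """Overall accuracy and, for each driver class, the fraction of
    storms of that class assigned to the other driver."""
    pairs = list(zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    miss = {}
    for cls in classes:
        cls_pairs = [(t, p) for t, p in pairs if t == cls]
        miss[cls] = sum(t != p for t, p in cls_pairs) / len(cls_pairs)
    return accuracy, miss

# Illustrative true and predicted driver labels for six storms.
y_true = ["ICME", "ICME", "C/SIR", "C/SIR", "ICME", "C/SIR"]
y_pred = ["ICME", "C/SIR", "C/SIR", "C/SIR", "ICME", "ICME"]
acc, miss = driver_metrics(y_true, y_pred)
print(acc, miss)  # overall accuracy and per-driver misidentification rates
```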
Synchronization – the adjustment of rhythms among coupled self-oscillatory systems – is a fascinating dynamical phenomenon found in many biological, social, and technical systems.
The present thesis deals with synchronization in finite ensembles of weakly coupled self-sustained oscillators with distributed frequencies.
The standard model for the description of this collective phenomenon is the Kuramoto model – partly due to its analytical tractability in the thermodynamic limit of infinitely many oscillators. Similar to a phase transition in the thermodynamic limit, an order parameter indicates the transition from incoherence to a partially synchronized state. In the latter, a part of the oscillators rotates at a common frequency. In the finite case, fluctuations occur, originating from the quenched noise of the finite natural frequency sample.
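For orientation, the Kuramoto dynamics and the standard order parameter can be sketched as follows (the parameters, the Gaussian frequency sample, and the simple Euler integrator are illustrative choices, not those of the thesis):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Modulus r of the complex mean field; r ~ 0 means incoherence,
    r ~ 1 full synchronization."""
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
N = 200                                 # an intermediate ensemble
theta = rng.uniform(0, 2 * np.pi, N)    # random initial phases
omega = rng.standard_normal(N)          # a finite natural frequency sample
K = 4.0                                 # coupling above the sync threshold
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K, dt=0.01)
print(order_parameter(theta))
```

With the coupling well above threshold, a substantial part of the oscillators locks to the collective mode and r settles clearly away from the incoherent value of order N^(-1/2).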
We study intermediate ensembles of a few hundred oscillators, in which fluctuations are comparatively strong but which also allow for a comparison with frequency distributions in the infinite limit.
First, we define an alternative order parameter to indicate a collective mode in the finite case. Then we test how the degree of synchronization and the mean rotation frequency of the collective mode depend on different sample characteristics for different coupling strengths.
We find, first numerically, that the degree of synchronization depends strongly on the form (quantified by kurtosis) of the natural frequency sample, and that the rotation frequency of the collective mode depends on the asymmetry (quantified by skewness) of the sample. Both findings are verified in the infinite limit.
These findings allow us to better understand and generalize observations of other authors. Somewhat apart from the main line of thought, we also find an analytical expression for the volume contraction in phase space.
The second part of this thesis concentrates on an ordering effect of the finite-size fluctuations. In the infinite limit, the oscillators separate into coherent and incoherent, i.e. ordered and disordered, oscillators. In finite ensembles, finite-size fluctuations can generate additional order among the asynchronous oscillators. The basic principle – noise-induced synchronization – is known from several recent papers. Among coupled oscillators, phases are pushed together by the order parameter fluctuations, as we show directly on the one hand and, on the other, quantify with a synchronization measure from directional statistics between pairs of passive oscillators.
We determine the dependence of this synchronization measure on the ratio of the pairwise natural frequency difference to the variance of the order parameter fluctuations. We find good agreement with a simple analytical model in which we replace the deterministic fluctuations of the order parameter with white noise.
With the growth of information technology, patient attitudes are shifting – away from passively receiving care towards actively taking responsibility for their well-being. Handling doctor-patient relationships collaboratively and providing patients access to their health information are crucial steps in empowering patients. In mental healthcare, the implicit consensus amongst practitioners has been that sharing medical records with patients may have an unpredictable, harmful impact on clinical practice. In order to involve patients more actively in mental healthcare processes, Tele-Board MED (TBM) allows for digital collaborative documentation in therapist-patient sessions. The TBM software system offers a whiteboard-inspired graphical user interface that allows therapist and patient to jointly take notes during the treatment session. Furthermore, it provides features to automatically reuse the digital treatment session notes for the creation of treatment session summaries and clinical case reports. This thesis presents the development of the TBM system and evaluates its effects on 1) the fulfillment of the therapist's duties of clinical case documentation, 2) patient engagement in care processes, and 3) the therapist-patient relationship. Following the design research methodology, TBM was developed and tested in multiple evaluation studies in the domains of cognitive behavioral psychotherapy and addiction care. The results show that therapists are likely to use TBM with patients if they have a technology-friendly attitude and when its use suits the treatment context. Support in carrying out documentation duties as well as fulfilling legal requirements contributes to therapist acceptance. Furthermore, therapists value TBM as a tool to provide a discussion framework and quick access to worksheets during treatment sessions. Therapists express skepticism, however, regarding technology use in patient sessions and towards complete record transparency in general.
Patients expect TBM to improve communication with their therapist and to offer better recall of discussed topics when taking a copy of their notes home after the session. Patients are doubtful regarding possible distraction of the therapist and usage in situations where relationship-building is crucial. When applied in a clinical environment, collaborative note-taking with TBM encourages patient engagement and a team feeling between therapist and patient. Furthermore, it increases the patient's acceptance of their diagnosis, which in turn is an important predictor of therapy success. In summary, TBM has high potential not only to deliver documentation support and record transparency for patients, but also to contribute to a collaborative doctor-patient relationship. This thesis provides design implications for the development of digital collaborative documentation systems in (mental) healthcare as well as recommendations for a successful implementation in clinical practice.
The size structure of autotroph communities – the relative abundance of small vs. large individuals – shapes the functioning of ecosystems. Whether common mechanisms underpin the size structure of unicellular and multicellular autotrophs is, however, unknown. Using a global data compilation, we show that individual body masses in tree and phytoplankton communities follow power-law distributions and that the average exponents of these individual size distributions (ISDs) differ. Phytoplankton communities are characterized by an average ISD exponent consistent with three-quarter-power scaling of metabolism with body mass and equivalence in energy use among mass classes. Tree communities deviate from this pattern in a manner consistent with equivalence in energy use among diameter size classes. Our findings suggest that whilst universal metabolic constraints ultimately underlie the emergent size structure of autotroph communities, divergent aspects of body size (volumetric vs. linear dimensions) shape the ecological outcome of metabolic scaling in forest vs. pelagic ecosystems.
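Power-law exponents of individual size distributions are commonly estimated by maximum likelihood; a minimal sketch with synthetic data (the estimator shown is the standard continuous power-law MLE, not necessarily the fitting procedure of the study):

```python
import math
import random

def powerlaw_mle(masses, m_min):
    """Maximum-likelihood estimate of the exponent alpha of a continuous
    power law p(m) ~ m**(-alpha) for m >= m_min:
    alpha_hat = 1 + n / sum(ln(m_i / m_min))."""
    logs = [math.log(m / m_min) for m in masses if m >= m_min]
    return 1.0 + len(logs) / sum(logs)

# Synthetic body masses drawn from p(m) ~ m**(-2) by inverse-transform
# sampling: m = m_min * (1 - u)**(-1 / (alpha - 1)) with u uniform(0, 1).
random.seed(1)
alpha_true, m_min = 2.0, 1.0
sample = [m_min * (1 - random.random()) ** (-1 / (alpha_true - 1))
          for _ in range(50_000)]
print(powerlaw_mle(sample, m_min))  # close to 2.0
```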
In this paper, the Lie group method in combination with the Magnus expansion is utilized to develop a universal method for solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular (and some singular) SLPs of even orders (tested up to order eight), with a mix of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently. The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be solved effectively. The inverse SLP algorithm proposed by Barcilon (1974) is utilized in combination with the Magnus method so that a direct SLP of any (even) order and an inverse SLP of order two can be solved effectively.
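As a baseline for what a direct SLP solver must reproduce, the simplest regular second-order problem can be solved by finite differences (this sketch is for orientation only and is not the Lie-group/Magnus method of the paper):

```python
import numpy as np

def slp_eigenvalues(n=500, L=np.pi, k=4):
    """Lowest k eigenvalues of the simplest regular second-order SLP,
    -y'' = lambda * y with y(0) = y(L) = 0, via second-order finite
    differences on n interior points. The exact eigenvalues are
    (m * pi / L)**2, m = 1, 2, ..."""
    h = L / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(A))[:k]

print(slp_eigenvalues())  # ≈ [1, 4, 9, 16] for L = pi
```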
This paper – which is based on the Thomas Franck Lecture held by the author at Humboldt University Berlin on 13 May 2019 – argues that the most likely development of international law is the coexistence of two “legal worlds”. On the one hand, an inter-State law brutally regulating political relations between human groups whitewashed by nationalism; on the other, a transnational or “a-national” law regulating economic relations between private as well as public interests. Further, the paper argues that there are two obvious victims – of very different natures – of this foreseeable evolution: the human being on the one hand, and the certainty and effectiveness of the rule of law itself on the other.
Introduction
To date, several meta-analyses have clearly demonstrated that resistance and plyometric training are effective in improving physical fitness in children and adolescents. However, a methodological limitation of meta-analyses is that they synthesize results from different studies and hence ignore important differences across studies (i.e., mixing apples and oranges). Therefore, we aimed to examine comparative intervention studies that assessed the effects of age, sex, maturation, and resistance or plyometric training descriptors (e.g., training intensity, volume, etc.) on measures of physical fitness while holding other variables constant.
Methods
To identify relevant studies, we systematically searched multiple electronic databases (e.g., PubMed) from inception to March 2018. We included resistance and plyometric training studies in healthy young athletes and non-athletes aged 6 to 18 years that investigated the effects of moderator variables (e.g., age, maturity, sex, etc.) on components of physical fitness (i.e., muscle strength and power).
Results
Our systematic literature search revealed a total of 75 eligible resistance and plyometric training studies, including 5,138 participants. Mean duration of resistance and plyometric training programs amounted to 8.9 ± 3.6 weeks and 7.1 ± 1.4 weeks, respectively. Our findings showed that maturation affects plyometric and resistance training outcomes differently, with the former eliciting greater adaptations pre-peak height velocity (PHV) and the latter around- and post-PHV. Sex has no major impact on resistance training-related outcomes (e.g., maximal strength, 10 repetition maximum). In terms of plyometric training, around-PHV boys appear to respond with larger performance improvements (e.g., jump height, jump distance) compared with girls. Different types of resistance training (e.g., body weight, free weights) are effective in improving measures of muscle strength (e.g., maximum voluntary contraction) in untrained children and adolescents. Effects of plyometric training in untrained youth primarily follow the principle of training specificity. Despite the fact that only 6 out of 75 comparative studies investigated resistance or plyometric training in trained individuals, positive effects were reported in all 6 studies (e.g., maximum strength and vertical jump height, respectively).
Conclusions
The present review article identified research gaps (e.g., training descriptors, modern alternative training modalities) that should be addressed in future comparative studies.
Genetic divergence is impacted by many factors, including phylogenetic history, gene flow, genetic drift, and divergent selection. Rotifers are an important component of aquatic ecosystems, and genetic variation is essential to their ongoing adaptive diversification and local adaptation. In addition to coding sequence divergence, variation in gene expression may relate to variable heat tolerance and can impose ecological barriers within species. Temperature plays a significant role in aquatic ecosystems by affecting species abundance, spatio-temporal distribution, and habitat colonization. Recently described (formerly cryptic) species of the Brachionus calyciflorus complex exhibit different temperature tolerances in both field and laboratory studies, which show that B. calyciflorus sensu stricto (s.s.) is a thermotolerant species. Even within B. calyciflorus s.s., there is a tendency towards further temperature specialization. Comparison of expressed genes allows us to assess the impact of stressors on both expression and sequence divergence among disparate populations within a single species. Here, we have used RNA-seq to explore expressed genetic diversity in B. calyciflorus s.s. in two mitochondrial DNA lineages with different phylogenetic histories and differences in thermotolerance. We identify a suite of candidate genes that may underlie local adaptation, with a particular focus on the response to sustained high or low temperatures. We do not find adaptive divergence in established candidate genes for thermal adaptation. Rather, we detect divergent selection among our two lineages in genes related to metabolism (lipid metabolism, metabolism of xenobiotics).
For both Lévy flight and Lévy walk search processes we analyse the full distribution of first-passage and first-hitting (or first-arrival) times. These are, respectively, the times when the particle moves across a point at some given distance from its initial position for the first time, or when it lands at a given point for the first time. For Lévy motions with their propensity for long relocation events and thus the possibility to jump across a given point in space without actually hitting it ('leapovers'), these two definitions lead to significantly different results. We study the first-passage and first-hitting time distributions as functions of the Lévy stable index, highlighting the different behaviour for the cases when the first absolute moment of the jump length distribution is finite or infinite. In particular we examine the limits of short and long times. Our results will find their application in the mathematical modelling of random search processes as well as computer algorithms.
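The difference between first passage and first hitting, and the role of leapovers, can be illustrated with a discrete-time heavy-tailed random walk (a rough sketch with a Pareto-type jump law, not an exact Lévy-stable sampler and not the paper's analysis):

```python
import random

def levy_jump(alpha, rng):
    """Symmetric heavy-tailed jump with tail index alpha
    (Pareto-type tail ~ x**(-1 - alpha), minimum jump size 1)."""
    size = rng.random() ** (-1.0 / alpha)
    return size if rng.random() < 0.5 else -size

def first_passage_vs_hit(alpha, target=10.0, tol=0.5,
                         max_steps=10_000, seed=0):
    """Return (passage_step, hit_step): the step at which the walker
    first crosses `target`, and the step at which it first lands within
    `tol` of it (None if an event never occurs within max_steps)."""
    rng = random.Random(seed)
    x, passage, hit = 0.0, None, None
    for step in range(1, max_steps + 1):
        x += levy_jump(alpha, rng)
        if passage is None and x >= target:
            passage = step
        if hit is None and abs(x - target) <= tol:
            hit = step
        if passage is not None and hit is not None:
            break
    return passage, hit

passage, hit = first_passage_vs_hit(alpha=1.5)
print(passage, hit)
```

Because of leapovers, the walker typically crosses the level by a long jump without landing near it, which is why the two observables have markedly different statistics.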
Historical biogeography of the leopard (Panthera pardus) and its extinct Eurasian populations
(2019)
Background
Resolving the historical biogeography of the leopard (Panthera pardus) is a complex issue, because patterns inferred from fossils and from molecular data lack congruence. Fossil evidence supports an African origin, and suggests that leopards were already present in Eurasia during the Early Pleistocene. Analysis of DNA sequences, however, suggests a more recent, Middle Pleistocene shared ancestry of Asian and African leopards. These contrasting patterns led researchers to propose a two-stage hypothesis of leopard dispersal out of Africa: an initial Early Pleistocene colonisation of Asia and a subsequent replacement by a second colonisation wave during the Middle Pleistocene. The status of Late Pleistocene European leopards within this scenario is unclear: were these populations remnants of the first dispersal, or do the last surviving European leopards share more recent ancestry with their African counterparts?
Results
In this study, we generate and analyse mitogenome sequences from historical samples that span the entire modern leopard distribution, as well as from Late Pleistocene remains. We find a deep bifurcation between African and Eurasian mitochondrial lineages (~ 710 Ka), with the European ancient samples as sister to all Asian lineages (~ 483 Ka). The modern and historical mainland Asian lineages share a relatively recent common ancestor (~ 122 Ka), and we find one Javan sample nested within these.
Conclusions
The phylogenetic placement of the ancient European leopard as sister group to Asian leopards suggests that these populations originate from the same out-of-Africa dispersal which founded the Asian lineages. The coalescence time found for the mitochondrial lineages aligns well with the earliest undisputed fossils in Eurasia, and thus encourages a re-evaluation of the identification of the much older putative leopard fossils from the region. The relatively recent ancestry of all mainland Asian leopard lineages suggests that these populations underwent a severe population bottleneck during the Pleistocene. Finally, although only based on a single sample, the unexpected phylogenetic placement of the Javan leopard could be interpreted as evidence for exchange of mitochondrial lineages between Java and mainland Asia, calling for further investigation into the evolutionary history of this subspecies.
Molecularly imprinted polymers (MIPs) mimic the binding sites of antibodies by substituting the amino acid scaffold of proteins with synthetic polymers. In this work, the first MIP for the recognition of the diagnostically relevant enzyme butyrylcholinesterase (BuChE) is presented. The MIP was prepared by electropolymerization of the functional monomer o-phenylenediamine and was deposited as a thin film on a glassy carbon electrode by oxidative potentiodynamic polymerization. Rebinding and removal of the template were detected by cyclic voltammetry using ferricyanide as a redox marker. Furthermore, the enzymatic activity of BuChE rebound to the MIP was measured via the anodic oxidation of thiocholine, the reaction product of butyrylthiocholine. The response was linear for BuChE concentrations between 50 pM and 2 nM, with a detection limit of 14.7 pM. In addition to its high sensitivity for BuChE, the sensor responded to pseudo-irreversible inhibitors in the lower mM range.
C-Aryl Glycosides and Chalcones
(2019)
In the still-ongoing era of scientific medicine, a broad spectrum of active compounds for the treatment of various diseases has been assembled. Nevertheless, organic synthetic chemistry has made it its task to expand this spectrum by new or established routes, and for various reasons. On the one hand, the natural occurrence of certain compounds is often limited, so that synthetic methods increasingly replace less sustainable extraction from natural sources. On the other hand, derivatization and drug optimization can enhance the physiological effect or the bioavailability of an active compound. In this work, several representatives of the well-known compound classes of C-aryl glycosides and chalcones were synthesized via the key step of the palladium-catalyzed Matsuda–Heck reaction.
For the C-aryl glycosides, unsaturated carbohydrates (glycals) were first prepared via a ruthenium-catalyzed cyclization reaction. These were subsequently reacted with variously substituted diazonium salts in the above-mentioned palladium-catalyzed coupling reaction. Evaluation of the analytical data showed that the trans-diastereomers were formed in every case. It was then demonstrated that the double bonds of these compounds can be functionalized by hydrogenation, dihydroxylation, or epoxidation. In this way, among other products, a compound similar to the antidiabetic drug dapagliflozin was prepared.
In the second part of the work, aryl allyl chromanones were prepared by the Matsuda–Heck reaction of various 8-allylchromanones with diazonium salts. It was observed that a MOM protecting group in the 7-position of the molecules suppresses the formation of product mixtures, so that only one of the possible compounds is formed in each case. The position of the double bond was located by 2D NMR experiments. In cooperation with theoretical chemists, calculations were carried out to investigate how the observed compounds arise; owing to an intramolecular interaction, however, no definitive conclusion could be drawn.
Subsequently, the compounds obtained were to be converted into chalcones by allylic oxidation. The ruthenium-catalyzed methods, among others, proved unsuitable. However, a metal-free, microwave-assisted method was tested successfully, so that several representatives of this physiologically active compound class could be prepared.
We study travelling chimera states in a ring of nonlocally coupled heterogeneous (with Lorentzian distribution of natural frequencies) phase oscillators. These states are coherence-incoherence patterns moving in the lateral direction because of the broken reflection symmetry of the coupling topology. To explain the results of direct numerical simulations we consider the continuum limit of the system. In this case travelling chimera states correspond to smooth travelling wave solutions of some integro-differential equation, called the Ott–Antonsen equation, which describes the long time coarse-grained dynamics of the oscillators. Using the Lyapunov–Schmidt reduction technique we suggest a numerical approach for the continuation of these travelling waves. Moreover, we perform their linear stability analysis and show that travelling chimera states can lose their stability via fold and Hopf bifurcations. Some of the Hopf bifurcations turn out to be supercritical resulting in the observation of modulated travelling chimera states.
Of Trees and Birds
(2019)
Gisbert Fanselow’s work has been invaluable and inspiring to many researchers working on syntax, morphology, and information structure, both from a theoretical and from an experimental perspective. This volume comprises a collection of articles dedicated to Gisbert on the occasion of his 60th birthday, covering a range of topics from these areas and beyond. The contributions have in common that in a broad sense they have to do with language structures (and thus trees), and that in a more specific sense they have to do with birds. They thus cover two of Gisbert’s major interests in- and outside of the linguistic world (and perhaps even at the interface).
The instrumental -er suffix
(2019)
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean–Vlasov equations as the starting point to derive ensemble Kalman–Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
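The underlying idea of combined state and parameter estimation – augmenting the state vector with the unknown parameters so that a single ensemble Kalman update corrects both – can be sketched for a discrete-time toy model (a plain stochastic EnKF with illustrative parameters, not the McKean–Vlasov/Kalman–Bucy formulation of the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model with unknown parameter a: x_{k+1} = a * x_k + model noise.
a_true, model_std, obs_std = 0.9, 0.05, 0.1
n_ens, n_steps = 200, 300

# Generate a synthetic truth and noisy observations of x.
x, obs = 1.0, []
for _ in range(n_steps):
    x = a_true * x + model_std * rng.standard_normal()
    obs.append(x + obs_std * rng.standard_normal())

# Augmented ensemble: column 0 holds state guesses, column 1 parameter guesses.
ens = np.column_stack([rng.normal(1.0, 0.5, n_ens),
                       rng.normal(0.5, 0.3, n_ens)])

for y in obs:
    # Forecast: each member propagates its state with its own parameter;
    # a small parameter jitter prevents premature ensemble collapse.
    ens[:, 0] = ens[:, 1] * ens[:, 0] + model_std * rng.standard_normal(n_ens)
    ens[:, 1] += 0.01 * rng.standard_normal(n_ens)
    # Analysis: Kalman gain from the ensemble covariance with the observed x.
    cov = np.cov(ens[:, 0], ens[:, 1])
    gain = np.array([cov[0, 0], cov[0, 1]]) / (cov[0, 0] + obs_std**2)
    perturbed = y + obs_std * rng.standard_normal(n_ens)
    ens += np.outer(perturbed - ens[:, 0], gain)

print(ens[:, 1].mean())  # ensemble-mean estimate of the parameter a
```

The parameter is corrected purely through its ensemble correlation with the observed state, which is the same mechanism that continuous-time ensemble Kalman–Bucy variants exploit.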
Water is essential to life and thus a vital resource. However, freshwater resources are limited and their maintenance is crucial. Pollution with chemicals and pathogens through urbanization and a growing population impairs the quality of freshwater. Furthermore, water can serve as a vector for the transmission of pathogens, resulting in water-borne illness.
The Interdisciplinary Research Group III – "Water" – of the Leibniz alliance project INFECTIONS'21 investigated water as a hub for pathogens, focusing on Clostridioides difficile and avian influenza A viruses that may be shed into the water. Another aim of this study was to characterize the bacterial communities in a wastewater treatment plant (WWTP) of Berlin, the German capital, to further assess potential health risks associated with wastewater management practices.
Bacterial communities of WWTP inflow and effluent differed significantly. The proportion of fecal/enteric bacteria was relatively low, and OTUs related to potential enteric pathogens were largely removed from inflow to effluent. However, a health risk might exist, as an increased relative abundance of potentially pathogenic Legionella spp. such as L. lytica was observed. Three Clostridioides difficile isolates from wastewater inflow and an urban bathing lake in Berlin ("Weisser See") were obtained and sequenced. The two isolates from the wastewater did not carry toxin genes, whereas the isolate from the lake was positive for the toxin genes. All three isolates were closely related to human strains. This indicates a potential, but rather sporadic, health risk. Avian influenza A viruses were detected in 38.8% of sediment samples by PCR, but virus isolation failed. An experiment with inoculated freshwater and sediment samples showed that virus isolation from sediment requires relatively high virus concentrations and worked much better in Madin-Darby Canine Kidney (MDCK) cell cultures than in embryonated chicken eggs, while even a low titre of influenza contamination in freshwater samples was sufficient to recover virus.
In conclusion, this work revealed potential health risks coming from bacterial groups with pathogenic potential such as Legionella spp. whose relative abundance is higher in the released effluent than in the inflow of the investigated WWTP. It further indicates that water bodies such as wastewater and lake sediments can serve as reservoir and vector, even for non-typical water-borne or water-transmitted pathogens such as C. difficile.
How to identify customary international law is an important question of international law. In 2018, the International Law Commission adopted a set of sixteen conclusions, together with commentaries, on this topic. The paper consists of three parts: first, the reasons why the Commission came to work on the topic "Identification of customary international law" are discussed; then, some of its conclusions are highlighted; finally, the outcome of the Commission's work is placed in a general context, before concluding.
The development of phonological awareness, the knowledge of the structural combinatoriality of a language, has been widely investigated in relation to reading (dis)ability across languages. However, the extent to which knowledge of phonemic units may interact with spoken language organization in (transparent) alphabetical languages has hardly been investigated. The present study examined whether phonemic awareness correlates with coarticulation degree, commonly used as a metric for estimating the size of children’s production units. A speech production task was designed to test for developmental differences in intra-syllabic coarticulation degree in 41 German children from 4 to 7 years of age. The technique of ultrasound imaging allowed for comparing the articulatory foundations of children’s coarticulatory patterns. Four behavioral tasks assessing various levels of phonological awareness from large to small units and expressive vocabulary were also administered. Generalized additive modeling revealed strong interactions between children’s vocabulary and phonological awareness with coarticulatory patterns. Greater knowledge of sub-lexical units was associated with lower intra-syllabic coarticulation degree and greater differentiation of articulatory gestures for individual segments. This interaction was mostly nonlinear: an increase in children’s phonological proficiency was not systematically associated with an equivalent change in coarticulation degree. Similar findings were drawn between vocabulary and coarticulatory patterns. Overall, results suggest that the process of developing spoken language fluency involves dynamical interactions between cognitive and speech motor domains. Arguments for an integrated-interactive approach to skill development are discussed.
Due to its bioavailability and (bio)degradability, poly(lactide) (PLA) is an interesting polymer that is already being used as a packaging material, surgical suture, and drug delivery system. Depending on various parameters such as polymer composition, amphiphilicity, sample preparation, and the enantiomeric purity of the lactide, the PLA block in an amphiphilic block copolymer can dramatically affect self-assembly behavior. Since the sizes and shapes of the resulting aggregates have a critical effect on the interactions between drug delivery systems and biological systems, a general understanding of these polymers and of how they influence self-assembly is of significant scientific interest.
The first part of this thesis describes the synthesis and study of a series of linear poly(L-lactide) (PLLA)- and poly(D-lactide) (PDLA)-based amphiphilic block copolymers with varying PLA (hydrophobic) and poly(ethylene glycol) (PEG, hydrophilic) chain lengths and different block sequences (PEG-PLA and PLA-PEG). The PEG-PLA block copolymers were synthesized by ring-opening polymerization of lactide initiated by a PEG-OH macroinitiator. In contrast, the PLA-PEG block copolymers were produced by a Steglich esterification of modified PLA with PEG-OH.
The aqueous self-assembly at room temperature of the enantiomerically pure PLLA-based block copolymers and of their stereocomplexed mixtures was investigated by dynamic light scattering (DLS), transmission electron microscopy (TEM), wide-angle X-ray diffraction (WAXD), and differential scanning calorimetry (DSC). Spherical micelles and worm-like structures were produced, whereby the obtained morphologies were affected by the lactide weight fraction in the block copolymer and by the self-assembly time. The formation of worm-like structures increases with decreasing PLA chain length and arises from spherical micelles that become colloidally unstable and undergo epitaxial fusion with other micelles. As shown by DSC experiments, the crystallinity of the corresponding PLA blocks increases over the self-assembly time. The stereocomplexed self-assembled structures, however, behave differently from the parent polymers and form irregularly shaped clusters of spherical micelles. Additionally, time-dependent self-assembly experiments showed a transformation from already self-assembled morphologies of different shapes to more compact micelles upon stereocomplexation.
In the second part of this thesis, with the objective of influencing the self-assembly of PLA-based block copolymers and their stereocomplexes, poly(methyl phosphonate) (PMeP) and poly(isopropyl phosphonate) (PiPrP) were produced by ring-opening polymerization as an alternative to the hydrophilic PEG block. Although the 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU)- or 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD)-mediated synthesis of the corresponding poly(alkyl phosphonate)s was successful, the polymerization of copolymers with PLA-based precursors (PLA homopolymers and PEG-PLA block copolymers) was not. Transesterification between the poly(phosphonate) and PLA blocks, observed by 1H-NMR spectroscopy, caused a high-field-shifted peak splitting of the methine proton in the PLA polymer chain, with split intensities depending on the catalyst used (DBU for PMeP and TBD for PiPrP polymerization). An additionally prepared block copolymer, PiPrP-PLLA, whose polymer sequence was not affected, was finally used for self-assembly experiments in mixtures with PLA-PEG and PEG-PLA.
This work provides a comprehensive study of the self-assembly behavior of PLA-based block copolymers influenced by various parameters such as polymer block lengths, self-assembly time, and stereocomplexation of block copolymer mixtures.
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering ~10% of the permafrost region. Lake area loss (−1.45%) dominated the study domain, with enhanced losses occurring at the boundary between the discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%), whereas in tundra regions (0.63%) they were limited to Alaska. Retrogressive thaw slumps were abundant but highly localized (<10−5%). Our analysis underscores the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
Advances in modern geodetic techniques such as the global navigation satellite system (GNSS) and synthetic aperture radar (SAR) provide surface deformation measurements with unprecedented accuracy and temporal and spatial resolution, even at the most remote volcanoes on Earth. Modelling of these high-quality geodetic data is crucial for understanding the underlying physics of volcano deformation processes. Among various approaches, mathematical models are the most effective for establishing a quantitative link between the surface displacements and the shape and strength of deformation sources. Advancing geodetic data analysis, and hence our knowledge of the Earth's interior processes, demands sophisticated and efficient deformation modelling approaches. Yet the majority of these models rely on simplistic assumptions about deformation source geometries and ignore complexities such as the Earth's surface topography and interactions between multiple sources.
This thesis addresses this problem in the context of analytical and numerical volcano deformation modelling. In the first part, new analytical solutions for triangular dislocations (TDs) in uniform infinite and semi-infinite elastic media have been developed. Through a comprehensive investigation, the locations and causes of artefact singularities and numerical instabilities associated with TDs have been determined, and these long-standing drawbacks have been addressed thoroughly. This approach has then been extended to rectangular dislocations (RDs) with full rotational degrees of freedom. Using this solution in a configuration of three orthogonal RDs, a compound dislocation model (CDM) has been developed. The CDM can represent generalized volumetric and planar deformation sources efficiently. Thus, the CDM is relevant for rapid inversions in early warning systems and can also be used for detailed deformation analyses. In order to account for complex source geometries and realistic topography in the deformation models, the boundary element method (BEM) has been applied to the new solutions for TDs. In this scheme, complex surfaces are simulated as a continuous mesh of TDs that may possess any displacement or stress boundary conditions in the BEM calculations. In the second part of this thesis, the developed modelling techniques have been applied to five different real-world deformation scenarios. As the first and second case studies, the deformation sources associated with the 2015 Calbuco eruption and the 2013–2016 Copahue inflation period have been constrained using the CDM. The highly anisotropic source geometries in these two cases highlight the importance of using generalized deformation models such as the CDM for geodetic data inversions. The other three case studies in this thesis involve high-resolution dislocation models and BEM calculations.
As the third case, the 2013 pre-explosive inflation of Volcán de Colima has been simulated using two ellipsoidal cavities, which localize zones of pressurization in the volcano's lava dome. In the fourth case study, which serves as an example of volcano-tectonic interactions, the 3-D kinematics of an active ring fault at Tendürek volcano have been investigated by modelling displacement time series over the 2003–2010 period. As the fifth example, the deformation sources associated with North Korea's underground nuclear test in September 2017 have been constrained. These examples demonstrate the advancement, the increasing level of complexity, and the general applicability of the developed dislocation modelling techniques.
This thesis establishes a unified framework for rapid and high-resolution dislocation modelling, which, in addition to volcano deformation, can also be applied to tectonic and human-made deformation.
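To give a flavour of the kind of analytical source model the thesis generalizes, the following is a minimal sketch of the classical Mogi point source (not the thesis's TD/CDM solutions), which links a volume change at depth in an elastic half-space to surface displacements:

```python
import math

def mogi(r, depth, dV, nu=0.25):
    """Surface displacement of the classical Mogi point source in an
    elastic half-space: returns (radial ur, vertical uz) displacement at
    radial distance r from a source of volume change dV at the given
    depth. Units are metres throughout; nu is Poisson's ratio."""
    R3 = (r * r + depth * depth) ** 1.5
    ur = (1 - nu) * dV / math.pi * r / R3
    uz = (1 - nu) * dV / math.pi * depth / R3
    return ur, uz

# 1e6 m^3 of inflation at 3 km depth: a few centimetres of uplift
# directly above the source, decaying with radial distance
ur0, uz0 = mogi(0.0, 3000.0, 1e6)
```

Generalized sources such as the CDM described above relax the isotropy of this point source, but the basic forward-model structure (source parameters in, surface displacements out) is the same.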
RESTful choreographies
(2019)
Business process management has become a key instrument to organize work, as many companies represent their operations in business process models. Recently, business process choreography diagrams have been introduced as part of the Business Process Model and Notation standard to represent interactions between business processes run by different partners. When it comes to the interactions between services on the Web, Representational State Transfer (REST) is one of the primary architectural styles employed by web services today. Ideally, the RESTful interactions between participants should implement the interactions defined at the business choreography level.
The problem, however, is the conceptual gap between business process choreography diagrams and RESTful interactions. Choreography diagrams, on the one hand, are modeled by business domain experts with the purpose of capturing, communicating and, ideally, driving the business interactions. RESTful interactions, on the other hand, depend on RESTful interfaces that are designed by web engineers with the purpose of facilitating the interaction between participants on the internet. In most cases, however, business domain experts are unaware of the technology behind web service interfaces, and web engineers tend to overlook the overall business goals of web services. While there is considerable work on using process models during process implementation, there is little work on using choreography models to implement interactions between business processes. This thesis addresses this research gap by raising the following research question: How can the conceptual gap between business process choreographies and RESTful interactions be closed? This thesis offers several research contributions that jointly answer the research question.
The main research contribution is the design of a language that captures RESTful interactions between participants---RESTful choreography modeling language. Formal completeness properties (with respect to REST) are introduced to validate its instances, called RESTful choreographies. A systematic semi-automatic method for deriving RESTful choreographies from business process choreographies is proposed. The method employs natural language processing techniques to translate business interactions into RESTful interactions. The effectiveness of the approach is shown by developing a prototypical tool that evaluates the derivation method over a large number of choreography models.
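The flavour of such a natural-language-based derivation step can be pictured with a deliberately naive sketch. The verb lexicon and the pluralization rule below are illustrative assumptions for this listing, not the thesis's actual method:

```python
# Hypothetical mapping from choreography task verbs to HTTP methods.
VERB_MAP = {"create": "POST", "get": "GET", "retrieve": "GET",
            "update": "PUT", "cancel": "DELETE", "delete": "DELETE"}

def derive_interaction(task_label):
    """Map a choreography task label like 'Create order' to a sketched
    RESTful request (HTTP verb, resource path). The leading word is
    treated as the action; the rest becomes a (naively pluralized)
    resource collection name."""
    words = task_label.lower().split()
    verb = VERB_MAP.get(words[0], "POST")          # default: state-changing call
    resource = "-".join(words[1:]) or "resource"   # remaining words name the resource
    if not resource.endswith("s"):
        resource += "s"                            # crude collection pluralization
    return verb, "/" + resource
```

A real derivation, as the thesis describes, would rely on proper natural language processing rather than a fixed lexicon, but the input/output shape of the translation is the same: business task labels in, RESTful interactions out.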
In addition, the thesis proposes solutions towards implementing RESTful choreographies. In particular, two RESTful service specifications are introduced for aiding, respectively, the execution of choreographies' exclusive gateways and the guidance of RESTful interactions.
Electrets are dielectrics with quasi-permanent electric charges and/or dipoles and can sometimes be regarded as the electric analogue of a magnet. Since the discovery of the excellent charge-retention capacity of poly(tetrafluoroethylene) and the invention of the electret microphone, electrets have grown from a scientific curiosity into an important field in both science and technology. The history of electret research goes hand in hand with the quest for new materials with a better capacity for charge and/or dipole retention. To be useful, electrets normally have to be charged or poled to render them electro-active. This process involves electric-charge deposition and/or electric-dipole orientation at the dielectric's surfaces and within its bulk. Knowledge of the spatial distribution of electric charge and/or dipole polarization after deposition, and of its subsequent decay, is crucial for improving their stability in the dielectrics.
Likewise, for dielectrics used in electrical insulation applications, there is also a need for spatial profiling of accumulated space charge and polarization. Traditionally, space-charge accumulation and large dipole polarization within insulating dielectrics are considered undesirable and harmful, as they may cause dielectric loss and can lead to internal electric field distortion and local field enhancement. A high local electric field can trigger several aging processes and reduce the insulating dielectric's lifetime. However, with the advent of high-voltage DC transmission and high-voltage capacitors for energy storage, this is no longer always the case. There is some overlap between the fields of electrets and electrical insulation. While quasi-permanently trapped electric charge and/or a large remanent dipole polarization are prerequisites for electret operation, stably trapped electric charge in electrical insulation helps reduce charge transport and thus the overall electric conductivity. Controlled charge trapping can help prevent further charge injection and accumulation and can serve field-grading purposes in insulating dielectrics, whereas large dipole polarization can be utilized in energy storage applications.
In this thesis, piezoelectrically generated pressure steps (PPSs) were employed as a nondestructive method to probe the electric-charge and dipole-polarization distributions in a range of thin-film (several hundred micrometer) polymer-based materials, namely polypropylene (PP), low-density polyethylene/magnesium oxide (LDPE/MgO) nanocomposites and poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) copolymer. PP film surface-treated with phosphoric acid to introduce isolated surface nanostructures serves as an example of two-dimensional nanocomposites, whereas LDPE/MgO represents the case of three-dimensional nanocomposites with MgO nanoparticles dispersed in the LDPE polymer matrix. The results show that the nanoparticles on the surface of acid-treated PP and in the bulk of LDPE/MgO nanocomposites improve the charge-trapping capacity of the respective material and prevent further charge injection and transport, and that this enhanced charge-trapping capacity makes PP and LDPE/MgO nanocomposites potential materials for both electret and electrical insulation applications. As for PVDF and VDF-based copolymers, the remanent spatial polarization distribution depends critically on the poling method as well as on the specific parameters used within it. In this work, homogeneous polarization poling of P(VDF-TrFE) copolymers with different VDF contents was attempted with cyclical hysteresis poling. The behaviour of the remanent polarization growth and the spatial polarization distribution are reported and discussed. The PPS method has proven to be a powerful tool for characterizing charge storage and transport in a wide range of polymer materials, from nonpolar and polar polymers to polymer nanocomposites.
This article explores whether domestic judges might be held accountable under international criminal law (ICL). To date, international criminal justice has focused almost entirely on prosecuting political or military leaders. The Justice Case, tried before the Nuremberg Military Tribunal in 1946, marks the most prominent exception. Prior to it, the judiciary, otherwise considered the epitome of justice, had mutated into a murderous machinery under Nazi rule. Judicial decisions can have far-reaching implications, possibly constituting or contributing to international crimes. This holds true in a wide range of cases, for instance concerning practices of warfare and torture, the use of certain weapon technologies, or policies relating to minorities or racial segregation. I argue that domestic judges are accountable when engaging in international crimes. The article delves into technical aspects of criminal law as well as the notions of judicial independence and immunity. While guaranteeing the rule of law, these two notions challenge the core idea of ICL: its equal application vis-à-vis all perpetrators of international crimes irrespective of official capacity. In order to differentiate due judicial conduct from its abuse in violation of ICL, I suggest a threshold that a judicial act must exceed to entail accountability for an international crime.
Topological data analysis
(2019)
When analyzing higher-dimensional data, the shape of the data can provide important information about the dataset. Given a point cloud sampled from an unknown topological space, topological data analysis (TDA) attempts to reconstruct the original space. This contribution provides an introduction to topological data analysis and focuses on two important aspects: persistent homology and the Mapper. First, the necessary theoretical foundations are presented; the methodology is then applied to the visualization of data.
Persistent homology is one of the standard tools of TDA. It is used, for example, in shape recognition and shape description. The Mapper, as the second important concept of TDA, converts large, higher-dimensional datasets into simplicial complexes and can thereby determine geometric and topological properties of the data. Furthermore, the Mapper method is a useful tool for visualizing multidimensional data where statistical methods fail.
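The persistence idea can be illustrated with a minimal sketch (not part of the article): the 0-dimensional persistent homology of a point cloud tracks when connected components of the Vietoris-Rips filtration merge, which reduces to a union-find pass over edges sorted by length.

```python
import math

def persistence_0d(points):
    """0-dimensional persistent homology of a Vietoris-Rips filtration:
    every point is born at scale 0; a connected component dies when it
    merges into another (union-find over edges sorted by length)."""
    n = len(points)
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    bars = []  # (birth, death) persistence pairs
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            bars.append((0.0, d))  # one component dies at scale d
    bars.append((0.0, math.inf))   # the final component persists forever
    return bars
```

For two well-separated clusters, the barcode shows two short bars inside each cluster and one long bar that only dies at the inter-cluster distance, which is exactly how persistence separates signal (long bars) from sampling noise (short bars).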
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives, and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by such fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of reforms—recruitment, remuneration, and appraisal of civil servants—the qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments in the past. Following discussions on the content and purposes of the main reform programs, it will then analyze the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation in which most of the intended objectives remain unachieved. In doing so, it explores and explains how an overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a very complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional one driven by performance and merit.
Due to advances in science and technology toward smaller and more powerful processing units, the fabrication of micrometer-sized machines for different tasks is becoming increasingly feasible. Such micro-robots could revolutionize the medical treatment of diseases and could support work on other small machines. Nevertheless, scaling down robots and other devices is a challenging task and will probably remain limited in the near future. Over the past decade, the concept of bio-hybrid systems has proved to be a promising approach to advancing the development of micro-robots. Bio-hybrid systems combine biological cells with artificial components, thereby benefiting from the functionality of living cells. Cell-driven micro-transport is one of the most prominent applications in this emerging field. So far, micrometer-sized cargo has been successfully transported by swimming bacterial cells. The potential of motile adherent cells as transport systems has largely remained unexplored.
This thesis concentrates on the social amoeba Dictyostelium discoideum as a potential candidate for an amoeboid bio-hybrid transport system. The use of this model organism comes with several advantages. Due to the unspecific nature of Dictyostelium adhesion, a wide range of different cargo materials can be used for transport. As amoeboid cells exceed bacterial cells in size by an order of magnitude, the size of an object carried by a single cell can also be much larger for an amoeba. Finally, it is possible to guide the cell-driven transport based on the chemotactic behavior of the amoeba. Since the cells undergo a developmentally induced chemotactic aggregation, cargo can be assembled in a self-organized manner into a cluster. It is also possible to impose an external chemical gradient to guide the amoeboid transport system to a desired location.
To establish Dictyostelium discoideum as a possible candidate for bio-hybrid transport systems, this thesis first investigates the movement of single cells. Secondly, the interaction of cargo and cells is studied. Finally, a proof of concept is conducted, showing that the chemotactic behavior can be exploited to transport cargo either in a self-organized manner or toward an external chemical source.
Expanding public or publicly subsidized childcare has been a top social policy priority in many industrialized countries. It is supposed to increase fertility, promote children’s development and enhance mothers’ labor market attachment. In this paper, we analyze the causal effect of one of the largest expansions of subsidized childcare for children up to three years among industrialized countries on the employment of mothers in Germany. Identification is based on spatial and temporal variation in the expansion of publicly subsidized childcare triggered by two comprehensive childcare policy reforms. The empirical analysis is based on the German Microcensus, matched to county-level data on childcare availability. Based on our preferred specification, which includes time and county fixed effects, we find that an increase in childcare slots by one percentage point increases mothers’ labor market participation rate by 0.2 percentage points. The overall increase in employment is explained by the rise in part-time employment with relatively long hours (20-35 hours per week). We do not find a change in full-time employment or shorter part-time employment that is causally related to the childcare expansion. The effect is almost entirely driven by mothers with medium-level qualifications. Mothers with low education levels do not profit from this reform, calling for a stronger policy focus on particularly disadvantaged groups in coming years.
The paper extends a static discrete-choice labor supply model by adding participation and hours constraints. We identify restrictions using survey information on the eligibility and search activities of individuals as well as actual and desired hours. This provides for a more robust identification of preferences and constraints. Both preferences and restrictions are allowed to vary with, and are related through, observed and unobserved characteristics. We distinguish various restriction mechanisms: labor demand rationing, working-hours norms varying across occupations, and insufficient public childcare on the supply side of the market. The effect of these mechanisms is simulated by relaxing different constraints one at a time. We apply the empirical framework to evaluate an in-work benefit for low-paid parents in the German institutional context. The benefit is supposed to increase work incentives for secondary earners. Based on the structural model, we are able to disentangle behavioral reactions into the pure incentive effect and the limiting impact of constraints at the intensive and extensive margin. We find that the in-work benefit for parents substantially increases working hours of mothers of young children, especially when they have a low education. Simulating the effects of restrictions shows their substantial impact on employment of mothers with young children.
Background: Core-specific sensorimotor exercises have been shown to enhance neuromuscular activity of the trunk, improve athletic performance and prevent back pain. However, the dose-response relationship and, therefore, the dose required to improve trunk function are still under debate. The purpose of the present trial will be to compare four different sensorimotor exercise intervention strategies with respect to improvements in trunk function.
Methods/design: A single-blind, four-armed, randomized controlled trial with a 3-week (home-based) intervention phase and two measurement days pre and post intervention (M1/M2) is designed. Experimental procedures on both measurement days will include evaluation of maximum isokinetic and isometric trunk strength (extension/flexion, rotation) including perturbations, as well as neuromuscular trunk activity while performing strength testing. The primary outcome is trunk strength (peak torque). Neuromuscular activity (amplitude, latencies as a response to perturbation) serves as secondary outcome. The control group will perform a standardized exercise program of four sensorimotor exercises (three sets of 10 repetitions) in each of six training sessions (30 min duration) over 3 weeks. The intervention groups’ programs differ in the number of exercises, sets per exercise and, therefore, overall training amount (group I: six sessions, three exercises, two sets; group II: six sessions, two exercises, two sets; group III: six sessions, one exercise, three sets). The intervention programs of groups I, II and III include additional perturbations for all exercises to increase both the difficulty and the efficacy of the exercises performed. Statistical analysis will be performed after examining the underlying assumptions for parametric and non-parametric testing.
Discussion: The results of the study will be clinically relevant, not only for researchers but also for (sports) therapists, physicians, coaches, athletes and the general population who have the aim of improving trunk function.
The emerging diffusive dynamics in many complex systems show a characteristic crossover behaviour from anomalous to normal diffusion which is otherwise fitted by two independent power-laws. A prominent example of a subdiffusive–diffusive crossover are viscoelastic systems such as lipid bilayer membranes, while superdiffusive–diffusive crossovers occur in systems of actively moving biological cells. Here we consider the general dynamics of a stochastic particle driven by so-called tempered fractional Gaussian noise, that is, noise with a Gaussian amplitude and power-law correlations that are cut off at some mesoscopic time scale. Concretely, we consider such noise with built-in exponential or power-law tempering, driving an overdamped Langevin equation (fractional Brownian motion) and fractional Langevin equation motion. We derive explicit expressions for the mean squared displacement and correlation functions, including different shapes of the crossover behaviour depending on the concrete tempering, and discuss the physical meaning of the tempering. In the case of power-law tempering we also find a crossover behaviour from faster to slower superdiffusion and from slower to faster subdiffusion. As a direct application of our model we demonstrate that the obtained dynamics quantitatively describes the subdiffusion–diffusion and subdiffusion–subdiffusion crossover in lipid bilayer systems. We also show that a model of tempered fractional Brownian motion recently proposed by Sabzikar and Meerschaert leads to physically very different behaviour with a seemingly paradoxical ballistic long-time scaling.
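The anomalous-to-normal crossover can be made concrete in a small numerical sketch. This is an illustrative calculation, not the authors' code: it assumes an exponentially tempered power-law correlation function C(s) = 2H(2H-1) s^(2H-2) exp(-s/tau) and evaluates MSD(t) = 2 * Int_0^t (t - s) C(s) ds, which for H > 1/2 gives superdiffusion ~t^(2H) well below the tempering time tau and normal diffusion ~t beyond it.

```python
import math

def msd_tempered(t, H=0.75, tau=1.0, n=20000):
    """MSD of an overdamped particle driven by exponentially tempered
    power-law noise with C(s) = 2H(2H-1) s^(2H-2) exp(-s/tau) (model
    form assumed here for illustration). The midpoint rule handles the
    integrable singularity of C at s = 0."""
    smax = min(t, 50.0 * tau)  # tempering makes the integrand negligible beyond ~tau
    h = smax / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        c = 2 * H * (2 * H - 1) * s ** (2 * H - 2) * math.exp(-s / tau)
        total += (t - s) * c
    return 2.0 * h * total

def local_exponent(t, **kw):
    """Local scaling exponent d ln MSD / d ln t around time t."""
    f = 1.1
    return (math.log(msd_tempered(t * f, **kw))
            - math.log(msd_tempered(t / f, **kw))) / (2 * math.log(f))

print(local_exponent(1e-3))  # ~1.5: superdiffusive, well below the tempering time
print(local_exponent(1e3))   # ~1.0: normal diffusion, well beyond it
```

The two printed exponents bracket the crossover: below tau the tempering is invisible and the fBm-like scaling 2H survives, while far beyond tau the correlations have an effectively finite integral and ordinary Brownian scaling emerges.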
The North China Plain (NCP) is one of the most productive and intensively farmed agricultural regions in China. High doses of mineral nitrogen (N) fertiliser, often combined with flood irrigation, are applied, resulting in N surpluses, groundwater depletion and environmental pollution. The objectives of this thesis were to use the HERMES model to simulate the N cycle in winter wheat (Triticum aestivum L.)–summer maize (Zea mays L.) double crop rotations and to demonstrate the performance of the HERMES model, of the new ammonia volatilisation sub-module and of the new nitrification inhibition tool in the NCP. Further objectives were to assess the model's potential to save N and water at plot and county scale, in both the short and the long term. Additionally, improved management strategies were to be identified with the help of a model-based nitrogen fertiliser recommendation (NFR) and adapted irrigation.
Results showed that the HERMES model performed well under the growing conditions of the NCP and was able to describe the relevant soil–plant interaction processes concerning N and water during a 2.5-year field experiment. No differences in grain yield were found between the real-time model-based NFR and the other treatments of the plot-scale experiments in Quzhou County. Simulations with increasing amounts of irrigation resulted in significantly higher N leaching, higher N requirements of the NFR and reduced yields. Thus, conventional flood irrigation as currently practised by farmers bears great uncertainties, and exact irrigation amounts should be known for future simulation studies. In the best-practice scenario simulation on plot scale, N input and N leaching, but also irrigation water, could be reduced strongly within 2 years. Thus, the model-based NFR in combination with adapted irrigation had the highest potential to reduce nitrate leaching, compared with farmers' practice and mineral-N (Nmin)-reduced treatments. The calibrated and validated ammonia volatilisation sub-module of the HERMES model also worked well under the climatic and soil conditions of northern China. Simple ammonia volatilisation approaches also gave satisfactory results compared with process-oriented approaches. In the simulation with ammonium sulphate nitrate with nitrification inhibitor (ASNDMPP), ammonia volatilisation was higher than in the simulation without the inhibitor, while the result for nitrate leaching was the opposite. Although nitrification worked well in the model, nitrification-borne nitrous oxide emissions should be considered in future. Simulated annual long-term (31-year) N losses in the whole of Quzhou County in Hebei Province were 296.8 kg N ha−1 under the common farmers' practice treatment and 101.7 kg N ha−1 under an optimised treatment including NFR and automated irrigation (OPTai).
Spatial differences in simulated N losses throughout Quzhou County could be attributed only to different N inputs. Simulations of an optimised treatment could save, on average, more than 260 kg N ha−1 a−1 of fertiliser input, 190 kg N ha−1 a−1 of N losses and around 115.7 mm a−1 of water, compared with farmers' practice. These long-term simulation results showed a lower N and water saving potential than the short-term simulations and underline the necessity of long-term simulations to overcome the effect of high initial N stocks in the soil.
Additionally, the OPTai treatment worked best on clay loam soil, except for a high simulated denitrification loss, while the simulations using farmers' practice irrigation could not match the actual water needs, resulting in yield decline, especially for winter wheat. Thus, a precise adaptation of management to actual weather conditions and plant growth needs is necessary for future simulations. However, the optimised treatments did not seem able to maintain the soil organic matter pools, even with full crop residue input. Extra organic inputs seem to be required to maintain soil quality in the optimised treatments.
HERMES is a relatively simple model, with regard to data input requirements, for simulating the N cycle. It can support the interpretation of management options at plot, county and regional scales for extension and research staff. In combination with other N and water saving methods, the model also promises to be a useful tool.
The Family as a Site of Education (Bildungsort Familie)
(2019)
In educational and family research, the intergenerational transmission of education within the family is mainly discussed from the perspective of the school success of the younger generation. How education-related transfer processes within the family actually unfold, however, remains largely unexamined in the German research landscape. This is where this qualitative study comes in. Its aim is to investigate education-related transfer processes between the grandparent, parent and grandchild generations within Russian three-generation families that emigrated from the former Soviet Union to Berlin after 1989. In Bourdieu's sense, these transfer processes conceal conscious and unconscious educational strategies of the interviewed family members. Four families were interviewed: two late-resettler (Spätaussiedler) families, the Hoffmann and Popow families, and two Russian-Jewish families, the Rosenthal and Buchbinder families. Group discussions were conducted with the members of the four three-generation families, and guided individual interviews were held with one representative of each generation. Data collection took place in Berlin between 2010 and 2012. The empirical material obtained in this way was analysed using Bohnsack's documentary method. This made it possible to capture and reconstruct the implicit self-evidence with which, following Bourdieu, education takes place habitually in families. The study undertakes a habitus-theoretical interpretation of the Russian three-generation families and a corresponding field analysis in Bourdieu's terms.
In this context, the social space of the families under investigation in the receiving society was reconstructed with respect to their horizon of comparison, the society of origin. The educational transfer was further examined against the experiential background of each family, and a typology was developed on this basis.
This study yields new insights into the hitherto unexplored field of educational transfer in Russian three-generation families in Berlin. A key finding is that applying Bourdieu's class theory can be productive even for groups that were socialised in a socialist society and emigrated to a capitalist-oriented one. Another central finding is that in two of the four families studied, migration influenced the intergenerational educational transfer. The Rosenthal family, for instance, exhibits a "split" habitus as a result of migration: when planning the granddaughter's career in Berlin, the family oriented itself towards the practical and the necessary. While the conscious educational strategy of the grandparent and parent generations for the grandchild generation in the country of arrival can be assigned to the habitus of necessity that Bourdieu attributes to the working class, the Rosenthal family's leisure behaviour corresponds to the habitus of distinction typical of the ruling class. A further finding is that, in contrast to the Rosenthal granddaughter, a so-called discrepancy of spheres was reconstructed for the Popow granddaughter. She is left to her own devices in the outer sphere of school, as the grandparent and parent generations have only limited knowledge of the German school system. On the one hand, she distances herself from her family (inner sphere) and from German school dropouts (outer sphere); on the other hand, in her attempt at social advancement she orients herself towards Russian-speaking peers attending the upper secondary level of the Gymnasium (third sphere). For the Popow granddaughter, the peer group rather than the family thus functions as the central site of education.
It should be noted that the intergenerational educational transfer was influenced by migration in both a Russian-Jewish family and a late-resettler family. While the Rosenthal family belonged to the intelligentsia in the society of origin, the Popow family belonged to the working class. It follows that the intergenerational educational transfer of the families studied proceeds independently both of late-resettler or quota-refugee status and of social status in the place of origin. It can therefore be concluded that, within this study, migration is a central factor in intergenerational educational transfer.
Dynamic earthquake rupture modeling provides information on the rupture physics, such as the rupture velocity, friction, or the tractions acting during the rupture process. However, since it is often based on spatially gridded, preset geometries, dynamic modeling depends on many free parameters, which leads to both highly non-unique results and long computation times and thereby limits the feasibility of a full Bayesian error analysis.
To address these problems, we developed the quasi-dynamic rupture model presented in this work. It combines the kinematic Eikonal rupture model with a boundary element method for quasi-static slip calculation.
The orientation of the modeled rupture plane is defined by a previously performed moment tensor inversion. The simultaneously inverted scalar seismic moment allows an estimation of the extent of the rupture. The modeled rupture plane is discretized by a set of rectangular boundary elements. For each boundary element, an applied traction vector is defined as the boundary value.
To gain insight into the dynamic rupture behaviour, the rupture front propagation is calculated for incremental time steps based on the 2D Eikonal equation. The required location-dependent rupture velocity field is assumed to scale linearly with a layered shear wave velocity field.
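As an illustration of this step, the first-arrival rupture times implied by the Eikonal equation can be approximated on a grid. The sketch below uses a simple Dijkstra-style shortest-path scheme over grid neighbours rather than a true fast-marching Eikonal solver, and all numbers (grid size, 100 m spacing, the 0.8 scaling factor, the two-layer shear wave speeds) are hypothetical, not taken from this work.

```python
import heapq
import math

def rupture_arrival_times(velocity, h, source):
    """Approximate first-arrival rupture times T on a 2D grid by
    Dijkstra's algorithm over 8-connected neighbours, a coarse stand-in
    for a fast-marching solver of the Eikonal equation |grad T| = 1/v."""
    ny, nx = len(velocity), len(velocity[0])
    T = [[math.inf] * nx for _ in range(ny)]
    si, sj = source
    T[si][sj] = 0.0
    heap = [(0.0, si, sj)]
    while heap:
        t, i, j = heapq.heappop(heap)
        if t > T[i][j]:
            continue  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    dist = h * math.hypot(di, dj)
                    # local velocity: average of the two cells
                    v = 0.5 * (velocity[i][j] + velocity[ni][nj])
                    nt = t + dist / v
                    if nt < T[ni][nj]:
                        T[ni][nj] = nt
                        heapq.heappush(heap, (nt, ni, nj))
    return T

# layered rupture velocity: scales linearly with a layered shear wave field
vr_scale = 0.8                              # assumed linear factor
vs_layers = [3000.0] * 5 + [3500.0] * 5     # two layers, m/s
velocity = [[vr_scale * vs for _ in range(10)] for vs in vs_layers]
T = rupture_arrival_times(velocity, h=100.0, source=(0, 0))
```

Successive level sets of `T` play the role of the incremental rupture front positions described above.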
At each time step, all boundary elements enclosed within the rupture front are used to calculate the quasi-static slip distribution. Neither friction nor stress propagation is considered; the algorithm is therefore termed “quasi-static”. A series of the resulting quasi-static slip snapshots can be used as a quasi-dynamic model of the rupture process.
Since much a priori information is taken from the earth model (shear wave velocity and elastic parameters) and the moment tensor inversion (rupture extent and orientation), our model depends on only a few free parameters: the traction field, the linear factor between rupture and shear wave velocity, and the nucleation point and time. Hence, stable and fast modeling results are obtained, as demonstrated by comparison with different infinite and finite static crack solutions.
First dynamic applications show promising results. The location-dependent rise time is automatically derived by the model. Different simple kinematic models, such as the slip-pulse or the penny-shaped crack model, can be reproduced, as well as their corresponding slip rate functions. A source time function (STF) approximation, calculated from the cumulative sum of the moment rates of each boundary element, gives results similar to theoretically and empirically known STFs.
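The STF approximation described above (summing the moment rates of all boundary elements at each time step) can be sketched in a few lines; the shear modulus, element area and slip snapshots below are toy values, not results from this work.

```python
# Illustrative STF approximation from quasi-static slip snapshots.
mu = 3.0e10          # shear modulus, Pa (toy value)
area = 1.0e6         # boundary element area, m^2 (toy value)
dt = 0.1             # time step, s
# slip snapshots [time][element], m (made-up numbers)
slip = [
    [0.0, 0.0, 0.0],
    [0.2, 0.1, 0.0],
    [0.5, 0.4, 0.2],
    [0.6, 0.6, 0.5],
]

def source_time_function(slip, mu, area, dt):
    """Moment rate per step: mu * A * sum of element slip rates."""
    stf = []
    for k in range(1, len(slip)):
        rate = mu * area * sum(
            (s1 - s0) / dt for s0, s1 in zip(slip[k - 1], slip[k]))
        stf.append(rate)
    return stf

stf = source_time_function(slip, mu, area, dt)
# integrating the STF recovers the total seismic moment mu * A * final slip
total_moment = sum(r * dt for r in stf)
```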
The model was also applied to the 2015 Illapel earthquake. Using a simple rectangular rupture geometry and a two-layered traction regime yields good estimates of both the rupture front propagation and the slip patterns, which are comparable to results from the literature. The STF approximation fits previously published STFs well.
The quasi-dynamic rupture model is hence able to calculate reproducible slip results quickly. This allows a full Bayesian error analysis to be tested in the future. Further work on a full seismic source inversion, or even a traction field inversion, could also extend the scope of our model.
The externalisation of any communication is subject to the speaker's mode of access to the information conveyed. Our data show that all eight verbs studied express mechanisms of knowledge acquisition, which we have called, borrowing from Vogeleer (1995: 92), "cognitive access to knowledge". It is this intrinsic value that earns these terms the label of evidential (médiatif) verbs. In other words, they are elements that make explicit the speaker's processes of access to knowledge. This source of knowledge may be direct (sight, touch, hearing, smell...), indirect (hearsay), and above all inferred. By inference we mean a process of analysing and relating elements (premises) that allow a conclusion to be drawn by deduction, induction or abduction. And depending on whether these premises tend to be more or less reliable, these inferential processes entail epistemic values to varying degrees.
On the rhetorical-syntactic level, our analyses have shown that all the cognitive verbs (CV) of this study require the occurrence of other clausal constituents (actants) which they govern. It is thanks to this verbal valency that they retain governing power in asyndetic constructions. They are thus the matrices of the elements to which they relate. As for the movement of these verbs, it has a rhetorical and syntactic function. Indeed, this particular and often disruptive arrangement expresses a syntactic figure with rhetorical effect: hyperbaton. This atypical construction, through its unconventional arrangements, gives the utterance a regressive sense and lends salience to the terms thereby highlighted.
Using two crystals for spontaneous parametric down-conversion in a parallel setup, we observe two-photon interference with high visibility. The high visibility is consistent with complementarity and the absence of which-path information. The observations are explained as the effects of entanglement or equivalently in terms of interfering probability amplitudes and also by the calculation of a second-order field correlation function in the Heisenberg picture. The latter approach brings out explicitly the role of the vacuum fields in the down-conversion at the crystals and in the photon coincidence counting. For comparison, we show that the Hong–Ou–Mandel dip can be explained by the same approach in which the role of the vacuum signal and idler fields, as opposed to entanglement involving vacuum states, is emphasized. We discuss the fundamental limitations of a theory in which these vacuum fields are treated as classical, stochastic fields.
This thesis demonstrates the potential of europium luminescence for structural analysis using novel materials. These materials are, on the one hand, nanoparticles with matrices of several mixed metal oxides doped with the probe europium and, on the other hand, metal-organic frameworks (MOFs) loaded with neodymium, samarium and europium ions.
The synthesis of the nanoparticles composed of a combination of metal oxides was carried out under mild conditions using specially prepared reagents and yielded very small, amorphous nanoparticles. Subsequent heat treatment increased the crystallinity; in the process, the crystal structure and the position of the europium dopant also changed.
While the established method of X-ray diffraction provides a view of the crystal lattice as a whole, europium luminescence, through the visibility of individual Stark splittings, yields information about its local symmetries. The symmetry is altered by oxygen vacancies, which influence the oxygen conductivity of the nanoparticles. This is relevant for applications as catalysts in industrial processes as well as sensors and therapeutics in biological systems.
For an initial catalytic characterisation, the samples are examined by temperature-programmed reduction. Furthermore, the mixed-oxide nanoparticles are also investigated with regard to their usability as a matrix in upconversion processes.
Owing to their microporous structure, the metal-organic frameworks are suitable for storage applications, for useful gases as well as for pollutants. A biological application is also conceivable, in particular in the field of drug delivery reagents.
If lanthanide ions are incorporated into the microporous structures of the metal-organic frameworks, a suitable combination of them can act as a white-light emitter. Here, besides the ratios between the lanthanide ions, the exact position within the framework and the distance to other ions are of interest. The environmental sensitivity of europium luminescence is exploited to investigate these questions. The formate formation detected in this way depends on numerous parameters.
Overall, the methodology used in this work, employing europium as a structural probe, proves to be highly versatile and shows its greatest strength in combination with other methods of structural analysis. The novel materials characterised in detail in this way can now be further developed in a targeted, application-focused manner.
Aluminum oxide is an Earth-abundant geological material, and its interaction with water is of crucial importance for geochemical and environmental processes. Some aluminum oxide surfaces are also known to be useful in heterogeneous catalysis, while the surface chemistry of aqueous oxide interfaces determines the corrosion, growth and dissolution of such materials. In this doctoral work, we looked mainly at the (0001) surface of α-Al2O3 and its reactivity towards water. In particular, a great focus of this work is dedicated to simulating and addressing the vibrational spectra of water adsorbed on the α-alumina(0001) surface in various conditions and at different coverages. In fact, the main source of comparison and inspiration for this work comes from the collaboration with the “Interfacial Molecular Spectroscopy” group led by Dr. R. Kramer Campen at the Fritz-Haber Institute of the MPG in Berlin. The expertise of our project partners in surface-sensitive Vibrational Sum Frequency (VSF) generation spectroscopy was crucial to develop and adapt specific simulation schemes used in this work. Methodologically, the main approach employed in this thesis is Ab Initio Molecular Dynamics (AIMD) based on periodic Density Functional Theory (DFT) using the PBE functional with D2 dispersion correction. The analysis of vibrational frequencies from both a static and a dynamic, finite-temperature perspective offers the ability to investigate the water / aluminum oxide interface in close connection to experiment.
The first project presented in this work considers the characterization of dissociatively adsorbed deuterated water on the Al-terminated (0001) surface. This particular structure is known from both experiment and theory to be the thermodynamically most stable surface termination of α-alumina in Ultra-High Vacuum (UHV) conditions. Based on experiments performed by our colleagues at FHI, different adsorption sites and products have been proposed and identified for D2O. While previous theoretical investigations only looked at vibrational frequencies of dissociated OD groups by static Normal Mode Analysis (NMA), we rather employed a more sophisticated approach to directly assess vibrational spectra (like IR and VSF) at finite temperature from AIMD. In this work, we have employed a recent implementation which makes use of velocity-velocity autocorrelation functions to simulate such spectral responses of O-H(D) bonds. This approach allows for an efficient and qualitatively accurate estimation of Vibrational Densities of States (VDOS) as well as IR and VSF spectra, which are then tested against experimental spectra from our collaborators.
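The velocity-velocity autocorrelation route to vibrational spectra mentioned above can be sketched generically: the vibrational density of states is obtained as the Fourier transform of the autocorrelation of atomic velocities (Wiener-Khinchin theorem). This is an illustrative sketch, not the implementation used in the thesis; the trajectory is a synthetic single atom oscillating at a made-up 5 THz frequency.

```python
import numpy as np

def vdos(velocities, dt):
    """VDOS(w) ~ Fourier transform of the velocity autocorrelation
    function <v(0)·v(t)>. velocities: shape (n_steps, n_atoms, 3)."""
    v = np.asarray(velocities)
    n = v.shape[0]
    # autocorrelation per component via zero-padded FFT (Wiener-Khinchin)
    V = np.fft.fft(v, n=2 * n, axis=0)
    acf = np.fft.ifft(V * np.conj(V), axis=0).real[:n]
    acf = acf.sum(axis=(1, 2))       # trace over atoms and components
    acf = acf / acf[0]               # normalise to 1 at t = 0
    spectrum = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spectrum

# synthetic trajectory: one atom oscillating at f0 (toy data)
dt = 1.0e-15                          # 1 fs time step
t = np.arange(4096) * dt
f0 = 5.0e12
traj = np.zeros((t.size, 1, 3))
traj[:, 0, 0] = np.cos(2.0 * np.pi * f0 * t)
freqs, spec = vdos(traj, dt)
peak = freqs[np.argmax(spec)]         # peak recovers the oscillation frequency
```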
In order to extend previous work on unimolecularly dissociated water on α-Al2O3, we then considered a different system, namely, a fully hydroxylated (0001) surface, which results from the reconstruction of the UHV-stable Al-terminated surface at high water contents. This model is then further extended by considering a hydroxylated surface with additional water molecules, forming a two-dimensional layer which serves as a potential template to simulate an aqueous interface in environmental conditions. Again, employing finite-temperature AIMD trajectories at the PBE+D2 level, we investigated the behaviour of both the hydroxylated surface (HS) and the water-covered structure derived from it (known as HS+2ML). A full range of spectra, from VDOS to IR and VSF, is then calculated using the same methodology as described above. This is the main focus of the second project, reported in Chapter 5. In this case, the comparison between theoretical spectra and experimental data is very good. In particular, we show that the high-frequency resonances observed above 3700 cm−1 in VSF experiments are associated with surface OH-groups, known as “aluminols”, which are a key fingerprint of the fully hydroxylated surface.
In the third and last project, which is presented in Chapter 6, the extension of VSF spectroscopy experiments to the time-resolved regime offered us the opportunity to investigate vibrational energy relaxation at the α-alumina / water interface. Specifically, using again DFT-based AIMD simulations, we simulated vibrational lifetimes for surface aluminols as experimentally detected via pump-probe VSF. We considered the water-covered HS model as a potential candidate to address this problem. The vibrational (IR) excitation and subsequent relaxation are performed by means of a non-equilibrium molecular dynamics scheme, in which we specifically excite the O-H stretching mode of surface aluminols. The analysis of the non-equilibrium trajectories then allows for an estimation of relaxation times in the order of 2-4 ps, which are in overall agreement with the measured ones.
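A minimal sketch of how a vibrational lifetime can be extracted from such non-equilibrium trajectories, assuming the excess energy of the pumped mode decays exponentially after the excitation; the synthetic 3 ps decay below stands in for the actual AIMD observable.

```python
import numpy as np

# Synthetic excess O-H stretch energy after an IR-pump-like excitation,
# decaying with a 3 ps lifetime (made-up data, not AIMD output).
tau_true = 3.0e-12
t = np.linspace(0.0, 10.0e-12, 200)
energy = np.exp(-t / tau_true)

# Log-linear least-squares fit: ln E(t) = -t / tau, so slope = -1/tau.
slope, _ = np.polyfit(t, np.log(energy), 1)
tau_fit = -1.0 / slope
```

For noisy trajectory data one would average over several non-equilibrium runs before fitting, but the extraction step is the same.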
The aim of this work has been to provide, within a consistent theoretical framework, a better understanding of vibrational spectroscopy and dynamics for water on the α-alumina(0001) surface, ranging from very low water coverage (similar to the UHV case) up to medium-high coverages, resembling the hydroxylated oxide under moist environmental conditions.
Risks to cyber resources can arise from unintentional or deliberate threats. These include insider threats from disgruntled or careless employees and partners, escalating and emerging threats from around the world, the continuous evolution of attack technologies, and the emergence of new and destructive attacks. Information technology now plays a decisive role in all areas of life, including the military. Ineffective protection of cyber resources can facilitate security incidents and cyberattacks that disrupt critical operations, lead to inappropriate access, disclosure, modification or destruction of sensitive information, and thus endanger national security, economic well-being, and public health and safety. Often, however, it is not clear which threats are actually present and which of the critical system resources are particularly at risk.
This dissertation proposes various analysis methods for threats in military information technology and tests them in real environments. This concerns infrastructures, IT systems, networks and applications that process classified information (Verschlusssachen, VS)/state secrets, as in military or governmental organisations. A particularity of these organisations is the concept of information domains, in which different data elements, such as paper documents and computer files, are classified according to their security sensitivity, e.g. "STRENG GEHEIM" (top secret), "GEHEIM" (secret), "VS-VERTRAULICH" (confidential), "VS-NUR-FÜR-DEN-DIENSTGEBRAUCH" (restricted) or "OFFEN" (unclassified).
A special feature of this work is the access to classified information from different information domains and the process of clearing it for release. Every publication arising from this work was discussed with, proofread and cleared by members of the organisation, so that no classified information reached the public.
The dissertation first describes threat classification schemes and attacker strategies in order to derive a holistic, strategy-based threat model for organisations. It then defines the construction and analysis of a security data flow diagram, which is used to identify operational network nodes in classified information domains that are particularly endangered by the threats. This novel representation makes it possible to understand permitted and forbidden information flows within and between these information domains.
Building on the threat analysis, the message flows of the operational network nodes are then analysed for violations of security policies, and the results are presented in anonymised form using the security data flow diagram. Anonymising the security data flow diagrams enables an exchange with external experts to discuss security issues.
The third part of the thesis shows how extensive log data of the message flows can be examined to determine whether the amount of data can be reduced. For this purpose, the theory of rough sets from uncertainty theory is used. This approach is tested in a case study, also taking possible anomalies into account, and determines which attributes in log data are most likely to be redundant.
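The rough-set notion underlying this kind of reduction, that an attribute is dispensable if removing it leaves the indiscernibility relation between records unchanged, can be sketched as follows; the log records and attribute names are hypothetical, not taken from the case study.

```python
from collections import defaultdict

def partition(records, attrs):
    """Indiscernibility classes: group record indices by their values
    on the given attributes."""
    groups = defaultdict(set)
    for i, rec in enumerate(records):
        groups[tuple(rec[a] for a in attrs)].add(i)
    return set(frozenset(g) for g in groups.values())

def dispensable(records, attrs):
    """Attributes whose removal leaves the indiscernibility relation
    unchanged -- redundancy candidates in rough set theory."""
    full = partition(records, attrs)
    return [a for a in attrs
            if partition(records, [b for b in attrs if b != a]) == full]

# toy message-flow log records (hypothetical attribute names)
log = [
    {"src": "A", "dst": "B", "proto": "smtp", "proto_id": 1},
    {"src": "A", "dst": "C", "proto": "http", "proto_id": 2},
    {"src": "B", "dst": "C", "proto": "http", "proto_id": 2},
]
attrs = ["src", "dst", "proto", "proto_id"]
redundant = dispensable(log, attrs)
```

Here `proto` and `proto_id` carry the same information, so each is individually dispensable, while `src` is needed to keep all records distinguishable.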
The HPI Schul-Cloud
(2019)
The digital transformation permeates all levels and fields of society, not least the education system. The latter is hardly prepared for these changes and responds to them mainly through the personal commitment of its teachers. Structural responses to the lack of high-quality professional development, to poorly equipped classrooms and to computer systems that are not professionally maintained have only recently emerged. Yet even though inertia is widespread among educators, the transformation of the school system also requires a new mentality and new forms of work and cooperation.
Contemporary teaching requires modern technology and modern IT architectures. Only systems that are readily available to teachers and students, user-friendly to operate and didactically flexible will gain acceptance in schools. To this end, we developed the HPI Schul-Cloud. It provides easy access to the latest, professionally maintained applications and a wide range of digital media, connects different learning locations, and enables the legally compliant use of communication and collaboration tools.
The development of the HPI Schul-Cloud is all the more necessary because legal requirements - in particular those arising from the EU General Data Protection Regulation - make it impossible to use in schools the cloud applications that are common in the working world. Applications widespread in the education sector are largely technically outdated and not user-friendly.
This forces the federal states into costly in-house developments with expenditures in the tens of millions - projects that have in some cases failed. Thanks to its modular micro-service architecture, the federal states will in future be able to use the HPI Schul-Cloud as the technical basis for their own or joint projects. To this end, a sustainable structure for the further development of the open-source software HPI Schul-Cloud must be created.
This report describes the state of development and the further perspectives of the HPI Schul-Cloud project as of January 2019. 96 schools throughout Germany use the HPI Schul-Cloud, provided by the Hasso Plattner Institute. A further 45 schools and teacher-training seminars use the Niedersächsische Bildungscloud, which is technically based on the HPI Schul-Cloud. The project, funded by the Federal Ministry of Education and Research, runs in its current roll-out phase until 31 July 2021. Together with our cooperation partner MINT-EC, we aim to deploy the HPI Schul-Cloud at as many schools of the network as possible.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The surface expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each hill is defined by a reverse fault that roots in a basal décollement within the sedimentary cover and acts as a thrust ramp along which the sedimentary pile is stacked. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle, crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Among the proposed causes for the observed variation are changes in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. The integration of independent data into a consistent model of the lithosphere provides additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn was used to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling verified against Bouguer gravity. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
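As a much-reduced illustration of density modelling verified by gravity, the sketch below inverts a single density contrast from a Bouguer anomaly using the infinite-slab (Bouguer plate) formula Δg = 2πGΔρh. This is a toy stand-in for the 3D inversion developed in the thesis; the anomaly and slab thickness are made-up numbers.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_gravity(drho, h):
    """Gravity effect of an infinite slab: 2 * pi * G * drho * h."""
    return 2.0 * math.pi * G * drho * h

def invert_density(g_obs, h, drho0=0.0, step=0.5, n_iter=200):
    """Toy fixed-point inversion: nudge the density contrast until the
    predicted slab gravity matches the observed anomaly."""
    drho = drho0
    for _ in range(n_iter):
        misfit = g_obs - slab_gravity(drho, h)
        drho += step * misfit / (2.0 * math.pi * G * h)
    return drho

# a 20 mGal anomaly over a 10 km thick slab (hypothetical numbers)
g_obs = 20.0e-5               # 20 mGal expressed in m/s^2
drho = invert_density(g_obs, h=10000.0)
```

The real inversion adjusts many cell densities simultaneously against a gravity grid, but the misfit-driven update loop is the same idea.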
In a related study, we demonstrate that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. Applying the method to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion, thereby providing an important tool for the delineation of sub-crustal density trends.
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both analyses indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption does not hold, I estimated the transient thermal field based on the results of these analyses.
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones residing in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and a strong mantle. This high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot reduce the strength of the lithosphere sufficiently to be reactivated. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor leading to the broken foreland of this province. Due to thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
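The crust-mantle strength and coupling invoked here is commonly assessed with yield-strength envelopes, where the strength of the lithosphere at each depth is the minimum of a frictional (Byerlee-type) strength and a temperature-dependent power-law creep strength. The sketch below uses generic textbook-style parameters and a simple linear geotherm, not the calibrated values of this work, to locate an illustrative brittle-ductile transition.

```python
import math

R = 8.314                      # gas constant, J mol^-1 K^-1
rho, g, f = 2800.0, 9.81, 0.6  # density, gravity, friction coefficient
edot = 1.0e-15                 # tectonic strain rate, 1/s
# generic power-law creep parameters (Pa^-n s^-1, -, J/mol), not thesis values
A, n, Q = 1.0e-28, 3.0, 2.2e5

def temperature(z):
    """Simple linear geotherm: 10 degC at the surface, 25 K/km gradient."""
    return 283.0 + 0.025 * z

def strength(z):
    """Yield strength at depth z: min of frictional and creep strength."""
    brittle = f * rho * g * z
    ductile = (edot / A) ** (1.0 / n) * math.exp(Q / (n * R * temperature(z)))
    return min(brittle, ductile)

# brittle-ductile transition: first depth where creep is weaker than friction
bdt = next(z for z in range(1000, 60000, 1000)
           if strength(z) < f * rho * g * z)
```

With these toy parameters the envelope is frictional down to a few tens of kilometres and creep-controlled below, the qualitative pattern behind the strong-crust/weak-mantle versus fully coupled regimes discussed above.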
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
How does the international Rule of Law apply to constrain the conduct of the Executive within a constitutional State that adopts a dualist approach to the reception of international law? This paper argues that, so far from being inconsistent with the concept of the Rule of Law, the Executive within a dualist constitution has a self-enforcing obligation to abide by the obligations of the State under international law. This is not dependent on Parliament's incorporation of treaty obligations into domestic law. It is the correlative consequence of the allocation to the Executive of the power to conduct foreign relations. The paper develops this argument in response to recent debate in the United Kingdom on whether Ministers have an obligation to comply with international law, a reference that the Government removed from the Ministerial Code. It shows that such an obligation is consistent both with four centuries of the practice of the British State and with principle.
International adjudication is currently under assault, encouraging a number of States to withdraw, or to consider withdrawing, from treaties providing for international dispute settlement. This Working Paper argues that the act of treaty withdrawal is not merely the unilateral executive exercise of the individual sovereign prerogative of a State. International law places checks upon the exercise of withdrawal, recognising that it is an act that by its nature affects the interests of other States parties, which have a collective interest in constraining withdrawal. National courts have a complementary function in restraining unilateral withdrawal in order to support the domestic constitution. The arguments advanced against international adjudication in the name of popular democracy at the national level can serve as a cloak for the exercise of executive power unrestrained by law. The submission by States of their disputes to peaceful settlement through international adjudication is central, not incidental, to the successful operation of the international legal system.
Balancing foraging gain and predation risk is a fundamental trade-off in the life of animals. Individual strategies to acquire, process, store and use information to solve cognitive tasks are likely to affect speed and flexibility of learning, and ecologically relevant decisions regarding foraging and predation risk. Theory suggests a functional link between individual variation in cognitive style and behaviour (animal personality) via speed-accuracy and risk-reward trade-offs. We tested whether cognitive style and personality affect risk-reward trade-off decisions posed by foraging and predation risk. We exposed 21 bank voles (Myodes glareolus) that were bold, fast learning and inflexible and 18 voles that were shy, slow learning and flexible to outdoor enclosures with different risk levels at two food patches. We quantified individual food patch exploitation, foraging and vigilance behaviour. Although both types responded to risk, fast animals increasingly exploited both food patches, gaining access to more food and spending less time searching and exercising vigilance. Slow animals progressively avoided high-risk areas, concentrating foraging effort in the low-risk one and devoting >50% of each visit to vigilance. These patterns indicate that individual differences in cognitive style/personality are reflected in foraging and anti-predator decisions that underlie the individual risk-reward bias.
Alles auf Anfang! Für alle?
(2019)
Do stereotypes strike twice?
(2019)
Stereotypes influence teachers' perception of and behaviour towards students, thus shaping students' learning opportunities. The present study investigated how 315 Australian pre-service teachers' stereotypes about giftedness and gender are related to their perception of students' intellectual ability, adjustment, and social-emotional ability, using an experimental vignette approach and controlling for social desirability in pre-service teachers' responses. Repeated-measures ANOVA showed that pre-service teachers associated giftedness with higher intellectual ability, but with less adjustment, compared to average-ability students. Furthermore, pre-service teachers perceived male students as less socially and emotionally competent and less adjusted than female students. Additionally, pre-service teachers seemed to perceive the adjustment of female average-ability students most favourably, compared to male average-ability students and gifted students. Findings point to discrepancies between the actual characteristics of gifted female and male students and the stereotypes in teachers' beliefs. Consequences of stereotyping and implications for teacher education are discussed.
The habilitation deals with the numerical analysis of the recurrence properties of geological and climatic processes. The recurrence of states of dynamical processes can be analysed with recurrence plots and various recurrence quantification options. In the present work, the meaning of the structures and information contained in recurrence plots is examined and described. New developments have led to extensions that can be used to describe recurring patterns in both space and time. Other important developments include recurrence plot-based approaches to identify abrupt changes in a system's dynamics, to detect and investigate external influences on the dynamics of a system and the couplings between different systems, as well as a combination of recurrence plots with the methodology of complex networks. Typical problems in geoscientific data analysis, such as irregular sampling and uncertainties, are tackled by specific modifications and additions. The development of a significance test allows the statistical evaluation of quantitative recurrence analysis, especially for the identification of dynamical transitions. Finally, an overview of typical pitfalls that can occur when applying recurrence-based methods is given, and guidelines on how to avoid such pitfalls are discussed. In addition to the methodological aspects, the application potential especially for geoscientific research questions is discussed, such as the identification and analysis of transitions in past climates, the study of the influence of external factors on ecological or climatic systems, or the analysis of land-use dynamics based on remote sensing data.
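The core object behind these methods, the recurrence plot, can be sketched in a few lines: a binary matrix marking which pairs of states of a time series lie within a threshold distance of each other. This is a minimal illustration on a scalar series; real applications typically embed the series in phase space first and use tuned thresholds:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances via broadcasting
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, one of the basic RQA measures."""
    return R.mean()

# A periodic signal revisits its states, producing diagonal structures
# in R and a recurrence rate well above that of noise.
t = np.linspace(0, 4 * np.pi, 100)
R = recurrence_matrix(np.sin(t), eps=0.1)
print(round(float(recurrence_rate(R)), 3))
```

Quantities such as diagonal-line length distributions build on this matrix and underlie the transition-detection and coupling analyses described above.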
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants' performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. Then, we conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings from this thesis revealed that (1) German-speaking adults place a strong weight on prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weigh these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-old infants relied more strongly on prosodic cues, whereas 9-month-olds showed no preference for either of the cues in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, enabling them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
In active mountain belts with steep terrain, bedrock landsliding is a major erosional agent. In the Himalayas, landsliding is driven by annual hydro-meteorological forcing due to the summer monsoon and by rarer, exceptional events, such as earthquakes. Independent methods yield erosion rate estimates that appear to increase with sampling time, suggesting that rare, high-magnitude erosion events dominate the erosional budget. Nevertheless, until now, neither the contribution of monsoon and earthquakes to landslide erosion nor the proportion of erosion due to rare, giant landslides have been quantified in the Himalayas. We address these challenges by combining and analysing earthquake- and monsoon-induced landslide inventories across different timescales. With time series of 5 m satellite images over four main valleys in central Nepal, we comprehensively mapped landslides caused by the monsoon from 2010 to 2018. We found no clear correlation between monsoon properties and landsliding, and a similar mean landsliding rate for all valleys, except in 2015, when the valleys affected by the earthquake featured ∼ 5–8 times more landsliding than the pre-earthquake mean rate. The long-term size–frequency distribution of monsoon-induced landsliding (MIL) was derived from these inventories and from an inventory of landslides larger than ∼ 0.1 km² that occurred between 1972 and 2014. Using a published landslide inventory for the Gorkha 2015 earthquake, we derive the size–frequency distribution for earthquake-induced landsliding (EQIL). These two distributions are dominated by infrequent, large and giant landslides but under-predict an estimated Holocene frequency of giant landslides (> 1 km³) which we derived from a literature compilation. This discrepancy can be resolved by modelling the effect of a full distribution of earthquakes of variable magnitude and by considering that a shallower earthquake may cause larger landslides. In this case, EQIL and MIL contribute about equally to a total long-term erosion of ∼ 2 ± 0.75 mm yr⁻¹, in agreement with most thermochronological data. Independently of the specific total and relative erosion rates, the heavy-tailed size–frequency distributions of MIL and EQIL and the very large maximal landslide size in the Himalayas indicate that mean landslide erosion rates increase with sampling time, as has been observed for independent erosion estimates. Further, we find that the sampling timescale required to adequately capture the frequency of the largest landslides, which is necessary for deriving long-term mean erosion rates, is often much longer than the averaging time of cosmogenic ¹⁰Be methods. This observation presents a strong caveat when interpreting spatial or temporal variability in erosion rates from this method. Thus, in areas where very large, rare landslides contribute heavily to long-term erosion (such as the Himalayas), we recommend sampling for ¹⁰Be in catchments with source areas > 10 000 km² to reduce the method's mean bias to below ∼ 20 % of the long-term erosion rate.
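The effect that short records systematically underestimate the long-term mean of a heavy-tailed process — so that apparent mean rates grow with sampling time — can be illustrated with a toy Monte Carlo experiment. The Pareto exponent and volume scale below are invented for illustration and are not fitted to the Himalayan inventories:

```python
import random
import statistics

random.seed(0)

def pareto_volume(alpha=1.2, vmin=1.0):
    """One event volume from a heavy-tailed Pareto law (illustrative numbers).
    alpha close to 1 means giant events dominate the long-term budget."""
    u = random.random()
    return vmin / (1.0 - u) ** (1.0 / alpha)

# A long "Holocene-scale" record versus many short sampling windows.
population = [pareto_volume() for _ in range(200_000)]
long_term_mean = sum(population) / len(population)

# The typical (median) mean of a short record: short windows usually miss
# the rare giant events, so they underestimate the long-term mean.
short_means = [statistics.mean(random.sample(population, 50)) for _ in range(500)]
typical_short = statistics.median(short_means)
print(typical_short < long_term_mean)  # expected to hold in the vast majority of runs
```

This is the statistical mechanism behind the caveat on short-averaging-time methods: the sample mean of a heavy-tailed size–frequency distribution is biased low until the window is long enough to include the largest events.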
Business process management (BPM) deals with modeling, executing, monitoring, analyzing, and improving business processes. During execution, the process communicates with its environment to get relevant contextual information represented as events. Recent developments in big data and the Internet of Things (IoT) enable sources like smart devices and sensors to generate large volumes of events which can be filtered, grouped, and composed to trigger and drive business processes.
The industry standard Business Process Model and Notation (BPMN) provides several event constructs to capture the interaction possibilities between a process and its environment, e.g., to instantiate a process, to abort an ongoing activity in an exceptional situation, to take decisions based on the information carried by the events, as well as to choose among alternative paths for further process execution. The specification of such interactions is termed event handling. However, in a distributed setup, the event sources are most often unaware of the status of process execution and, therefore, an event is produced irrespective of whether the process is ready to consume it. BPMN semantics does not support such scenarios and thus increases the chance of processes being delayed or deadlocking because they miss event occurrences that might still be relevant.
The work in this thesis reviews the challenges and shortcomings of integrating real-world events into business processes, especially subscription management. The basic integration is achieved with an architecture consisting of a process modeler, a process engine, and an event processing platform. Further, points of subscription and unsubscription along the process execution timeline are defined for different BPMN event constructs. Semantic and temporal dependencies among event subscription, event occurrence, event consumption and event unsubscription are considered. To this end, an event buffer is introduced, with policies for updating the buffer, retrieving the most suitable event for the current process instance, and reusing events, which supports early subscription.
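A minimal sketch of such an early-subscription buffer is shown below, assuming one simple policy combination (latest event per type wins; consumption optionally retains the event for reuse). The class and method names are hypothetical illustrations, not the thesis implementation, which discusses richer update, retrieval and reuse policies:

```python
class EventBuffer:
    """Illustrative early-subscription event buffer (hypothetical sketch).
    Policy: keep only the latest event per type; consumption removes the
    event unless reuse is requested."""

    def __init__(self):
        self._subscriptions = set()
        self._buffer = {}  # event type -> latest payload

    def subscribe(self, event_type):
        """Early subscription: issued before the process reaches the event."""
        self._subscriptions.add(event_type)

    def publish(self, event_type, payload):
        # An event arriving before the process is ready is buffered
        # instead of being lost, provided a subscription exists.
        if event_type in self._subscriptions:
            self._buffer[event_type] = payload

    def consume(self, event_type, reuse=False):
        """Retrieve the buffered event when the process instance is ready."""
        if reuse:
            return self._buffer.get(event_type)
        return self._buffer.pop(event_type, None)

buf = EventBuffer()
buf.subscribe("truck_arrived")
buf.publish("truck_arrived", {"gate": 3})  # event occurs early
print(buf.consume("truck_arrived"))        # process catches up -> {'gate': 3}
```

The latest-wins update policy is only one option; e.g. a first-wins or bounded-queue policy changes which occurrence a delayed process instance observes, which is exactly the kind of configuration difference the thesis formalizes.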
The Petri net mapping of the event handling model gives our approach a formal semantics from a business process perspective. Two applications based on this formal foundation are presented to demonstrate the significance of different event handling configurations for correct process execution and the reachability of a process path. Prototype implementations of the approaches show that realizing flexible event handling is feasible with minor extensions of off-the-shelf process engines and event platforms.
The pore space of a carbonate rock is usually composed of a specific assemblage of different pore types that differ in origin and can additionally vary strongly in shape and size (e.g., Melim et al., 2001; Lee et al., 2009; He et al., 2014; Dernaika & Sinclair, 2017; Zhang et al., 2017). These multimodal pore systems, typical of carbonates, arise both from primary depositional processes and from repeated modification of the pore space after deposition of the sediment. This leads to an uneven distribution of pore-space properties over very short distances and to the simultaneous occurrence of effective and ineffective pores. These inherent differences in the effectiveness of individual pore types are the main reason for the frequently very low correlation between porosity and permeability in carbonates (e.g., Mazzullo 2004; Ehrenberg & Nadeau, 2005; Hollis et al., 2010; He et al., 2014; Rashid et al., 2015; Dernaika & Sinclair, 2017). By extracting interconnected, and thus effective, pore types, however, the understanding and prediction of permeability for a given porosity value can be greatly improved (e.g., Melim et al., 2001; Zhang et al., 2017). To this end, this work presents a method based on digital image analysis (DIA) with which the effectiveness of pores in the analysed Middle Miocene lacustrine carbonates of the Nördlinger Ries crater lake (southern Germany) can be calculated step by step. Using the pore shape factor (sensu Anselmetti et al., 1998) as a parameter quantifying the interconnectivity between pores, the potential contribution of each pore type to total permeability is determined. In this way, the most effective pore types within the analysed carbonates can be identified.
Furthermore, digital image analysis is used to extract cemented pore spaces in order to quantify the influence of cementation on pore-space properties. An independent method (fluid-flow simulation), whose results are in turn evaluated with digital image analysis, confirms the previous findings: interpeloidal pores and dissolution pores are the two most effective pore types in the pore space of the Ries lake carbonates. Extracting the interconnected (i.e. effective) pore network finally leads to a considerably improved correlation between porosity and permeability in the analysed carbonates. The method described in this work provides a quantitative petrographic tool with which the effective porosity of a pore space can be extracted, leading to a better understanding of how carbonate pore systems generate permeability. This dissertation also shows that the shape complexity of pores is one of the most important parameters controlling the interconnectivity between individual pores and thus the development of effective porosity. Moreover, digital image analysis proves to be an excellent tool for linking porosity and permeability directly to their common origin: the rock texture and the associated pore structure.
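A circularity-type pore shape factor of the kind referred to above can be sketched as γ = P / (2√(πA)) for a pore of perimeter P and area A: γ = 1 for a perfect circle and grows with outline complexity. This is an illustrative implementation under that assumed definition; the input numbers are invented, not measured data:

```python
import math

def shape_factor(perimeter, area):
    """Circularity-type pore shape factor gamma = P / (2 * sqrt(pi * A)).
    gamma = 1 for a perfect circle; larger values indicate more complex
    (and, per the text, potentially better connected) pore outlines.
    Assumed illustrative definition, not a verbatim reimplementation."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

# Sanity check: a circle of radius r has P = 2*pi*r and A = pi*r**2.
r = 5.0
print(round(shape_factor(2 * math.pi * r, math.pi * r**2), 6))  # -> 1.0

# A pore with the same area but a longer, more convoluted outline
# (invented numbers) scores higher:
print(shape_factor(60.0, math.pi * r**2) > 1.0)  # -> True
```

Ranking pore types by such a factor is one way to translate outline complexity measured by DIA into a proxy for interconnectivity.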
Interfacial properties of morpholine-2,5-dione-based oligodepsipeptides and multiblock copolymers
(2019)
Oligodepsipeptides (ODPs) with alternating amide and ester bonds prepared by ring-opening polymerization of morpholine-2,5-dione derivatives are promising matrices for drug delivery systems and building blocks for multifunctional biomaterials. Here, we elucidate the behavior of three telechelic ODPs and one multiblock copolymer containing ODP blocks at the air-water interface. Surprisingly, whereas the oligomers and multiblock copolymers crystallize in bulk, no crystallization is observed at the air-water interface. Furthermore, polarization modulation infrared reflection absorption spectroscopy is used to elucidate hydrogen bonding and secondary structures in ODP monolayers. The results will guide the development of the next generation of ODP-based biomaterials with tailored properties for highly sophisticated applications.
Biochar is being discussed as a soil amendment to improve soil fertility and mitigate climate change. While biochar interactions with soil microbial biota have been frequently studied, interactions with soil mesofauna are understudied. We here present an experiment in which we tested (I) whether the collembolan Folsomia candida can transport biochar particles, (II) if so, how far the particles are distributed within 10 days, and (III) whether it shows a preference among biochars made from different feedstocks, i.e. pine wood, pine bark and spelt husks. In general, biochar particles based on pine bark and pine wood were consistently distributed significantly more than those made of spelt husks, but all types were transported more than 4 cm within 10 days. Additionally, we provide evidence that biochar particles can become readily attached to the cuticle of collembolans and hence be transported, potentially even over large distances. Our study shows that the soil mesofauna can indeed act as a vector for the transport of biochar particles and shows clear preferences depending on the respective feedstock, which would need to be studied in more detail in the future.
The natural abundance of coiled-coil (CC) motifs in cytoskeleton and extracellular matrix proteins suggests that CCs play an important role as passive (structural) and active (regulatory) mechanical building blocks. CCs are self-assembled superhelical structures consisting of 2–7 α-helices. Self-assembly is driven by hydrophobic and ionic interactions, while the helix propensity of the individual helices contributes additional stability to the structure. As a direct result of this simple sequence-structure relationship, CCs serve as templates for protein design, and sequences with a pre-defined thermodynamic stability have been synthesized de novo. Despite this quickly increasing knowledge and the vast number of possible CC applications, the mechanical function of CCs has been largely overlooked and little is known about how different CC design parameters determine the mechanical stability of CCs. Once available, this knowledge will open up new applications for CCs as nanomechanical building blocks, e.g. in biomaterials and nanobiotechnology.
With the goal of shedding light on the sequence-structure-mechanics relationship of CCs, a well-characterized heterodimeric CC was utilized as a model system. The sequence of this model system was systematically modified to investigate how different design parameters affect the CC response when force is applied to opposing termini in a shear geometry or when the helices are separated in a zipper-like fashion from the same termini (unzip geometry). The force was applied using an atomic force microscope set-up, and dynamic single-molecule force spectroscopy was performed to determine the rupture forces and energy landscape properties of the CC heterodimers under study. Using force as a denaturant, CC chain separation is initiated by helix uncoiling from the force application points. In the shear geometry, this allows uncoiling-assisted sliding parallel to the force vector or dissociation perpendicular to the force vector. Both competing processes involve the opening of stabilizing hydrophobic (and ionic) interactions. Also in the unzip geometry, helix uncoiling precedes the rupture of hydrophobic contacts.
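Energy landscape parameters of the kind extracted here (distance to the transition state, zero-force off-rate) are commonly obtained with the Bell–Evans model of dynamic force spectroscopy, in which the most probable rupture force grows logarithmically with the loading rate, F* = (kBT/xβ)·ln(r·xβ/(k_off·kBT)). The sketch below uses invented parameter values, not the fitted values of this thesis:

```python
import math

kBT = 4.11e-21  # J, thermal energy at roughly 298 K

def bell_evans_force(r, x_beta, k_off):
    """Most probable rupture force (N) in the Bell-Evans model:
    F* = (kBT / x_beta) * ln(r * x_beta / (k_off * kBT)).
    r: loading rate (N/s); x_beta: distance to the transition state (m);
    k_off: zero-force dissociation rate (1/s). Illustrative values only."""
    return (kBT / x_beta) * math.log(r * x_beta / (k_off * kBT))

x_beta = 1.0e-9  # m   (assumed)
k_off = 1.0e-4   # 1/s (assumed)
for rate in (1e-12, 1e-11, 1e-10):  # loading rates in N/s (1-100 pN/s)
    f_pn = bell_evans_force(rate, x_beta, k_off) * 1e12
    print(f"{rate:.0e} N/s -> {f_pn:.1f} pN")
```

Fitting measured rupture forces against log(loading rate) yields xβ from the slope and k_off from the intercept, which is how changes in "distance to the transition state" and "barrier height" discussed below are quantified.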
In a first series of experiments, the focus was placed on canonical modifications in the hydrophobic core and the helix propensity. Using the shear geometry, it was shown that both a reduced core packing and helix propensity lower the thermodynamic and mechanical stability of the CC; however, with different effects on the energy landscape of the system. A less tightly packed hydrophobic core increases the distance to the transition state, with only a small effect on the barrier height. This originates from a more dynamic and less tightly packed core, which provides more degrees of freedom to respond to the applied force in the direction of the force vector. In contrast, a reduced helix propensity decreases both the distance to the transition state and the barrier height. The helices are ‘easier’ to unfold and the remaining structure is less thermodynamically stable so that dissociation perpendicular to the force axis can occur at smaller deformations.
Having elucidated how canonical sequence modifications influence CC mechanics, the pulling geometry was investigated in the next step. Using one and the same sequence, the force application points were exchanged and two different shear and one unzipping geometry were compared. It was shown that the pulling geometry determines the mechanical stability of the CC. Different rupture forces were observed in the different shear as well as in the unzipping geometries, suggesting that chain separation follows different pathways on the energy landscape. Whereas the difference between CC shearing and unzipping was anticipated and has also been observed for other biological structures, the observed difference for the two shear geometries was less expected. It can be explained with the structural asymmetry of the CC heterodimer. It is proposed that the direction of the α-helices, the different local helix propensities and the position of a polar asparagine in the hydrophobic core are responsible for the observed difference in the chain separation pathways. In combination, these factors are considered to influence the interplay between processes parallel and perpendicular to the force axis.
To obtain more detailed insights into the role of helix stability, helical turns were reinforced locally using artificial constraints in the form of covalent and dynamic ‘staples’. A covalent staple bridges two adjacent helical turns, thus protecting them against uncoiling. The staple was inserted directly at the point of force application in one helix or in the same terminus of the other helix, which did not experience the force directly. It was shown that preventing helix uncoiling at the point of force application reduces the distance to the transition state while slightly increasing the barrier height. This confirms that helix uncoiling is critically important for CC chain separation. When inserted into the second helix, this stabilizing effect is transferred across the hydrophobic core and protects the force-loaded turns against uncoiling. If both helices were stapled, no additional increase in mechanical stability was observed. When the covalent staple was replaced with a dynamic metal-coordination bond, a smaller decrease in the distance to the transition state was observed, suggesting that the staple opens up while the CC is under load.
Using fluorinated amino acids as another type of non-natural modification, it was investigated how the enhanced hydrophobicity and the altered packing at the interface influence CC mechanics. The fluorinated amino acid was inserted into one central heptad of one or both α-helices. It was shown that this substitution destabilized the CC thermodynamically and mechanically. Specifically, the barrier height was decreased and the distance to the transition state increased. This suggests that a possible stabilizing effect of the increased hydrophobicity is outweighed by disturbed packing, which originates from a poor fit of the fluorinated amino acid into the local environment. This in turn increases the flexibility at the interface, as also observed for the hydrophobic core substitution described above. In combination, this confirms that the arrangement of the hydrophobic side chains is an additional crucial factor determining the mechanical stability of CCs.
In conclusion, this work shows that knowledge of the thermodynamic stability alone is not sufficient to predict the mechanical stability of CCs. It is the interplay between helix propensity and hydrophobic core packing that defines the sequence-structure-mechanics relationship. In combination, both parameters determine the relative contribution of processes parallel and perpendicular to the force axis, i.e. helix uncoiling and uncoiling-assisted sliding as well as dissociation. This new mechanistic knowledge provides insight into the mechanical function of CCs in tissues and paves the way for designing CCs with pre-defined mechanical properties. The library of mechanically characterized CCs developed in this work is a powerful starting point for a wide spectrum of applications, ranging from molecular force sensors to mechanosensitive crosslinks in protein nanostructures and synthetic extracellular matrix mimics.
Lanthanide-doped upconverting nanoparticles (UCNP) are being extensively studied for bioapplications due to their unique photoluminescence properties and low toxicity. Interest in resonance energy transfer (RET) applications involving UCNP is also increasing, but due to factors such as large sizes, ion emission distributions within the particles, and complicated energy transfer processes within the UCNP, there are still many questions to be answered. In this study, four types of core and core-shell NaYF4-based UCNP co-doped with Yb3+ and Tm3+ as sensitizer and activator, respectively, were investigated as donors for the methyl 5-(8-decanoylbenzo[1,2-d:4,5-d']bis([1,3]dioxole)-4-yl)-5-oxopentanoate (DBD-6) dye. The possibility of RET between the UCNP and the DBD-6 attached to their surface was demonstrated based on a comparison of luminescence intensities, band ratios, and decay kinetics. The architecture of the UCNP influenced both the luminescence properties and the energy transfer to the dye: UCNP with an inert shell were the brightest, but their RET efficiency was the lowest (17%). Nanoparticles with Tm3+ only in the shell revealed the highest RET efficiencies (up to 51%), despite the compromised luminescence due to surface quenching.
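A standard way to estimate RET efficiency from decay kinetics is E = 1 − τ_DA/τ_D, comparing the donor lifetime with and without the acceptor attached (an equivalent intensity-based form is E = 1 − I_DA/I_D). The lifetimes below are invented to illustrate the arithmetic and are not measured values from this study:

```python
def ret_efficiency(tau_donor_acceptor, tau_donor):
    """RET efficiency from donor decay times: E = 1 - tau_DA / tau_D.
    tau_DA: donor lifetime with acceptor; tau_D: donor-only lifetime.
    Lifetimes below are hypothetical illustration values."""
    return 1.0 - tau_donor_acceptor / tau_donor

# e.g. a donor lifetime shortened from 500 us to 245 us by the acceptor dye:
print(f"{ret_efficiency(245e-6, 500e-6):.0%}")  # -> 51%
```

The same formula explains why an inert shell lowers the apparent efficiency: it increases the donor-acceptor distance, so the lifetime is barely shortened and E stays small.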
Meta-communities of habitat islands may be essential to maintain biodiversity in anthropogenic landscapes, allowing rescue effects in local habitat patches. To understand the species-assembly mechanisms and dynamics of such ecosystems, it is important to test how local plant-community diversity and composition are affected by spatial isolation, and hence by dispersal limitation, and by local environmental conditions acting as filters for local species sorting. We used a system of 46 small wetlands (kettle holes)—natural small-scale freshwater habitats rarely considered in nature conservation policies—embedded in an intensively managed agricultural matrix in northern Germany. We compared two types of kettle holes with distinct topographies (flat-sloped, ephemeral, frequently plowed kettle holes vs. steep-sloped, more permanent ones) and determined 254 vascular plant species within these ecosystems, as well as plant functional traits and nearest-neighbor distances to other kettle holes. Differences in alpha and beta diversity between steep permanent and ephemeral flat kettle holes were mainly explained by species sorting and niche processes, with additional mass effect processes in ephemeral flat kettle holes. The plant-community composition as well as the community trait distribution in terms of life span, breeding system, dispersal ability, and longevity of seed banks differed significantly between the two habitat types. Flat ephemeral kettle holes held a higher percentage of non-perennial plants with a more persistent seed bank, fewer obligate outbreeders and more species with seed dispersal via animal vectors compared with steep-sloped, more permanent kettle holes, which had a higher percentage of wind-dispersed species. In the flat kettle holes, plant-species richness was negatively correlated with the degree of isolation, whereas no such pattern was found for the permanent kettle holes.
Synthesis: The environment acts as a filter shaping plant diversity (alpha and beta) and plant-community trait distribution between steep permanent and ephemeral flat kettle holes, supporting species sorting and niche mechanisms as expected, but we identified a mass effect in ephemeral kettle holes only. Flat ephemeral kettle holes can be regarded as meta-ecosystems that strongly depend on seed dispersal and recruitment from a seed bank, whereas neighboring permanent kettle holes have a more stable local species diversity.
Understanding species assembly from a regional pool into local metacommunities, and how species colonize and coexist over time and space, is essential to understand how communities respond to their environment, including abiotic and biotic factors. In highly disturbed landscapes, connectivity of isolated habitat patches is essential to maintain biodiversity and entire ecosystem functioning. In northeast Germany, the high density of small water bodies called kettle holes provides a good system for studying metacommunities, as these "aquatic islands" suitable for hygrophilous species are surrounded by an unsuitable matrix of crop fields. The main objective of this thesis was to infer the main ecological processes shaping plant communities and their response to the environment from biodiversity patterns and key life-history traits involved in connectivity, using ecological and genetic approaches, and to provide first insights into the role of kettle holes in harboring wild-bee species as important mobile linkers connecting plant communities in this insular system.
At the community level, I compared plant diversity patterns and trait composition in ephemeral vs. permanent kettle holes. My results showed that the two types of kettle holes act as environmental filters shaping plant diversity, community composition and trait distribution, suggesting species sorting and niche processes in both types of kettle holes. At the population level, I further analyzed the role of dispersal and reproductive strategies of four selected species occurring in permanent kettle holes. Using microsatellites, I found that the breeding system (degree of clonality) is the main factor shaping genetic diversity and genetic divergence. Higher gene flow and lower genetic differentiation among populations were also found in wind- vs. insect-pollinated species, suggesting that dispersal mechanisms play a role in gene flow and connectivity. For most flowering plants, pollinators play an important role in connecting communities. Therefore, as a first insight into the potential mobile linkers of these plant communities, I investigated the diversity of wild bees occurring in these kettle holes. My main results showed that local habitat quality (flower resources) had a positive effect on bee diversity, while habitat heterogeneity (the number of natural landscape elements within 100–300 m of kettle holes) was negatively correlated with it.
This thesis spans from gene flow at the individual and population levels to plant community assembly. My results showed how patterns of biodiversity and of dispersal and reproduction strategies in plant populations and communities can be used to infer ecological processes. In addition, I showed the importance of life-history traits and of the relationships between species and their abiotic and biotic interactions. Furthermore, I included mobile linkers (pollinators) for a better understanding of a further level of the system. This integration is essential to understand how communities respond to their surrounding environment and how disturbances such as agriculture, land use and climate change might affect them. I highlight the need to integrate scientific areas ranging from genes to ecosystems at different spatiotemporal scales for a better understanding, management and conservation of our ecosystems.
Geomagnetic paleosecular variations (PSVs) are an expression of geodynamo processes inside the Earth’s liquid outer core. These paleomagnetic time series provide insights into the properties of the Earth’s magnetic field, from normal behavior with a dominating dipolar geometry, through field crises such as pronounced intensity lows and geomagnetic excursions with a distorted field geometry, to the complete reversal of the dominating dipole contribution. In particular, long-term high-resolution and high-quality PSV time series are needed for properly reconstructing the higher-frequency components in the spectrum of geomagnetic field variations and for a better understanding of the smoothing effects that occur when such paleomagnetic records are recorded by sedimentary archives.
In this doctoral study, full vector paleomagnetic records were derived from 16 sediment cores recovered from the southeastern Black Sea. Age models are based on radiocarbon dating and on correlations of warming/cooling cycles, monitored by high-resolution X-ray fluorescence (XRF) elemental ratios as well as ice-rafted debris (IRD) in Black Sea sediments, with the sequence of ‘Dansgaard-Oeschger’ (DO) events defined from Greenland ice core oxygen isotope stratigraphy.
In order to identify the carriers of magnetization in Black Sea sediments, core MSM33-55-1, recovered from the southeast Black Sea, was subjected to detailed rock magnetic and electron microscopy investigations. The younger part of core MSM33-55-1 was deposited continuously since 41 ka. Before 17.5 ka, the magnetic mineralogy was dominated by a mixture of greigite (Fe3S4) and titanomagnetite (Fe3-xTixO4) in samples with SIRM/κLF >10 kAm-1, or exclusively by titanomagnetite in samples with SIRM/κLF ≤10 kAm-1. It was found that greigite is generally present as crustal aggregates in locally reducing micro-environments. From 17.5 ka to 8.3 ka, the dominant magnetic mineral changed from greigite (17.5 to ~10.0 ka) to probably silicate-hosted titanomagnetite (~10.0 to 8.3 ka). After 8.3 ka, the anoxic Black Sea was a favorable environment for the formation of non-magnetic pyrite (FeS2) framboids.
To avoid compromising the paleomagnetic data with erroneous directions carried by greigite, data from samples with SIRM/κLF >10 kAm-1, shown by various methods to contain greigite, were removed from the obtained records. Consequently, full vector paleomagnetic records, comprising directional data and relative paleointensity (rPI), were derived only from samples with SIRM/κLF ≤10 kAm-1 from 16 Black Sea sediment cores. The obtained data sets were used to create a stack covering the time window between 68.9 and 14.5 ka, with a temporal resolution between 40 and 100 years depending on sedimentation rates.
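The screening step described above is, in effect, a simple threshold filter on the rock-magnetic ratio. A minimal illustrative sketch follows; the sample values and field names are hypothetical, and only the 10 kA/m threshold comes from the text:

```python
# Illustrative sketch of the SIRM/kappa_LF screening described above.
# Sample records are hypothetical; the 10 kA/m threshold is from the text.
GREIGITE_THRESHOLD_KAM = 10.0  # SIRM/kappa_LF in kA/m

samples = [
    {"depth_m": 1.20, "sirm_over_kappa": 4.2},   # titanomagnetite-dominated
    {"depth_m": 1.25, "sirm_over_kappa": 23.7},  # greigite-bearing -> exclude
    {"depth_m": 1.30, "sirm_over_kappa": 8.9},
]

def accepted_for_psv(samples, threshold=GREIGITE_THRESHOLD_KAM):
    """Keep only samples with SIRM/kappa_LF <= threshold,
    i.e. those not flagged as greigite-bearing."""
    return [s for s in samples if s["sirm_over_kappa"] <= threshold]

kept = accepted_for_psv(samples)
```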
According to the results obtained from Black Sea sediments, the second deepest minimum in relative paleointensity of the past 69 ka occurred at 64.5 ka. The field minimum during MIS 4 is associated with large declination swings beginning about 3 ka before the minimum. While a swing to 50°E is associated with steep inclinations (50-60°), consistent with the coring site at 42°N, the subsequent declination swing to 30°W is associated with shallow inclinations down to 40°. Nevertheless, these large deviations from the direction of a geocentric axial dipole field (I=61°, D=0°) cannot yet be termed 'excursional', since the latitudes of the corresponding VGPs only reach down to 51.5°N (120°E) and 61.5°N (75°W), respectively. However, these VGP positions on opposite sides of the globe imply VGP drift rates of up to 0.2° per year in between. These extreme secular variations might be the mid-latitude expression of the Norwegian–Greenland Sea excursion found at several sites much further north, in Arctic marine sediments between 69°N and 81°N.
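The VGP positions quoted here follow from the standard dipole conversion of a site direction (declination D, inclination I) into a pole position, as given in paleomagnetic textbooks. A minimal sketch of that conversion (illustrative only, not the processing code used in the thesis):

```python
import math

def vgp(site_lat_deg, site_lon_deg, dec_deg, inc_deg):
    """Convert a paleomagnetic direction (D, I) at a site into a virtual
    geomagnetic pole (VGP) latitude/longitude via the standard dipole formulas."""
    lam = math.radians(site_lat_deg)
    D = math.radians(dec_deg)
    I = math.radians(inc_deg)
    # magnetic colatitude p from tan(I) = 2/tan(p); atan2 keeps p in (0, pi)
    p = math.atan2(2.0, math.tan(I))
    # pole latitude
    sin_lat_p = (math.sin(lam) * math.cos(p)
                 + math.cos(lam) * math.sin(p) * math.cos(D))
    lat_p = math.asin(max(-1.0, min(1.0, sin_lat_p)))
    # pole longitude (degenerate at the geographic poles)
    if abs(math.cos(lat_p)) < 1e-12:
        lon_p = 0.0
    else:
        beta = math.asin(max(-1.0, min(1.0,
                       math.sin(p) * math.sin(D) / math.cos(lat_p))))
        if math.cos(p) >= math.sin(lam) * sin_lat_p:
            lon_p = math.radians(site_lon_deg) + beta
        else:
            lon_p = math.radians(site_lon_deg) + math.pi - beta
    return math.degrees(lat_p), (math.degrees(lon_p) + 360.0) % 360.0
```

A built-in consistency check: for a geocentric axial dipole direction at 42°N (I = arctan(2 tan 42°) ≈ 61°, D = 0°), the VGP falls on the geographic north pole.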
At about 34.5 ka, the Mono Lake excursion is evidenced in the stacked Black Sea PSV record by both an rPI minimum and directional shifts. Associated VGPs from stacked Black Sea data migrated from Alaska, via central Asia and the Tibetan Plateau, to Greenland, performing a clockwise loop. This agrees with data recorded in the Wilson Creek Formation, USA, and in Arctic sediment core PS2644-5 from the Iceland Sea, suggesting a dominant dipole field. On the other hand, the Auckland lava flows, New Zealand, Summer Lake, USA, and an Arctic sediment core from ODP Site 919 yield distinct VGPs located in the central Pacific Ocean, due to a presumably non-dipolar (multipolar) field configuration.
A directional anomaly at 18.5 ka, associated with pronounced swings in inclination and declination as well as a low in rPI, is probably contemporaneous with the Hilina Pali excursion, originally reported from Hawaiian lava flows. However, virtual geomagnetic poles (VGPs) calculated from Black Sea sediments are not located at latitudes lower than 60°N, which denotes normal, though pronounced, secular variations. During the postulated Hilina Pali excursion, the VGPs calculated from Black Sea data migrated along the coasts of the Arctic Ocean from NE Canada (20.0 ka), via Alaska (18.6 ka) and NE Siberia (18.0 ka), to Svalbard (17.0 ka), then looped clockwise through the eastern Arctic Ocean.
In addition to the Mono Lake and the Norwegian–Greenland Sea excursions, the Laschamp excursion is evidenced in the Black Sea PSV record by the lowest paleointensities, at about 41.6 ka, and a short-term (~500 years) full reversal centered at 41 ka. These excursions are further evidenced by an abnormal PSV index, though only the Laschamp and the Mono Lake excursions exhibit excursional VGP positions. The stacked Black Sea paleomagnetic record was also converted into one component parallel to the direction expected for a geocentric axial dipole (GAD) and two components perpendicular to it, representing only the non-GAD components of the geomagnetic field. The Laschamp and the Norwegian–Greenland Sea excursions are characterized by extremely low GAD components, while the Mono Lake excursion is marked by large non-GAD contributions. Notably, negative values of the GAD component, indicating a fully reversed geomagnetic field, are observed only during the Laschamp excursion.
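The GAD/non-GAD decomposition mentioned above can be written compactly. A sketch, assuming local north-east-down (NED) coordinates; the symbols \(I_{\mathrm{GAD}}\) and \(\hat{g}\) are our notation, not the thesis's:

```latex
% GAD direction at site latitude \lambda (unit vector in NED coordinates):
\tan I_{\mathrm{GAD}} = 2\tan\lambda, \qquad D_{\mathrm{GAD}} = 0, \qquad
\hat{g} = \left(\cos I_{\mathrm{GAD}},\, 0,\, \sin I_{\mathrm{GAD}}\right)

% split of the reconstructed field vector \mathbf{B}:
B_{\mathrm{GAD}} = \mathbf{B}\cdot\hat{g}, \qquad
\mathbf{B}_{\mathrm{nonGAD}} = \mathbf{B} - B_{\mathrm{GAD}}\,\hat{g}
```

The residual \(\mathbf{B}_{\mathrm{nonGAD}}\) has two independent components in the plane perpendicular to \(\hat{g}\), matching the "two components perpendicular to it" in the text; a negative \(B_{\mathrm{GAD}}\) then corresponds to a fully reversed field.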
In summary, this doctoral thesis reconstructed high-resolution and high-fidelity PSV records from SE Black Sea sediments. The obtained record comprises three geomagnetic excursions, the Norwegian–Greenland Sea excursion, the Laschamp excursion, and the Mono Lake excursion, which are characterized by abnormal secular variations of different amplitudes centered at about 64.5 ka, 41.0 ka and 34.5 ka, respectively. In addition, the obtained PSV record from the Black Sea does not provide evidence for the postulated 'Hilina Pali excursion' at about 18.5 ka. Nevertheless, the obtained Black Sea paleomagnetic record, covering field fluctuations from normal secular variations, through excursions, to a short but full reversal, points to a geomagnetic field characterized by a large dynamic range in intensity and a highly variable superposition of dipole and non-dipole contributions from the geodynamo during the past 68.9 to 14.5 ka.
Due to the enhanced electromagnetic field at the tips of metal nanoparticles, the spiked structure of gold nanostars (AuNSs) is promising for surface-enhanced Raman scattering (SERS). The challenge, therefore, is the synthesis of well-designed particles with sharp tips. The influence of different surfactants, i.e., dioctyl sodium sulfosuccinate (AOT), sodium dodecyl sulfate (SDS), and benzylhexadecyldimethylammonium chloride (BDAC), as well as of surfactant mixtures, on the formation of nanostars in the presence of Ag⁺ ions and ascorbic acid was investigated. By varying the amount of BDAC in mixed micelles, the core/spike-shell morphology of the resulting AuNSs can be tuned from small cores to large ones with sharp and large spikes. The concomitant red-shift of the absorption toward the NIR region, without loss of SERS enhancement, enables their use for biological applications and for time-resolved spectroscopic studies of chemical reactions, which require a permanent supply of fresh and homogeneous solution. HRTEM micrographs and energy-dispersive X-ray (EDX) experiments allow us to verify the mechanism of nanostar formation, based on silver underpotential deposition on the spike surface in combination with micelle adsorption.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation to some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinguished vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and is widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as for the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, to a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
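Schematically, and with conventions varying across the literature (the overall prefactor, the reference length \(\ell\) and the regularization \(\sigma_\epsilon\) are convention-dependent), the Hadamard parametrix in four spacetime dimensions takes the form:

```latex
H(x,y) \;=\; \frac{u(x,y)}{\sigma_{\epsilon}(x,y)}
       \;+\; v(x,y)\,\log\!\left(\frac{\sigma_{\epsilon}(x,y)}{\ell^{2}}\right),
\qquad
v(x,y) \;=\; \sum_{k\geq 0} v_{k}(x,y)\,\sigma(x,y)^{k}
```

Here \(\sigma\) denotes the signed squared geodesic distance, and the Hadamard coefficients \(u, v_{k}\) are determined recursively by transport equations along geodesics; truncating the series yields the local parametrices from which the global bisolutions discussed above are built.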
Unfolding the history of one of the oldest human values, the freedom of expression, while defining its limits, is a complicated task. Does freedom stop where hate starts? This very old dilemma is, now more than ever before, revealing new dimensions. Politicians and new laws aim at regulating free expression, while disagreements over such regulation gradually become a source of endless conflict in newly formed multicultural, interconnected, and digitized societies. The example of the Network Enforcement Act is used to understand the idea of restrictive legal practices in Germany, but also to highlight the fact that law is a human construction created in order to regulate communication among individuals. Alternatives to strictly legal practices are summarized to show other dimensions of regulating hate speech without involving top-down approaches. The article proposes the approach of restorative justice as a combination of legal and mediative practices in cases of hate speech. One advantage of the restorative justice approach elaborated in this article is its potential to remedy the inner hate and the pain of both the victim and the perpetrator. Finally, revealing parts of history and new aspects of the ‘hate speech puzzle’ leads to a questioning of contemporary social structures that possibly generate hate itself.
Human Rights Lawyering (2019)
Skarn deposits are found on every continent and were formed at different times from the Precambrian to the Tertiary. Typically, the formation of a skarn is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, which results in the formation of calc-silicate minerals at the expense of carbonates. These newly formed minerals generally develop in a zoned metamorphic aureole, with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in magmatic fluids precipitate due to the change in fluid composition. The temperature decrease of the entire system, due to the cooling of magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with skarn formation during regional metamorphism and retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion, and that a first Sn enrichment occurred during skarn formation by regional metamorphic fluids. In a second step, at ca. 320 Ma, tin-bearing fluids were produced by the intrusion of the Eibenstock granite. Tin, which was added by the granite and remobilized from skarn calc-silicates, precipitated as cassiterite.
Compared with clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, isotopic and chemical data of skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that it is possible to form a tin-rich skarn without an associated granite when tin has already been transported from tin-bearing sediments by aqueous metamorphic fluids during regional metamorphism. Such skarns are economically uninteresting if tin is contained only in the skarn minerals. Later alteration of the skarn (the heat and fluid source is not necessarily a granite), however, can lead to the formation of secondary cassiterite (SnO2), which can make the skarn economically highly interesting.
This urban-planning thesis set out to reflect on the future of French and German metropolitan railway stations up to the year 2050. It questions the foundations of the station as a conceptual urban object (approached as a system) and hypothesizes that this object is, in a sense, endowed with autonomous properties. Among these properties, it is the constantly renewed and conflictual process of expansion and dialogue between the station and its surrounding urban fabric that guides this research, particularly in its relationship with the hypermobility of metropolises. To this end, the thesis draws on four case studies: the main stations of Cologne and Stuttgart in Germany, and the stations of Paris-Montparnasse and Lyon-Part-Dieu in France. It begins with a detailed history of their morphological evolution in order to identify a series of architectural and urban variables. It then proceeds with a series of prospective analyses, making it possible to assess the potential influence of public transport and mobility policies on the conceptual future of stations. The thesis then proposes the concept of the station-system (système-gare) to describe the expansion and integration of metropolitan stations with their urban environment, a process of dialectical negotiation that is not resolved by the concept of the station as a place of life/city (lieu de vie/ville). It thus invites us to think of the station as a heterotopia, and proposes a depolarized and de-hierarchized reading of these spaces by introducing the concepts of the orchestra of stations (orchestre de gares) and the metastation (métagare). Finally, this research offers a critical reading of the 'digital city' and of the concept of 'mobility as a service'. To avoid a potentially damaging just-in-time management of flows, the application of these concepts in stations cannot forgo a simultaneous expansion of physical spaces.
Women are often underrepresented in math-intensive fields like the physical sciences, technology, engineering and mathematics. By comparison, boys are less likely than girls to strive for jobs in social and human-services domains. Relatively few studies have considered that intra-individual comparisons across domains may contribute to gendered occupational choices. This study examines whether girls’ and boys’ motivational beliefs in mathematics and language arts are predictive of their career plans in these fields. The study focuses on same-domain and cross-domain effects and investigates bidirectional relations between motivational beliefs and career plans. Data for this study stem from 1,117 ninth and tenth graders (53.2% girls) from secondary schools in Berlin, Germany. Findings show systematic gender differences in same-domain effects in mathematics: girls’ comparatively lower mathematics self-concept and intrinsic value predicted a lower likelihood of striving for a math-related career. Cross-domain effects were not related to gender-specific career plans, with only one exception. Girls’ lower levels of intrinsic value in mathematics corresponded to a higher likelihood of striving for a career in language-related fields, which subsequently predicted lower levels of intrinsic value in mathematics. This finding points to a need to address both gender-specific motivational beliefs and gender-specific career plans in school when aiming to promote greater gender equality in girls’ and boys’ occupational choices.
Floods are among the most costly natural hazards that affect Europe and Germany, demanding continuous adaptation of flood risk management. While social and economic development in recent years has altered flood risk patterns mainly with regard to an increase in flood exposure, flood events are further expected to increase in frequency and severity in certain European regions due to climate change. As a result of recent major flood events in Germany, German flood risk management has shifted to more integrated approaches that include private precaution and preparedness to reduce the damage to exposed assets. Yet, detailed insights into the preparedness decisions of flood-prone households remain scarce, especially in connection with mental impacts and individual coping strategies after being affected by different flood types.
This thesis aims to gain insights into flash floods as a costly hazard in certain German regions and compares their damage-driving factors with those of river floods. Furthermore, the psychological impacts as well as the effects on coping and mitigation behaviour of flood-affected households are assessed. In this context, psychological models such as the Protection Motivation Theory (PMT), and methods such as regressions and Bayesian statistics, are used to evaluate factors influencing mental coping after an event and to identify psychological variables connected to intended private flood mitigation. The database consists of surveys conducted among affected households after the major river floods of 2013 and the flash floods of 2016.
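As an illustration of the kind of regression analysis mentioned, a minimal sketch on synthetic data follows. The construct names `threat_appraisal` and `coping_appraisal` are placeholders inspired by PMT, not the survey's actual variables, and the coefficients are invented:

```python
import numpy as np

# Synthetic illustration of an OLS regression of mitigation intention on
# Protection Motivation Theory constructs (all names/values hypothetical).
rng = np.random.default_rng(0)
n = 500
threat_appraisal = rng.normal(size=n)
coping_appraisal = rng.normal(size=n)

true_beta = np.array([0.5, 0.2, 0.6])   # intercept, threat, coping
X = np.column_stack([np.ones(n), threat_appraisal, coping_appraisal])
y = X @ true_beta + rng.normal(scale=0.3, size=n)  # mitigation intention

# Ordinary least squares via numpy's least-squares solver
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough respondents, the estimated coefficients recover the data-generating ones; in a real analysis, standard errors and model diagnostics would of course be reported as well.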
The main conclusions that can be drawn from this thesis are that the damage patterns and damage-driving factors of strong flash floods differ significantly from those of river floods, due to rapid flow origination, higher flow velocities and greater flow forces. The effects on the mental coping of people affected by flood events appear to be only weakly influenced by flood type, but are related to event severity: frequent thinking about the respective event is pronounced and also connected to a higher mitigation motivation. Mental coping and preparation after floods are further influenced by good information provision and a social environment that encourages a positive attitude towards private mitigation.
As an overall recommendation, approaches to integrated flood risk management in Germany should be pursued that also take flash floods into account and consider the psychological characteristics of affected households in order to support and promote private flood mitigation. Targeted information campaigns that address coping options and discuss current flood risks are important for better preparing for future flood hazards in Germany.
Background: The distribution of pronouns varies cross-linguistically. This distribution has led to conflicting results in studies that investigated pronoun resolution in agrammatic individuals. Within pronominal resolution, the linguistic phenomenon of "resumption" is understudied in agrammatism. The construction of pronominal resolution in Akan presents the opportunity to thoroughly examine resumption. Aims: First, the present study examines the production of (pronominal) resumption in Akan focus constructions (who-questions and focused declaratives). Second, we explore the effect of grammatical tone on the processing of (pronominal) resumption, since Akan is a tonal language. Methods & Procedures: First, we tested the ability to distinguish linguistic and non-linguistic tone in Akan agrammatic speakers. Then, we administered an elicitation task to five Akan agrammatic individuals, controlling for the structural variations in the realization of resumption: focused who-questions and declaratives with (i) only a resumptive pronoun, (ii) only a clause determiner, (iii) a resumptive pronoun and a clause determiner co-occurring, and (iv) neither a resumptive pronoun nor a clause determiner. Outcomes & Results: Tone discrimination, both for pitch and for lexical tone, was unimpaired. The production task demonstrated that the production of resumptive pronouns and clause determiners was intact. However, the production of declarative sentences in derived word order was impaired; wh-object questions were relatively well preserved. Conclusions: We argue that the problems with sentence production are highly selective: linguistic tones and resumption are intact, but word order is impaired in non-canonical declarative sentences.
Previous cross-modal priming studies showed that lexical decisions to words presented after a pronoun were facilitated when these words were semantically related to the pronoun’s antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extend to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word-pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than on automatic spreading activation between semantically related words.
This article offers an in-depth analysis of one particular type of meta-talk. It looks at how speakers use the meta-pragmatic claim to have previously communicated ('said' or 'meant') the same as, or the equivalent of, what their interlocutor just said. Through detailed sequential analyses, it is shown that this claim is frequently used as a practice for disarming disaffiliative responses and thus to manage (and often resolve) incipient disagreement. Besides unpacking the precise mechanisms underlying this practice, the paper also takes stock of the various (and partly variable) lexico-morpho-syntactic, prosodic and bodily-visual elements of conduct that recurrently enter into its composition. Since the practice essentially rests on the speaker’s insinuation of having been misunderstood by their co-participant, its relationship to the organization of repair will also be discussed. It is argued that the practice operates precisely at the intersection of stance-management (agreement/disagreement) and repair, and that it exhibits features which reflect this intersectional character. Data are in English.
Background: Although clinical supervision is considered to be a major component of the development and maintenance of psychotherapeutic competencies, and despite an increase in supervision research, the empirical evidence on the topic remains sparse.
Methods: Because most previous reviews lack methodological rigor, we aimed to review the status and quality of the empirical literature on clinical supervision, and to provide suggestions for future research. MEDLINE, PsycInfo and the Web of Science Core Collection were searched and the review was conducted according to current guidelines. From the review results, we derived suggestions for future research on clinical supervision.
Results: The systematic literature search identified 19 publications from 15 empirical studies. Taking the review results into account, the following suggestions for further research emerged: supervision research would benefit from proper descriptions of how studies are conducted according to current guidelines, from more methodologically rigorous empirical studies, from the investigation of active supervision interventions, from taking diverse outcome domains into account, and from investigating supervision from a meta-theoretical perspective.
Conclusions: In all, the systematic review supported the notion that supervision research often lags behind psychotherapy research in general. Still, the results offer detailed starting points for further supervision research.
Lake sediments are increasingly explored as reliable paleoflood archives. In addition to established flood proxies including detrital layer thickness, chemical composition, and grain size, we explore stable oxygen and carbon isotope data as paleoflood proxies for lakes in catchments with carbonate bedrock geology. In a case study from Lake Mondsee (Austria), we integrate high-resolution sediment trapping at a proximal and a distal location with stable isotope analyses of varved lake sediments to investigate flood-triggered detrital sediment flux. First, we demonstrate a relation between runoff, detrital sediment flux, and isotope values in the sediment trap record covering the period 2011-2013 CE, including 22 events with daily (hourly) peak runoff ranging from 10 (24) m³ s⁻¹ to 79 (110) m³ s⁻¹. The three- to ten-fold lower flood-triggered detrital sediment deposition in the distal trap is well reflected by attenuated peaks in the stable isotope values of trapped sediments. Next, we show that all nine flood-triggered detrital layers deposited in a sediment record from 1988 to 2013 have elevated isotope values compared with endogenic calcite. In addition, even two runoff events that did not cause the deposition of visible detrital layers are distinguished by higher isotope values. Empirical thresholds in the isotope data allow estimation of the magnitudes of the majority of floods, although in some cases flood magnitudes are overestimated because local effects can result in overly high isotope values. Hence we present a proof of concept for stable isotopes as a reliable tool for reconstructing flood frequency and, albeit with some limitations, even flood magnitudes.
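The empirical-threshold idea described above can be sketched as a simple classifier: flag a layer as flood-triggered when its isotope value exceeds the endogenic-calcite baseline by more than an empirically chosen anomaly. All numbers below are hypothetical placeholders, not values from the study:

```python
# Illustrative sketch (hypothetical numbers): flag flood layers whose
# delta18O exceeds the endogenic-calcite baseline by more than a threshold.
ENDOGENIC_BASELINE = -8.0   # per mil, hypothetical
THRESHOLD = 1.5             # per mil anomaly, hypothetical

layers = [
    {"year": 2002, "d18o": -7.9},   # near baseline -> endogenic
    {"year": 2005, "d18o": -5.8},   # strong detrital (flood) signal
    {"year": 2010, "d18o": -6.8},   # elevated, but below threshold
]

def flood_layers(layers, baseline=ENDOGENIC_BASELINE, threshold=THRESHOLD):
    """Return layers whose isotope anomaly relative to the endogenic
    baseline exceeds the empirical flood threshold."""
    return [l for l in layers if l["d18o"] - baseline > threshold]
```

In the study itself, such thresholds are calibrated against the monitored runoff events; the sketch only shows the classification logic.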
Organic semiconductors are a promising class of materials. Their special properties are particularly good absorption, low weight and easy processing into thin films. Therefore, intense research has been devoted to the realization of thin-film organic solar cells (OPVs). Because of the low dielectric constant of organic semiconductors, primary excitations (excitons) are strongly bound, and a type II heterojunction needs to be introduced to split these excitations into free charges. Therefore, most organic solar cells consist of at least an electron donor and an electron acceptor material. For such donor-acceptor systems, mainly three states are relevant: the photoexcited exciton on the donor or acceptor material, the charge transfer state at the donor-acceptor interface, and the charge-separated state of a free electron and hole. The interplay between these states significantly determines the efficiency of organic solar cells. Due to the high absorption and the low charge carrier mobilities, the active layers are usually thin; moreover, exciton dissociation and free charge formation proceed rapidly, which makes the study of carrier dynamics highly challenging.
Therefore, the focus of this work was first to install new experimental setups for the investigation of the charge carrier dynamics in complete devices with superior sensitivity and time resolution and, second, to apply these methods to prototypical photovoltaic materials to address specific questions in the field of organic and hybrid photovoltaics.
Regarding the first goal, a new setup combining transient absorption spectroscopy (TAS) and time delayed collection field (TDCF) was designed and installed in Potsdam. An important part of this work concerned the improvement of the electronic components with respect to time resolution and sensitivity. To this end, a highly sensitive amplifier for driving and detecting the device response in TDCF was developed. This system was then applied to selected organic and hybrid model systems with a particular focus on the understanding of the loss mechanisms that limit the fill factor and short circuit current of organic solar cells.
The first model system was a hybrid photovoltaic material comprising inorganic quantum dots decorated with organic ligands. Measurements with TDCF revealed fast free carrier recombination, in part assisted by traps, while bias-assisted charge extraction measurements showed high mobility. The measured parameters then served as input for a successful description of the device performance with an analytical model.
With a further improvement of the instrumentation, a second topic was the detailed analysis of non-geminate recombination in a disordered polymer:fullerene blend, where an important question was the effect of disorder on the carrier dynamics. The measurements revealed that, at early times, highly mobile charges undergo fast non-geminate recombination at the contacts, causing an apparent field dependence of free charge generation in TDCF experiments if these are not conducted properly. On longer time scales, by contrast, recombination was determined by dispersive recombination in the bulk of the active layer, showing the characteristics of carrier dynamics in an exponential density-of-states distribution. Importantly, the comparison with steady-state recombination data suggested a very weak impact of non-thermalized carriers on the recombination properties of the solar cells under application-relevant illumination conditions.
Finally, temperature- and field-dependent studies of free charge generation were performed on three donor-acceptor combinations, with two donor polymers of the same material family blended with two different fullerene acceptor molecules. These particular material combinations were chosen to analyze the influence of the energetics and morphology of the blend on the efficiency of charge generation. To this end, activation energies for photocurrent generation were accurately determined for a wide range of excitation energies. The results prove that free charge formation proceeds via thermalized charge transfer states and does not involve hot exciton splitting. Surprisingly, the activation energies were of the order of the thermal energy at room temperature. This led to the important conclusion that organic solar cells perform well not because of predominant high-energy pathways but because the thermalized CT states are weakly bound. In addition, a model is introduced that interconnects the dissociation efficiency of the charge transfer state with its recombination observable in photoluminescence, which rules out a previously proposed two-pool model for free charge formation and recombination. Finally, based on the results, proposals for the further development of organic solar cells are formulated.
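Why an activation energy of the order of kT implies weakly bound, efficiently dissociating CT states can be illustrated with a minimal kinetic branching sketch. This is an assumption for illustration, not the model introduced in the thesis; all rates and energies below are hypothetical:

```python
import math

# Minimal kinetic sketch (hypothetical rates, not the thesis model): a
# thermalized charge-transfer (CT) state either dissociates into free
# charges with an Arrhenius-activated rate k_d = k0 * exp(-E_a / kT) or
# decays to the ground state with a rate k_f. The dissociation yield is
# the branching ratio eta = k_d / (k_d + k_f).
K_B = 8.617e-5  # Boltzmann constant in eV/K

def ct_dissociation_yield(E_a, T, k0_d=1e12, k_f=1e9):
    """Dissociation yield of a CT state with activation energy E_a (eV)."""
    k_d = k0_d * math.exp(-E_a / (K_B * T))
    return k_d / (k_d + k_f)

# When E_a is of the order of kT (~25 meV at 300 K), the yield stays high
# and only weakly temperature dependent, consistent with weakly bound CT
# states; a large E_a would make the yield small and strongly activated.
for T in (250, 300, 350):
    print(T, round(ct_dissociation_yield(0.025, T), 4))
```

The numerical values of k0_d and k_f are placeholders chosen only to make the branching behavior visible.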
„Blame it on the Russians“
(2019)
Idioms in the World
(2019)
Sociometrically neglected children are not often liked and not often disliked by their peers. This kind of social information is known as social status. Evidence concerning internalizing behaviour of neglected children is as yet equivocal. Contradictory research results could possibly be attributed to methodological issues of social status classification methods. Therefore, we will paradigmatically emphasize insufficiencies of one social status classification method. Since arbitrary cutoffs (sociometric data) provide the basis for the categorical classification of social status groups, the classification approach lacks precision and consistency. Furthermore, social status classification discounts the multidimensional nature of a child’s social status (social status group affiliation is mutually exclusive), disregards between-peer-group differences in the sociometric data, and offers a peer-group-norm-referenced interpretation. By contrast, we will highlight some advantages of the newly introduced social status extreme points procedure, which describes a child’s social status in terms of the child’s adaptation to sociometric extreme points. The continuous social status extreme points variables offer a criterion-referenced interpretation (multidimensionality: degree of adaptation to each and every sociometric extreme point). The performance and agreement of both methods will be demonstrated using empirical data (N = 316 children within 22 school classes).
Once the “popular plaything of Realpolitiker”, the doctrine of rebus sic stantibus post the 1969 VCLT is often described as an objective rule by which, on grounds of equity and justice, a fundamental change of circumstances may be invoked as a ground for termination. Yet recent practice from States such as Ecuador, Russia, Denmark and the United Kingdom suggests that the doctrine is returning with a new livery. These States point to an understanding based on vital State interests, a view popular among scholars such as Erich Kaufmann at the beginning of the last century.
The present work is a compilation of three original research articles published in or submitted to international peer-reviewed venues in the field of speech science. These three articles address fundamental motor laws in speech and the dynamics of the corresponding speech movements:
1. Kuberski, Stephan R. and Adamantios I. Gafos (2019). "The speed-curvature power law in tongue movements of repetitive speech". PLOS ONE 14(3). Public Library of Science. doi: 10.1371/journal.pone.0213851.
2. Kuberski, Stephan R. and Adamantios I. Gafos (In press). "Fitts' law in tongue movements of repetitive speech". Phonetica: International Journal of Phonetic Science. Karger Publishers. doi: 10.1159/000501644.
3. Kuberski, Stephan R. and Adamantios I. Gafos (submitted). "Distinct phase space topologies of identical phonemic sequences". Language. Linguistic Society of America.
The present work introduces a metronome-driven speech elicitation paradigm in which participants were asked to utter repetitive sequences of elementary consonant-vowel syllables. This paradigm, explicitly designed to cover a substantially wider range of speech rates than explored in previous work, is demonstrated to satisfy the important prerequisites for assessing hitherto difficult-to-access aspects of speech. Specifically, the paradigm's extensive speech rate manipulation enabled the elicitation of a great range of movement speeds as well as movement durations and excursions of the relevant effectors. The presence of such variation is a prerequisite for assessing whether invariant relations between these and other parameters exist, and thus provides the foundation for a rigorous evaluation of the two laws examined in the first two contributions of this work.
In the data resulting from this paradigm, it is shown that speech movements obey the same fundamental laws as movements from other domains of motor control. In particular, it is demonstrated that speech strongly adheres to the power-law relation between the speed and curvature of movement, with a clear speech rate dependency of the power law's exponent. The often-sought or reported exponent of one third in the statement of the law is unique to a subclass of movements corresponding to the range of faster rates at which a particular utterance is produced. For slower rates, significantly larger values than one third are observed. Furthermore, for the first time in speech, this work uncovers evidence for the presence of Fitts' law. It is shown that, beyond a speaker-specific speech rate, speech movements of the tongue clearly obey Fitts' law, as evidenced by the emergence of its characteristic linear relation between movement time and index of difficulty. For slower speech rates (when temporal pressure is small), no such relation is observed. The methods and datasets obtained in the two assessments above provide a rigorous foundation both for addressing implications for theories and models of speech and for better understanding the status of speech movements in the context of human movements in general.
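How the two laws are typically quantified can be sketched with synthetic data (illustrative only, not the study's measurements): the power-law exponent is obtained as the slope of log speed against log curvature, and Fitts' law as a linear fit of movement time against the index of difficulty ID = log2(2A/W).

```python
import numpy as np

# Illustrative sketch on synthetic data: (1) the speed-curvature power law
# v = k * C**(-beta) is estimated as the (negated) slope of log v against
# log C; (2) Fitts' law MT = a + b * ID is a linear fit of movement time
# against the index of difficulty ID = log2(2A / W).
rng = np.random.default_rng(0)

# (1) synthetic speed/curvature samples obeying a one-third power law
curvature = rng.uniform(0.5, 20.0, 200)
speed = 2.0 * curvature ** (-1.0 / 3.0)
beta = -np.polyfit(np.log(curvature), np.log(speed), 1)[0]
print(f"estimated power-law exponent: {beta:.3f}")  # 0.333

# (2) synthetic movement times obeying Fitts' law with a=0.1 s, b=0.15 s/bit
amplitude = 0.08                                   # movement amplitude A (m)
width = np.array([0.002, 0.004, 0.008, 0.016])     # target widths W (m)
ID = np.log2(2 * amplitude / width)                # index of difficulty (bit)
MT = 0.1 + 0.15 * ID                               # movement times (s)
b, a = np.polyfit(ID, MT, 1)
print(f"Fitts' law fit: MT = {a:.2f} + {b:.2f} * ID")
```

In real data the fits would of course carry noise; the synthetic samples here are exact so that the recovered parameters match the generating ones.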
All modern theories of language rely on a fundamental segmental hypothesis according to which the phonological message of an utterance is represented by a sequence of segments or phonemes. It is commonly assumed that each of these phonemes can be mapped to some unit of speech motor action, a so-called speech gesture.
For the first time here, it is demonstrated that the relation between the phonological description of simple utterances and the corresponding speech motor action is non-unique. Specifically, by means of the extensive speech rate manipulation in the experimental paradigm used herein, it is demonstrated that speech exhibits clearly distinct dynamical organizations underlying the production of simple utterances. At slower speech rates, the dynamical organization underlying the repetitive production of elementary /CV/ syllables can be described by successive concatenations of closing and opening gestures, each with its own equilibrium point. As speech rate increases, the equilibria of opening and closing gestures are no longer equally stable, yielding qualitatively different modes of organization with either a single equilibrium point of a combined opening-closing gesture or a periodic attractor unleashed by the disappearance of both equilibria. This observation, the non-uniqueness of the dynamical organization underlying what on the surface appear to be identical phonemic sequences, is an entirely new result in the domain of speech. Beyond that, the demonstration of periodic attractors in speech reveals that dynamical equilibrium point models do not account for all possible modes of speech motor behavior.
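The qualitative distinction between an equilibrium-point organization and a periodic attractor can be illustrated with a toy oscillator (an illustration only, not the task-dynamic model of the thesis): in a van der Pol-type system, a single parameter decides whether trajectories settle at an equilibrium or on a limit cycle.

```python
# Toy illustration of the two attractor types: in the van der Pol-type
# oscillator  x'' - mu * (1 - x**2) * x' + x = 0, the parameter mu switches
# the long-term behavior between a stable equilibrium point (mu < 0) and a
# periodic attractor, i.e. a limit cycle (mu > 0).
def simulate(mu, x0=1.0, v0=0.0, dt=0.001, steps=60000):
    """Integrate with explicit Euler; return the late-time amplitude."""
    x, v = x0, v0
    late_amplitude = 0.0
    for i in range(steps):
        x, v = x + dt * v, v + dt * (mu * (1 - x * x) * v - x)
        if i > steps // 2:  # record amplitude only after transients decay
            late_amplitude = max(late_amplitude, abs(x))
    return late_amplitude

print(simulate(-0.5))  # near 0: trajectories settle at the equilibrium
print(simulate(+0.5))  # near 2: trajectories settle on a limit cycle
```

The point of the sketch is only the topological contrast: identical equations, identical initial conditions, yet qualitatively different long-term organizations depending on one parameter, analogous to the rate-dependent reorganization reported above.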
Open-circuit voltages of lead-halide perovskite solar cells are improving rapidly and are approaching the thermodynamic limit. Since many different perovskite compositions with different bandgap energies are actively being investigated, it is not straightforward to compare the open-circuit voltages between these devices as long as a consistent method of referencing is missing. For the purpose of comparing open-circuit voltages and identifying outstanding values, it is imperative to use a unique, generally accepted way of calculating the thermodynamic limit, which is currently not the case. Here a meta-analysis of methods to determine the bandgap and a radiative limit for open-circuit voltage is presented. The differences between the methods are analyzed and an easily applicable approach based on the solar cell quantum efficiency as a general reference is proposed.
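One widely used way to compute such a radiative reference follows detailed-balance reasoning (assumed here for illustration; the paper's specific proposal is a quantum-efficiency-based reference): the radiative dark saturation current is obtained from the quantum efficiency weighted by the 300 K blackbody photon flux, and the radiative open-circuit voltage limit follows from the diode law. The step-function EQE, the bandgap of 1.60 eV, and the Jsc of 22 mA/cm² below are assumed example values.

```python
import numpy as np

# Detailed-balance sketch (assumed example values): compute the radiative
# dark saturation current J0_rad = q * integral of EQE(E) * phi_BB(E) dE,
# then Voc_rad = (kT/q) * ln(Jsc / J0_rad + 1). EQE is idealized as a step
# function at the bandgap.
q = 1.602176634e-19   # elementary charge, C
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
T = 300.0             # cell temperature, K

def voc_radiative_limit(E_gap_eV, jsc_A_per_cm2, eqe_above_gap=1.0):
    E_gap = E_gap_eV * q
    E = np.linspace(E_gap, E_gap + 20 * k * T, 2000)  # J
    # blackbody photon flux per unit energy (photons m^-2 s^-1 J^-1)
    phi_bb = 2 * np.pi * E**2 / (h**3 * c**2) / (np.exp(E / (k * T)) - 1)
    # trapezoidal integration of EQE * phi_BB over energy
    j0 = q * eqe_above_gap * np.sum(0.5 * (phi_bb[1:] + phi_bb[:-1]) * np.diff(E))
    j0_cm2 = j0 / 1e4  # A/m^2 -> A/cm^2
    return (k * T / q) * np.log(jsc_A_per_cm2 / j0_cm2 + 1)

# e.g. a perovskite-like cell: Eg = 1.60 eV, Jsc = 22 mA/cm^2 (assumed)
print(f"radiative Voc limit: {voc_radiative_limit(1.60, 0.022):.3f} V")
```

Measured open-circuit voltages can then be quoted as a deficit relative to this reference, which is what makes devices with different bandgaps comparable.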
The title compounds, 2-azaspiro[4.5]deca-1-one, C₉H₁₅NO, (1a), cis-8-methyl-2-azaspiro[4.5]deca-1-one, C₁₀H₁₇NO, (1b), and trans-8-methyl-2-azaspiro[4.5]deca-1-one, C₁₀H₁₇NO, (1c), were synthesized from benzoic acids 2 in only 3 steps in high yields. Crystallization from n-hexane afforded single crystals, suitable for X-ray diffraction. Thus, the configurations, conformations, and interesting crystal packing effects have been determined unequivocally. The bicyclic skeleton consists of a lactam ring, attached by a spiro junction to a cyclohexane ring. The lactam ring adopts an envelope conformation and the cyclohexane ring has a chair conformation. The main difference between compound 1b and compound 1c is the position of the carbonyl group on the 2-pyrrolidine ring with respect to the methyl group on the 8-position of the cyclohexane ring, which is cis (1b) or trans (1c). A remarkable feature of all three compounds is the existence of a mirror plane within the molecule. Given that all compounds crystallize in centrosymmetric space groups, the packing always contains interesting enantiomer-like pairs. Finally, the structures are stabilized by intermolecular N–H···O hydrogen bonds.
Earthquake swarms are characterized by large numbers of events occurring within a short period of time in a confined source volume and, in contrast to tectonic sequences, without a significant mainshock-aftershock pattern. Intraplate swarms in the absence of active volcanism usually occur in continental rifts, as for example in the Eger Rift zone in North West Bohemia, Czech Republic. A common hypothesis links event triggering to pressurized fluids. However, the exact causal chain is often poorly understood, since the underlying geotectonic processes are slow compared to tectonic sequences. The high event rate during active periods challenges standard seismological routines, as these are often designed for single events and are therefore costly in terms of human resources when working with phase picks, or computationally costly when exploiting full waveforms.
This methodological thesis develops new approaches to analyze earthquake swarm seismicity as well as the underlying seismogenic volume. It focuses on the region of North West (NW) Bohemia, a well-studied, well-monitored earthquake swarm region.
In this work I develop and test an innovative approach to detect and locate earthquakes using deep convolutional neural networks. This technology offers great potential, as it allows large amounts of data to be processed efficiently, which becomes increasingly important given that seismological data storage grows at an increasing pace. The proposed deep neural network, trained on NW Bohemian earthquake swarm records, is able to locate 1000 events in less than 1 second using full waveforms, while approaching the precision of double-difference relocated catalogs. A further technological novelty is that the trained filters of the deep neural network's first layer can be repurposed to function as a pattern-matching event detector without additional training on noise datasets. For further methodological development and benchmarking, I present a new toolbox that generates realistic earthquake cluster catalogs as well as synthetic full waveforms of those clusters in an automated fashion. The input is parameterized using constraints on source volume geometry, nucleation, and frequency-magnitude relations. The toolbox harnesses recorded noise to produce highly realistic synthetic data for benchmarking and development. This tool is used to study and assess detection performance, in terms of the magnitude of completeness Mc, of a full waveform detector applied to synthetic data of a hydrofracturing experiment at the Wysin site, Poland.
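The pattern-matching idea mentioned above can be sketched, in a much simplified classical form, as a normalized cross-correlation (matched-filter) detector on synthetic data. This is not the thesis' trained-filter detector; template, record, and threshold below are all illustrative:

```python
import numpy as np

# Minimal matched-filter sketch on synthetic data: slide a waveform
# template over a continuous record and declare a detection wherever the
# normalized cross-correlation exceeds a threshold.
def matched_filter_detect(record, template, threshold=0.8):
    n = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    detections = []
    for i in range(len(record) - n + 1):
        w = record[i:i + n] - record[i:i + n].mean()
        norm = np.linalg.norm(w)
        if norm == 0:
            continue
        cc = float(np.dot(w / norm, t))  # normalized cross-correlation
        if cc > threshold:
            detections.append(i)
    return detections

rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)
record = 0.05 * rng.standard_normal(2000)  # background noise
record[700:800] += template                # event buried at sample 700
hits = matched_filter_detect(record, template)
print(hits[:3])  # detections cluster around sample 700
```

A production detector would use FFT-based correlation for speed; the plain loop here is kept for readability.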
Finally, I present and demonstrate a novel approach to overcome the masking effects of wave propagation between earthquake and station and to determine attenuation directly in the source volume where clustered earthquakes occur. The new event-couple spectral ratio approach exploits the high-frequency spectral slopes of two events sharing the greater part of their ray paths. Synthetic tests based on the toolbox mentioned above show that this method is able to infer seismic wave attenuation within the source volume at high spatial resolution. Furthermore, it is independent of the distance to a station as well as of the complexity of the attenuation and velocity structure outside the source volume of the swarms. The application to recordings of the NW Bohemian earthquake swarms shows increased P-phase attenuation within the source volume (Qp < 100), based on results at a station located close to the village of Luby (LBC). The recordings of a station in epicentral proximity, close to Nový Kostel (NKC), show a relatively high complexity, indicating that waves arriving at that station experience more scattering than signals recorded at other stations. This high level of complexity destabilizes the inversion; the Q estimate at NKC is therefore not reliable, and an independent confirmation of the high-attenuation finding, given the geometrical and frequency constraints, remains outstanding. However, high attenuation in the source volume of the NW Bohemian swarms has been postulated before, in relation to an expected, highly damaged zone bearing CO₂ at high pressure.
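The spectral-ratio principle behind the event-couple approach can be sketched with synthetic spectra (a simplification with assumed values, not NW Bohemian data): for two events whose rays differ only inside the source volume, the log spectral ratio falls off linearly with frequency, and its slope gives the differential attenuation operator.

```python
import numpy as np

# Simplified spectral-ratio sketch on synthetic spectra: above the corner
# frequency, attenuation shapes an amplitude spectrum as exp(-pi*f*t_star)
# with t_star = travel_time / Q. For an event couple, the log spectral
# ratio is then  ln[A1(f)/A2(f)] = const - pi * f * (t_star_1 - t_star_2),
# so the differential t* follows from a straight-line fit.
f = np.linspace(5.0, 50.0, 90)            # Hz, high-frequency band
t_star_1, t_star_2 = 0.020, 0.012         # s, assumed attenuation operators
A1 = 3.0 * np.exp(-np.pi * f * t_star_1)  # synthetic amplitude spectra with
A2 = 1.5 * np.exp(-np.pi * f * t_star_2)  # flat source spectra assumed

slope = np.polyfit(f, np.log(A1 / A2), 1)[0]
dt_star = -slope / np.pi
print(f"recovered differential t*: {dt_star * 1e3:.2f} ms")  # 8.00 ms

# assuming a differential travel time of 0.5 s inside the source volume:
Q = 0.5 / dt_star
print(f"implied Q in the source volume: {Q:.1f}")
```

Real spectra additionally carry source and site terms; the point of the sketch is only that sharing most of the ray path cancels the common propagation effects, isolating the source-volume contribution.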
The methods developed in the course of this thesis have the potential to improve our understanding of the role of fluids and gases in intraplate event clustering.
A growing demand for natural resources, embedded in current changes of the international order, will put pressure on states to secure the future availability of these resources. Some political discourses suggest that states might respond by challenging the foundations of international law. Whereas the UN Charter was, inter alia, aimed at eliminating uses of force for economic reasons, one may observe an ongoing trend towards the securitization of matters of resource supply, resulting in the revival of self-preservation doctrines. The chapter shows that such claims lack a normative foundation in the current framework of the prohibition of the use of force. Moreover, international law has sufficient instruments to cope with disputes over access to resources by means other than the use of force. The international community must therefore oppose claims that may contribute to normative uncertainty and strengthen the already existing instruments of pacific settlement of disputes.
The paper aims to lay out a framework for evaluating value shifts in the international legal order for the purposes of a forthcoming book. In view of current contestations, it asks whether we are observing yet another period of norm change (Wandel) or a more fundamental transformation of international law, a metamorphosis (Verwandlung). For this purpose it suggests looking into the mechanisms of how norms change from the perspectives of legal and political science, and approximating a reference point where change turns into metamorphosis. It submits that such a point may be reached where specific legally protected values are indeed changing (change of legal values) or where the very idea of protecting certain values through law is renounced (delegalization of values). The paper discusses the benefits of such an interdisciplinary exchange and tries to identify differences and commonalities between the two disciplinary perspectives.
The worldwide populist wave has contributed to a perception that international law is currently in a state of crisis. This article examines to what extent populist governments have challenged prevailing interpretations of international law. It links structural features of populism with an analysis of populist governmental strategies and argumentative practices, and demonstrates that, in their rhetoric, populist governments promote an understanding of international law as a mere law of coordination. This is, however, not entirely reflected in their legal practices, where an instrumental, cherry-picking approach prevails. The article concludes that the policies of populist governments affect the current state of international law on two different levels: in the political sphere, their practices alter the general environment in which legal rules are interpreted; in the legal sphere, populist governments push for changes in the interpretation of established international legal rules. The article substantiates these propositions by focusing on the principle of non-intervention and foreign funding for NGOs.