“Thanks in Advance”
(2019)
This paper studies the effect of the commonly used phrase “thanks in advance” on compliance with a small request. In a controlled laboratory experiment we ask participants to give a detailed answer to an open question. The treatment variable is whether or not they see the phrase “thanks in advance.” Our participants react to the treatment by exerting less effort in answering the request even though they perceive the phrase as polite.
“Mason without apron”
(2019)
While the lack of religion in Alexander von Humboldt’s work and the criticism he received for it are well known, his relationship with Freemasonry is relatively unexplored. Humboldt appears on some lists of “illustrious Masons,” and several lodges carry his name, but was he really a member? If so, when and where did he join a lodge? Are there any comments by him about Freemasonry? Who were the renowned Masons he was surrounded by? This paper examines these questions, but more importantly it analyzes what a membership might have meant for Humboldt’s scholarly work. It looks particularly at the unprecedented success he enjoyed in the United States in the early 19th century and the factors behind it. What could he have gained from these connections and how was he viewed by Masonic leaders and lodges in the trans-Atlantic world?
“I mean, no soy psicóloga”
(2019)
This paper is concerned with the qualitative analysis of the use of the English discourse marker I mean in Spanish and Portuguese online discourses (in online fora, blogs or user comments on websites). The examples are retrieved from the Corpus del Español (Web/Dialects) as well as the Corpus do Português (Web/Dialects).
Yiddish in the Andes
(2019)
This article elucidates the efforts of Chilean-Jewish activists to create, manage and protect Chilean Yiddish culture. It illuminates how Yiddish cultural leaders in small diasporas, such as Chile, worked to maintain dialogue with other Jewish centers. Chilean culturists maintained that a unique Latin American Jewish culture existed and needed to be strengthened through the joint efforts of all Yiddish actors on the continent. Chilean activists envisioned a modern Jewish culture informed by both Eastern European influences and local Jewish cultural production, as well as by exchanges with non-Jewish Latin American majority cultures.
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment words forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in this thesis (the Headturn Preference Procedure), and to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. We then conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults assign a strong weight to prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weight these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, while 9-month-olds did not show a preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, enabling them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
Since 1980, Iraq has passed through various wars and conflicts, including the Iraq–Iran war; Saddam Hussein’s Anfal and Halabja campaigns against the Kurds and the killing campaigns against the Shia in 1986; the invasion of Kuwait in August 1990 and the ensuing Gulf War; the Iraq war of 2003 and the fall of Saddam; the conflict and chaos in the transfer of power after Saddam’s death; and the war against ISIS. All these wars left severe impacts on most households in Iraq, on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural and religious spheres. The social structure, norms and attitudes have been intensely affected. Many women, especially divorced women, found themselves facing difficult social and economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Considering that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was viewed as reliable for developing a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in the city of Sulaimani in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological framework of the research, and the main data collection method was individual intensive narrative interviews with divorced women.
Women in general, and divorced women in particular, live in a patriarchal society that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in this situation and the forms of social insecurity they experience. In focus are the social institutions, from the family, a highly significant institution for women, to the governmental and non-governmental institutions working to support women, as well as women’s coping strategies. The main argument of the research is that the family plays an ambivalent role in divorced women’s lives: on the one hand, families were revealed to be an essential source of security for most respondents; on the other hand, families also posed many threats and restrictions on these women. This argument is supported by what Suad Joseph calls "the paradox of support and suppression". Another important finding is that state institutions (laws, the constitution, and the offices for combating violence against women and the family) support women to some extent and offer them protection from insecurity, but the existence of these laws clearly does not stop violence against women in Iraqi Kurdistan. As Pateman explains, the law, or the contract, is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in undermining the rule of law.
It is noteworthy that the analysis of the interviews showed that, although divorced women live with insecurity and face many difficulties, most respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining and, at times, compromising or resisting. Different theories are used to explain these coping strategies, such as Kandiyoti’s "bargaining with patriarchy", which holds that women living under certain constraints struggle to find ways and strategies to improve their situations. The findings also reveal that the Western liberal feminist view of agency is limited, in agreement with Saba Mahmood’s account of Muslim women’s agency. For the respondents, all of them divorced women, agency reveals itself in different ways: in resisting, compromising with, or even obeying the power of male relatives and the normative system of the society. Agency also explains the behavior of women who, in cases of violence, contact formal state institutions such as the police or the offices for combating violence against women and the family.
Genetic divergence is impacted by many factors, including phylogenetic history, gene flow, genetic drift, and divergent selection. Rotifers are an important component of aquatic ecosystems, and genetic variation is essential to their ongoing adaptive diversification and local adaptation. In addition to coding sequence divergence, variation in gene expression may relate to variable heat tolerance, and can impose ecological barriers within species. Temperature plays a significant role in aquatic ecosystems by affecting species abundance, spatio-temporal distribution, and habitat colonization. Recently described (formerly cryptic) species of the Brachionus calyciflorus complex exhibit different temperature tolerance both in natural and in laboratory studies, and show that B. calyciflorus sensu stricto (s.s.) is a thermotolerant species. Even within B. calyciflorus s.s., there is a tendency for further temperature specializations. Comparison of expressed genes allows us to assess the impact of stressors on both expression and sequence divergence among disparate populations within a single species. Here, we have used RNA-seq to explore expressed genetic diversity in B. calyciflorus s.s. in two mitochondrial DNA lineages with different phylogenetic histories and differences in thermotolerance. We identify a suite of candidate genes that may underlie local adaptation, with a particular focus on the response to sustained high or low temperatures. We do not find adaptive divergence in established candidate genes for thermal adaptation. Rather, we detect divergent selection among our two lineages in genes related to metabolism (lipid metabolism, metabolism of xenobiotics).
Regulatory focus is a motivational construct that describes humans’ motivational orientation during goal pursuit. It is conceptualized both as a chronic, trait-like orientation and as a momentary, state-like one. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary–Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideals and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version for assessing chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure’s factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
What Makes an Employer?
(2019)
As the policy debate on entrepreneurship increasingly centers on firm growth in terms of job creation, it is important to better understand which variables influence the first hiring decision and which ones influence the subsequent survival as an employer. Using the German Socio-economic Panel (SOEP), we analyze what role individual characteristics of entrepreneurs play in sustainable job creation. While human and social capital variables positively influence the hiring decision and the survival as an employer in the same direction, we show that none of the personality traits affect the two outcomes in the same way. Some traits are only relevant for survival as an employer but do not influence the hiring decision; other traits even unfold a revolving-door effect, in the sense that employers tend to fail due to the same characteristics that positively influenced their hiring decision.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material belongs to the half-metals and exists in a ferromagnetic phase. A special feature of the investigated alloy is the structure of its electronic bands, which leads to a specific density of states: majority electrons form a metal-like band structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This peculiarity makes the material well suited as a model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs. We employed two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy towards the gap size of the minority electrons, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an OPA (optical parametric amplifier) to generate the laser irradiation in the long-wavelength regime. We tested it at the FemtoSpeX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics. We found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We associate this result with the existence of the energy gap for minority electrons and explain it by Elliott–Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics.
We used the photon energy fixed at the L3 edge of the corresponding elements, with circular polarization. The appropriate angle of incidence was estimated from static measurements. Using this probe method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
Verum focus and negation
(2019)
Many human infants grow up learning more than one language simultaneously, but only recently has research started to study early language acquisition in this population more systematically. This paper gives an overview of findings on early language acquisition in bilingual infants during the first two years of life and compares these findings to current knowledge on early language acquisition in monolingual infants. Given the state of the research, the overview focuses on phonological and early lexical development in the first two years of life. We show that the developmental trajectory of early language acquisition in these areas is very similar in mono- and bilingual infants, suggesting that these early steps into language are guided by mechanisms that are rather robust against the differences in the conditions of language exposure that mono- and bilingual infants typically experience.
Word forms such as walked or walker are decomposed into their morphological constituents (walk + -ed/-er) during language comprehension. Yet the efficiency of morphological decomposition seems to vary across languages and morphological types, as well as between first and second language speakers. The current study reports results from a visual masked priming experiment focusing on different types of derived word forms (specifically prefixed vs. suffixed) in first and second language speakers of German. We compared the present findings with results from previous studies on inflection and compounding and propose an account of morphological decomposition that captures both the variability and the consistency of morphological decomposition across different morphological types and across first and second language speakers. Open Practices: This article has been awarded an Open Materials badge. Study materials are publicly accessible via the Open Science Framework.
This paper, which is based on the Thomas Franck Lecture held by the author at Humboldt University Berlin on 13 May 2019, argues that the most likely development of international law to be expected is the coexistence of two “legal worlds”: on the one hand, an inter-State law brutally regulating political relations between human groups whitewashed by nationalism; on the other, a transnational or “a-national” law regulating economic relations between private as well as public interests. Further, the paper argues that there are two obvious victims, of very different natures, of this foreseeable evolution: the human being on the one hand, and the certainty and effectiveness of the rule of law itself on the other.
Accurate weather observations are the keystone to many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, climate studies, as well as understanding precipitation-driven natural hazards (e.g. floods, landslides, debris flows). Weather radars have been an increasingly popular tool since the 1940s to provide high spatial and temporal resolution precipitation data at the mesoscale, bridging the gap between synoptic and point-scale observations. Yet, many institutions still struggle to tap the potential of their large archives of reflectivity data, as there is still much to understand about the factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rains are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country’s ground radar network. We took a first look at the dataset from one of the longest-running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012 caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the rainfall spatial distribution revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% if they had not been adjusted by rain gauge observations, a technique that is not only affected by other uncertainties, but which is also not feasible in other regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, i.e. the assessment of bias from the reflectivities of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM), are accurate enough to serve as a calibration reference for ground radars over low-to-mid-latitudes (± 35 deg for TRMM; ± 65 deg for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires cautious consideration of differences in measurement geometry and instrument specifications, as well as temporal coincidence. For this purpose, we apply a 3-D volume matching method developed by Schwaller and Morris (2011) and extended by Warren et al. (2018) to five years’ worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Yet, other sources of systematic measurement errors are highly heterogeneous in space, and can either enhance or balance the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging is able to increase the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increasing the precision of the bias estimates.
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength and is therefore more affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias is also established for the Tagaytay radar.
The Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, the Subic radar is largely affected by beam blockage in the overlapping region, but, being an S-band radar, attenuation is considered negligible. These coincidentally independent uncertainty contributions of the two radars in the region of overlap provide an ideal environment to experiment with different scenarios of quality filtering when comparing reflectivities from the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. However, combining them multiplicatively resulted in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
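The quality-weighted averaging of matched reflectivity differences can be sketched as follows. This is a minimal illustration, not the thesis’ implementation: the function name, the exponential mapping of PIA onto a 0–1 quality scale, and the `pia_scale` parameter are assumptions; the actual quality functions used in the thesis may differ.

```python
import numpy as np

def quality_weighted_bias(z_sr, z_gr, bbf, pia, pia_scale=5.0):
    """Estimate GR calibration bias (dB) as a quality-weighted average
    of GR-SR reflectivity differences over matched sample volumes.

    z_sr, z_gr : reflectivities (dBZ) of matched SR and GR volumes
    bbf        : beam blockage fraction per volume (0 = clear, 1 = blocked)
    pia        : path-integrated attenuation per volume (dB)
    pia_scale  : hypothetical e-folding scale mapping PIA to a 0-1 quality
    """
    z_sr, z_gr = np.asarray(z_sr, float), np.asarray(z_gr, float)
    q_bbf = 1.0 - np.asarray(bbf, float)                 # quality drops with blockage
    q_pia = np.exp(-np.asarray(pia, float) / pia_scale)  # quality drops with attenuation
    w = q_bbf * q_pia                                    # multiplicative combination
    diff = z_gr - z_sr                                   # per-volume difference (dB)
    return np.sum(w * diff) / np.sum(w)
```

With equal quality for all volumes this reduces to the plain mean difference; heavily blocked or attenuated volumes are downweighted, pulling the estimate towards the cleaner samples.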
The overlap between the two radars, and the instances of the SR passing over both radars at the same time, allow for verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after the bias correction is applied. For cases in which all three radars coincide during a significant rainfall event, correcting the GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars, which had shown inconsistent observations before correction. We also show that, for cases where adequate SR coverage is unavailable, the calibration biases interpolated with a moving average can be used to correct the GR observations at any point in time, to some extent. By using the interpolated biases to correct the GR observations, we demonstrate that bias correction reduces the absolute value of the mean difference in most cases, and therefore improves the consistency between the two ground radars.
This thesis demonstrates that in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints that there are still other sources of errors that remain unexplored. Nevertheless, the increase in consistency between SR and GR as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) and quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3D matching procedure and the analysis workflows as well as sample data are made available in public repositories. Open-source software such as Python and wradlib are used for all radar data processing in this thesis. This approach towards open science provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
The interactions between atmosphere and steep topography in the eastern south–central Andes result in complex relations with inhomogeneous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall and their spatial patterns, both at the valley and mountain-belt scales, are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients in the NW Argentine Andes stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF’s (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients along and across strike of the Andes in dew-point temperature and CAPE that both control rainfall distributions. Second, we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE through a multivariable regression analysis. The correlation changes in space along the climatic and topographic gradients and helps to explain the controlling factors of extreme-rainfall generation. Third, we observe a greater contribution (or higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas, as compared to the high-elevation central Andean Plateau, for 90th-percentile rainfall. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation area between low and high elevation, especially in the transition zone between the tropical and subtropical areas, for 90th-percentile rainfall.
Fourth, we find that the parameters of the multivariable regression using CAPE and Td can explain rainfall with higher statistical significance for the 90th percentile compared to lower rainfall percentiles. Based on our results, the spatial pattern of rainfall-extreme events during the past ∼16 years can be described by a combination of dew-point temperature and CAPE in the south–central Andes.
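A multivariable regression of rainfall on CAPE and Td can be sketched as below. This is only an illustrative linear least-squares fit; the study reports a nonlinear relation, so its actual model specification differs, and the function and variable names here are assumptions.

```python
import numpy as np

def fit_rainfall_model(cape, td, rain):
    """Ordinary least-squares fit of the illustrative linear model
        rain = b0 + b1 * CAPE + b2 * Td
    returning the coefficient vector (b0, b1, b2)."""
    cape = np.asarray(cape, float)
    td = np.asarray(td, float)
    # design matrix: intercept column, CAPE, Td
    X = np.column_stack([np.ones_like(cape), cape, td])
    beta, *_ = np.linalg.lstsq(X, np.asarray(rain, float), rcond=None)
    return beta
```

In practice one would fit such a model separately for each grid cell (or each percentile of rainfall) and map the fitted coefficients to see how the relative contributions of CAPE and Td vary along the climatic and topographic gradients.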
Water is essential to life and thus, an essential resource. However, freshwater resources are limited and their maintenance is crucial. Pollution with chemicals and pathogens through urbanization and a growing population impair the quality of freshwater. Furthermore, water can serve as vector for the transmission of pathogens resulting in water-borne illness.
The Interdisciplinary Research Group III – "Water" of the Leibniz alliance project INFECTIONS'21 investigated water as a hub for pathogens, focusing on Clostridioides difficile and avian influenza A viruses that may be shed into the water. Another aim of this study was to characterize the bacterial communities in a wastewater treatment plant (WWTP) of the capital Berlin, Germany, to further assess potential health risks associated with wastewater management practices.
Bacterial communities of the WWTP inflow and effluent differed significantly. The proportion of fecal/enteric bacteria was relatively low, and OTUs related to potential enteric pathogens were largely removed from inflow to effluent. However, a health risk might exist, as an increased relative abundance of potentially pathogenic Legionella spp. such as L. lytica was observed. Three Clostridioides difficile isolates from the wastewater inflow and an urban bathing lake in Berlin ('Weisser See') were obtained and sequenced. The two isolates from the wastewater did not carry toxin genes, whereas the isolate from the lake was positive for the toxin genes. All three isolates were closely related to human strains. This indicates a potential, but rather sporadic, health risk. Avian influenza A viruses were detected in 38.8% of sediment samples by PCR, but virus isolation failed. An experiment with inoculated freshwater and sediment samples showed that virus isolation from sediment requires relatively high virus concentrations and worked much better in Madin-Darby Canine Kidney (MDCK) cell cultures than in embryonated chicken eggs, whereas a low titre of influenza contamination in freshwater samples was sufficient to recover the virus.
In conclusion, this work revealed potential health risks coming from bacterial groups with pathogenic potential such as Legionella spp. whose relative abundance is higher in the released effluent than in the inflow of the investigated WWTP. It further indicates that water bodies such as wastewater and lake sediments can serve as reservoir and vector, even for non-typical water-borne or water-transmitted pathogens such as C. difficile.
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective into the human language system. In recent years a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages, aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how the unique non-concatenative morphological structure of Hebrew, namely a non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on a morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing productive (class Piel, e.g., limed ‘teach’) and unproductive (class Paal, e.g., lamad ‘learn’) verbal inflectional classes.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what types of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between the productivity of Hebrew verbal classes and the recognition process, as revealed by both priming techniques: the consonantal root was accessed for the productive class (Piel) but not for the unproductive class (Paal). Another dissociation between the two classes emerged in cross-modal priming, which yielded a semantic relatedness effect only for Paal but not for Piel primes. These findings are taken to show that Hebrew mental representations balance stored, undecomposed, unstructured stems (Paal) against decomposed, structured stems (Piel), in a manner similar to a typical dual-route architecture, suggesting that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between the inflectional classes of Hebrew and those of Indo-European languages, particularly in the type of information they rely on in generalization to novel forms: Hebrew binyan generalization relies more on argument-structure cues and less on phonological cues.
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to binyan information during recognition, expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in an infinitive form; no root priming effect was obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural cues was also found in the production of novel verbal forms, where the L2 group displayed a weaker effect of argument structure for Piel responses than the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.
Previous research offers equivocal results regarding the effect of social networking site use on individuals’ self-esteem. We conduct a systematic literature review to examine the existing literature and develop a theoretical framework in order to classify the results. The framework proposes that self-esteem is affected by three distinct processes that incorporate self-evaluative information: social comparison processes, social feedback processing, and self-reflective processes. Due to particularities of the social networking site environment, the accessibility and quality of self-evaluative information is altered, which leads to online-specific effects on users’ self-esteem. Results of the reviewed studies suggest that when a social networking site is used to compare oneself with others, it mostly results in decreases in users’ self-esteem. On the other hand, receiving positive social feedback from others or using these platforms to reflect on one’s own self is mainly associated with benefits for users’ self-esteem. Nevertheless, inter-individual differences and the specific activities performed by users on these platforms should be considered when predicting individual effects.
Undisclosed desires
(2019)
Following decades of quality management featuring in higher education settings, questions regarding its implementation, impact and outcomes remain. Indeed, leaving aside anecdotal case studies and value-laden documentaries of best practice, current research still knows very little about the implementation of quality management in teaching and learning within higher education institutions. Referring to data collected from German higher education institutions in which a quality management department or functional equivalent was present, this article theorises and provides evidence for the supposition that the implementation of quality management follows two implicit logics. Specifically, it tends either towards the logic of appropriateness or, contrastingly, towards the logic of consequentialism. This study’s results also suggest that quality managers’ socialisation is related to these logics and that it influences their views on quality management in teaching and learning.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is unclear. Furthermore, behavioral effects are generally analyzed either with a focus on single individuals or with a focus on consequences for other trophic levels. As a result, the impact of fear at the level of prey communities is overlooked, despite potential consequences for conservation and nature management. In order to improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure and composition, focusing on terrestrial mammals. By integrating landscapes of fear into an existing individual-based and spatially explicit model, I simulated community assembly of prey animals via individual home range formation. The model comprises multiple hierarchical levels, from individual home range behavior to patterns of prey community structure and composition. The mechanistic approach of the model allowed for the identification of the underlying mechanisms driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception between prey animals had consequences for prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception compared to scenarios in which all individuals had the same risk perception.
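The trade-off between food availability and perceived risk that drives these space-use results can be sketched in a toy cell-choice model (a deliberately simplified illustration, not the individual-based model used in the thesis; all names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy landscape: every cell has a food value and a predation-risk value.
food = rng.random((20, 20))
risk = rng.random((20, 20))

def preferred_cell(food, risk, risk_perception):
    """Choose the cell maximizing food minus perceived risk.

    risk_perception scales how strongly an individual discounts risky
    cells (0 = ignores the landscape of fear, large = strongly shy).
    """
    utility = food - risk_perception * risk
    return np.unravel_index(np.argmax(utility), utility.shape)

bold = preferred_cell(food, risk, risk_perception=0.1)
shy = preferred_cell(food, risk, risk_perception=5.0)
# By construction, a shy individual never ends up on a riskier cell
# than a bold one, while the bold one secures at least as much food.
```

This reproduces, in miniature, the pattern reported above: high-risk-perception individuals concentrate in safe areas, low-risk-perception individuals follow the food.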
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed for analysing how migratory birds respond to various scenarios of altered food supply under global change. Here, ecological and evolutionary time-scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effect of fine-scale temporal patterns in resource supply is studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally or discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect; the experimental setup was implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of iron platinum, and a sample consisting of iron platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences, a first exponential recovery that slows down with an increasing amount of energy absorbed in the system until an approximately linear time dependence is observed, followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence the first recovery slows down and can be described as a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition back into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery.
From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements, we construct transient magnetisation-versus-temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve in the continuous transition metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular iron platinum sample, the M(Tp) curves for different fluences coincide, i.e. the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
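The two-step recovery described above can be illustrated by fitting a bi-exponential model to synthetic data (an illustrative parametrization with made-up amplitudes and time constants, not the authors' measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, m_inf, a1, tau1, a2, tau2):
    """Bi-exponential remagnetisation: a fast and a slow exponential
    recovery towards the equilibrium magnetisation m_inf."""
    return m_inf - a1 * np.exp(-t / tau1) - a2 * np.exp(-t / tau2)

# Synthetic "measurement": fast recovery on ~10 ps, slow on ~200 ps.
t = np.linspace(0.0, 1000.0, 500)                 # pump-probe delay (ps)
m = recovery(t, 1.0, 0.4, 10.0, 0.3, 200.0)

# Fit recovers the two time constants from the transient.
popt, _ = curve_fit(recovery, t, m, p0=(1.0, 0.5, 5.0, 0.5, 100.0))
```

In the high-fluence regime described in the abstract, the fast component would flatten towards an approximately linear segment before the second exponential takes over.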
Skarn deposits are found on every continent and were formed at different times from the Precambrian to the Tertiary. Typically, the formation of a skarn is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, resulting in the formation of calc-silicate minerals at the expense of carbonates. These newly formed minerals generally develop in a zoned metamorphic aureole, with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in the magmatic fluids precipitate due to the change in fluid composition. The temperature decrease of the entire system, due to the cooling of magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with skarn formation during regional metamorphism and retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion and that a first Sn enrichment occurred during skarn formation by regional metamorphic fluids. In a second step, at ca. 320 Ma, tin-bearing fluids were produced with the intrusion of the Eibenstock granite. Tin, which was added by the granite and remobilized from the skarn calc-silicates, precipitated as cassiterite.
Compared to clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, the isotopic and chemical data of the skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that it is possible to form a tin-rich skarn without an associated granite when tin has already been transported from tin-bearing sediments during regional metamorphism by aqueous metamorphic fluids. Such skarns are not economically interesting if tin is contained only in the skarn minerals. Later alteration of the skarn (for which the heat and fluid source is not necessarily a granite), however, can lead to the formation of secondary cassiterite (SnO2), with which the skarn can become economically highly interesting.
We study travelling chimera states in a ring of nonlocally coupled heterogeneous (with Lorentzian distribution of natural frequencies) phase oscillators. These states are coherence-incoherence patterns moving in the lateral direction because of the broken reflection symmetry of the coupling topology. To explain the results of direct numerical simulations we consider the continuum limit of the system. In this case travelling chimera states correspond to smooth travelling wave solutions of some integro-differential equation, called the Ott–Antonsen equation, which describes the long time coarse-grained dynamics of the oscillators. Using the Lyapunov–Schmidt reduction technique we suggest a numerical approach for the continuation of these travelling waves. Moreover, we perform their linear stability analysis and show that travelling chimera states can lose their stability via fold and Hopf bifurcations. Some of the Hopf bifurcations turn out to be supercritical resulting in the observation of modulated travelling chimera states.
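For reference, the Ott–Antonsen equation mentioned above is commonly written, for nonlocally coupled phase oscillators with phase lag α and a Lorentzian frequency distribution centred at ω₀ with half-width γ, as (sign and normalization conventions vary between papers):

\[
\frac{\partial z(x,t)}{\partial t} = (i\omega_0 - \gamma)\, z
+ \frac{1}{2}\left( e^{-i\alpha}\, W(x,t) - e^{i\alpha}\, \overline{W(x,t)}\, z^2 \right),
\qquad
W(x,t) = \int G(x - y)\, z(y,t)\, \mathrm{d}y,
\]

where z(x,t) is the local coarse-grained order parameter and G the nonlocal coupling kernel; a travelling chimera then corresponds to a travelling-wave solution of the form z(x,t) = a(x - st) with speed s.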
This paper addresses issues of translating both words and rituals as Muslim cemetery keepers care for Jewish graves and recite traditional prayers for the dead in Morocco. Several issues of translation must be dealt with while considering these rare and disappearing practices. The first issue to be discussed is the translation of Hebrew inscriptions into French by cemetery keepers. One cemetery keeper in Meknes has tried to compile an exhaustive index of the names and dates represented on the gravestones under her care. The Muslim guard of the Jewish cemetery in Sefrou, on the other hand, has somewhat famously told visitors differing stories about his ability and willingness to pray the Kaddish over the graves of emigrated relatives who cannot return to mark an anniversary death. These practices provide the context for considering how the act of Muslims caring for Jewish graves creates linguistic and ritual translations of traditional Jewish ancestor care.
Synchronization – the adjustment of rhythms among coupled self-oscillatory systems – is a fascinating dynamical phenomenon found in many biological, social, and technical systems.
The present thesis deals with synchronization in finite ensembles of weakly coupled self-sustained oscillators with distributed frequencies.
The standard model for the description of this collective phenomenon is the Kuramoto model – partly due to its analytical tractability in the thermodynamic limit of infinitely many oscillators. Similar to a phase transition in the thermodynamic limit, an order parameter indicates the transition from incoherence to a partially synchronized state. In the latter, a part of the oscillators rotates at a common frequency. In the finite case, fluctuations occur, originating from the quenched noise of the finite natural frequency sample.
We study intermediate ensembles of a few hundred oscillators in which fluctuations are comparably strong but which also allow for a comparison to frequency distributions in the infinite limit.
First, we define an alternative order parameter for indicating a collective mode in the finite case. Then we test how the degree of synchronization and the mean rotation frequency of the collective mode depend on different characteristics of the frequency sample for different coupling strengths.
We find, first numerically, that the degree of synchronization depends strongly on the form (quantified by kurtosis) of the natural frequency sample and the rotation frequency of the collective mode depends on the asymmetry (quantified by skewness) of the sample. Both findings are verified in the infinite limit.
With these findings, we better understand and generalize observations of other authors. Somewhat aside from the general line of thought, we find an analytical expression for the volume contraction in phase space.
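The order parameter and the incoherence-to-synchrony transition discussed above can be illustrated with a minimal finite-ensemble simulation (a generic sketch of the standard Kuramoto model with illustrative parameters, not the exact setup of the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto_order(n=200, coupling=2.0, dt=0.01, steps=5000):
    """Euler-integrate a finite Kuramoto ensemble and return the
    time-averaged modulus of the complex order parameter."""
    omega = 0.5 * rng.standard_cauchy(n)        # Lorentzian frequency sample
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r_vals = []
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))         # complex order parameter
        theta += dt * (omega
                       + coupling * np.abs(z) * np.sin(np.angle(z) - theta))
        r_vals.append(np.abs(z))
    return float(np.mean(r_vals[steps // 2:]))  # discard the transient

r_strong = kuramoto_order(coupling=4.0)  # well above the transition
r_weak = kuramoto_order(coupling=0.1)    # incoherent regime
```

Below the transition the averaged order parameter stays at the finite-size fluctuation level of order 1/sqrt(N); above it, a macroscopic fraction of oscillators locks and the order parameter is clearly nonzero.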
The second part of this thesis concentrates on an ordering effect of the finite-size fluctuations. In the infinite limit, the oscillators are separated into coherent and incoherent, i.e. ordered and disordered, oscillators. In finite ensembles, finite-size fluctuations can generate additional order among the asynchronous oscillators. The basic principle – noise-induced synchronization – is known from several recent papers. Among coupled oscillators, phases are pushed together by the order parameter fluctuations, as we show directly on the one hand and, on the other, quantify with a synchronization measure from directional statistics between pairs of passive oscillators.
We determine the dependence of this synchronization measure on the ratio of the pairwise natural frequency difference to the variance of the order parameter fluctuations. We find good agreement with a simple analytical model in which we replace the deterministic fluctuations of the order parameter by white noise.
We combine ultrafast X-ray diffraction (UXRD) and time-resolved Magneto-Optical Kerr Effect (MOKE) measurements to monitor the strain pulses in laser-excited TbFe2/Nb heterostructures. Spatial separation of the Nb detection layer from the laser excitation region allows for a background-free characterization of the laser-generated strain pulses. We clearly observe symmetric bipolar strain pulses if the excited TbFe2 surface terminates the sample and a decomposition of the strain wavepacket into an asymmetric bipolar and a unipolar pulse, if a SiO2 glass capping layer covers the excited TbFe2 layer. The inverse magnetostriction of the temporally separated unipolar strain pulses in this sample leads to a MOKE signal that linearly depends on the strain pulse amplitude measured through UXRD. Linear chain model simulations accurately predict the timing and shape of UXRD and MOKE signals that are caused by the strain reflections from multiple interfaces in the heterostructure.
Towards Eurasia
(2019)
In order to heed the call in world literature studies to work against disciplinary Eurocentrism by refiguring both what constitutes world literature and how this is read, in this article I propose world literature as an archive of world-making practices and as an impulse for the articulation of alternative methodological approaches. This takes world literature from the postcolonial South as, following Pheng Cheah, instantiating a modality of world literature in which the need for imagining worlds with alternative centres to those determined by coloniality is particularly acute. A response to this is facilitated and illustrated by a reading of Bengali poet Rabindranath Tagore’s Letters from Russia (1930), and South African writer/activist Alex La Guma’s A Soviet Journey (1978). By drawing forward connections between the postcolonial South and the former Soviet Union, this complicates traditional colonial arrangements of the colonial ‘centre’ as cradle of civilisation and culture, as well as postcolonial scholarship’s cumulative fetishisation of ‘Europe’, by allowing a reshuffling of the co-ordinates determining ‘centres’ and ‘peripheries’ and a more nuanced grasp of ‘Europe’ simultaneously. These imaginative journeys destabilise ‘Europe’ as closed category and call forth Eurasia as a more appropriate categorical–cartographical framework for thinking this space and the connections and (hi)story-telling it stages and fosters.
The identification of vulnerabilities in IT infrastructures is a crucial problem in enhancing security, because many incidents result from already known vulnerabilities that could have been resolved. Thus, the initial identification of vulnerabilities has to be used to directly resolve the related weaknesses and mitigate attack possibilities. The nature of vulnerability information requires a collection and normalization of the information prior to any utilization, because the information is widely distributed across different sources, each with its own unique format. Therefore, a comprehensive vulnerability model was defined and the different sources were integrated into one database. Furthermore, different analytic approaches have been designed and implemented in the HPI-VDB, which directly benefit from the comprehensive vulnerability model and especially from the logical preconditions and postconditions.
Firstly, different approaches to detect vulnerabilities both in the IT systems of average users and in the corporate networks of large companies are presented. These approaches mainly focus on the identification of all installed applications, since this is a fundamental step in the detection. The detection is realized differently depending on the target use case, taking into account the experience of the user as well as the layout and capabilities of the target infrastructure. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to represent the results using attack graphs are illustrated, comparing traditional attack graphs with a simplified graph version that was integrated into the database as well. The implementation of these use cases especially considers usability. Besides the analytic approaches, high data quality of the vulnerability information had to be achieved and guaranteed. The problems of receiving incomplete or unreliable vulnerability information are addressed with different correction mechanisms, which can be carried out via correlation or lookup in reliable sources and identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows an automatic derivation of important characteristics from the textual description of the vulnerabilities.
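The role of the logical preconditions and postconditions in deriving an attack graph can be sketched as follows (an illustrative toy, not the actual HPI-VDB schema; the CVE names and condition labels are hypothetical):

```python
# Each vulnerability is modelled with logical preconditions and
# postconditions; chaining postconditions into preconditions yields
# the attack paths an attacker could take.
vulns = {
    "CVE-A": {"pre": {"network_access"}, "post": {"user_shell"}},
    "CVE-B": {"pre": {"user_shell"}, "post": {"root_shell"}},
    "CVE-C": {"pre": {"physical_access"}, "post": {"user_shell"}},
}

def reachable_states(initial, vulns):
    """Fixed-point iteration: repeatedly apply every vulnerability whose
    preconditions are already satisfied and collect its postconditions."""
    states = set(initial)
    exploited = []
    changed = True
    while changed:
        changed = False
        for name, v in vulns.items():
            if name not in exploited and v["pre"] <= states:
                states |= v["post"]
                exploited.append(name)
                changed = True
    return states, exploited

states, path = reachable_states({"network_access"}, vulns)
```

Starting from mere network access, the chain CVE-A then CVE-B escalates to a root shell, while CVE-C is unreachable without physical access; this is the kind of reasoning the pre/postcondition model enables.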
Data assimilation has been an active area of research in recent years owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a state governed by a noisy state-space model. Most natural processes are described by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wide usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution, and numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational cost of resampling.
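A minimal bootstrap particle filter for a scalar toy model illustrates the propagate/weight/resample cycle, including the resampling step whose cost and side effects (impoverishment) motivate the feedback particle filters studied here (a generic sketch with made-up parameters, not one of the filters developed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_filter(observations, n_particles=500,
                     process_std=0.3, obs_std=0.5):
    """Minimal bootstrap particle filter for the scalar model
    x_t = 0.9 x_{t-1} + process noise,  y_t = x_t + obs noise.
    Multinomial resampling at every step counters weight degeneracy."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate through the state model (the "proposal").
        particles = 0.9 * particles + rng.normal(0, process_std, n_particles)
        # Weight by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample to avoid degeneracy (at the cost of impoverishment).
        particles = rng.choice(particles, n_particles, p=w)
    return np.array(estimates)

# Synthetic run: the filtered estimate tracks the hidden state more
# closely than the raw observations do.
truth = np.zeros(100)
for t in range(1, 100):
    truth[t] = 0.9 * truth[t - 1] + rng.normal(0, 0.3)
obs = truth + rng.normal(0, 0.5, 100)
est = bootstrap_filter(obs)
```

Feedback particle filters replace exactly this weighting/resampling step by a mean-field control term in the particle dynamics, which is what removes the need for resampling.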
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; and (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals and the corresponding stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on a coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with the filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, to which medical care is a major source of population exposure, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the chapter shows that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of Cs-137 (around 30 chest x-rays) is associated with a decrease in cognitive skills by 4.1 percent of a standard deviation (around 0.05 school years). Chapter one thus shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy, although the population exposed after birth is over 100 times larger.
These results point to substantial external human capital costs of radiation, which can be reduced by the choice of medical procedures. There is large potential for reductions because about one-third of all CT scans are estimated to be medically unjustified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence on the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health, yet there is little evidence about the potential health benefits of policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, including precise information on hospital locations and the annual frequency of detailed diagnoses. In order to establish that our estimates of Low Emission Zones’ health impacts can indeed be attributed to improvements in local air quality, we use data from Germany’s official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. The results in chapter two confirm former findings that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations.
Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone record significantly fewer diagnoses of air-pollution-related diseases, in particular a reduced incidence of chronic diseases of the circulatory and the respiratory system. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet’s emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital’s catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making it the most expensive type of disease with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society’s health care costs. Whereas chapters one and two study the demand side of health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence on the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect due to endogenous treatment price shocks.
I implement a novel instrumental variable strategy that exploits exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians and the range of treatments offered, but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects; for instance, private and larger hospitals are more strongly affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions. If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population well-being and public spending. However, the results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains which justify reductions in input factors without losses in quality.
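The two-stage logic of such an instrumental variable estimation can be illustrated with a minimal two-stage least squares (2SLS) sketch on synthetic data. All variable names and coefficients here (snow days as the instrument, a treatment price as the endogenous regressor, staffing as the outcome, an unobserved quality confounder) are invented for illustration and do not reproduce the dissertation's actual estimation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: unobserved hospital quality confounds price and staffing.
quality = rng.normal(size=n)
snow_days = rng.poisson(20, size=n).astype(float)             # instrument: exogenous
price = 0.5 * snow_days + 2.0 * quality + rng.normal(size=n)  # endogenous regressor
staff = 1.5 * price - 3.0 * quality + rng.normal(size=n)      # true price effect: 1.5

def ols(X, y):
    """Least-squares coefficients for y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), snow_days])

beta_ols = ols(X, staff)[1]  # biased: quality enters both equations

# Stage 1: project the endogenous price on the instrument.
price_hat = Z @ ols(Z, price)
# Stage 2: regress the outcome on the fitted price.
beta_iv = ols(np.column_stack([np.ones(n), price_hat]), staff)[1]
```

Because the unobserved quality term enters the price and staffing equations with opposite signs, the naive OLS coefficient is attenuated, while the instrumented estimate recovers an effect close to the true value of 1.5, mirroring the direction of bias described in the text.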
Further research is necessary to provide evidence on the impact of the G-DRG reform on health care quality. Another important aspect is the change in organizational structure: many public hospitals have been privatized or merged, and the findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations which do not provide all treatments due to moral convictions. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector when treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind searches for new energy sources, nuclear power is becoming popular again. However, the results of chapter one point to substantial costs of nuclear energy that have not yet been accounted for. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance to the design of further policies targeted at air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed: computer programs that try to produce solutions of high quality quickly. One special class is that of estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector in which each component denotes the probability of sampling a 1 at that position when generating a bit string.
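As an illustration of this setting, the following sketch implements the compact genetic algorithm (cGA), one of the univariate EDAs analyzed in the thesis, on the OneMax benchmark (the number of 1-bits). The parameter choices (n, the population-size parameter K, the iteration budget) are arbitrary and only meant to show the frequency-vector model at work:

```python
import random

def onemax(x):
    """Number of 1-bits; the benchmark function to maximize."""
    return sum(x)

def cga(n=20, K=200, max_iters=20_000, seed=1):
    """Compact genetic algorithm (cGA), a univariate EDA, on OneMax.

    The probabilistic model is the frequency vector p; component p[i]
    is the probability of sampling a 1 at position i.
    """
    rng = random.Random(seed)
    p = [0.5] * n
    best = [0] * n
    for _ in range(max_iters):
        # Sample two solutions from the current model.
        x = [int(rng.random() < pi) for pi in p]
        y = [int(rng.random() < pi) for pi in p]
        winner, loser = (x, y) if onemax(x) >= onemax(y) else (y, x)
        if onemax(winner) > onemax(best):
            best = winner
        # Shift each differing frequency by 1/K towards the winner's bit.
        for i in range(n):
            if winner[i] != loser[i]:
                step = 1 / K if winner[i] == 1 else -1 / K
                # Borders 1/n and 1 - 1/n keep every position reachable,
                # so no bit is ever fixed irreversibly.
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
    return best

solution = cga()
```

Each iteration refines the model using only the comparison of two sampled solutions, which increases the probability of producing good solutions over time; the border values are the standard safeguard against the model collapsing to a wrong value.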
My contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For MMAS-fp, we generalize this result to linear functions. Further, we prove a run time of Ω(n log(n)) for the algorithm UMDA on (noise-free) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in O(n log(n)), which is a novelty for heuristics in the domain we consider.
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. To this end, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror distributions of their prior languages: they were either L1English/L2Spanish speakers or L1Spanish/L2English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when these aligned with their L1 and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in an enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars differentially influence participants’ performance: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation styles. These styles are classified into two end-members: the thin-skinned and the thick-skinned style. The superficial expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acts as a thrust ramp that stacks the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Amongst the proposed causes for the observed variation are variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. The integration of independent data into a consistent model of the lithosphere provides additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn served to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling verified against Bouguer gravity data. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km, and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
In an excursion, we demonstrate in another study that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to the smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion method, which thereby provides an important tool for the delineation of sub-crustal density trends.
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot sufficiently reduce the strength of the lithosphere for them to be reactivated. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Thermal weakening, and potentially lubrication of the inherited discontinuities, locally reduces lithospheric strength such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives, and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by such fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of reforms—recruitment, remuneration, and appraisal of civil servants—the qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments in the past. Following discussions on the content and purposes of the main reform programs, it then analyzes the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation in which most of the intended objectives remain unachieved. In doing so, it explores and explains how an overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a very complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional one driven by performance and merit.
Partial melting is a first order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). Redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding of the diverse processes and their controlling factors is therefore not only of scientific interest but also of high economic importance to cover the demand for rare metals.
The redistribution of major and trace elements during partial melting represents a central step in understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical “memory” results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation; they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on (plate-)tectonic controls (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial to understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace the source of intrusions or to quantify the mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature of the melt reflects that of the bulk source from which it is derived.
Different minerals in a protolith may have radiogenic isotope compositions that deviate from the whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction involves only selected phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and, therefore, should differ from the bulk source signature. Similar considerations hold true for stable isotopes.
The Postmasburg Manganese Field (PMF), Northern Cape Province, South Africa, once represented one of the largest sources of manganese ore worldwide. Two belts of manganese ore deposits have been distinguished in the PMF, namely the Western Belt of ferruginous manganese ores and the Eastern Belt of siliceous manganese ores. Prevailing models of ore formation in these two belts invoke karstification of manganese-rich dolomites and residual accumulation of manganese wad which later underwent diagenetic and low-grade metamorphic processes. For the most part, the role of hydrothermal processes and metasomatic alteration towards ore formation has not been adequately discussed. Here we report an abundance of common and some rare Al-, Na-, K- and Ba-bearing minerals, particularly aegirine, albite, microcline, banalsite, sérandite-pectolite, paragonite and natrolite in Mn ores of the PMF, indicative of hydrothermal influence. Enrichments in Na, K and/or Ba in the ores are generally on a percentage level for most samples analysed through bulk-rock techniques. The presence of As-rich tokyoite also suggests the presence of As and V in the hydrothermal fluid. The fluid was likely oxidized and alkaline in nature, akin to a mature basinal brine. Various replacement textures, particularly of Na- and K-rich minerals by Ba-bearing phases, suggest sequential deposition of gangue as well as ore-minerals from the hydrothermal fluid, with Ba phases being deposited at a later stage. The stratigraphic variability of the studied ores and their deviation from the strict classification of ferruginous and siliceous ores in the literature, suggests that a re-evaluation of genetic models is warranted. New Ar-Ar ages for K-feldspars suggest a late Neoproterozoic timing for hydrothermal activity.
This corroborates previous geochronological evidence for regional hydrothermal activity that affected Mn ores at the PMF but also, possibly, the high-grade Mn ores of the Kalahari Manganese Field to the north. A revised, all-encompassing model for the development of the manganese deposits of the PMF is then proposed, whereby the source of metals is attributed to underlying carbonate rocks beyond the Reivilo Formation of the Campbellrand Subgroup. The main process by which metals are primarily accumulated is attributed to karstification of the dolomitic substrate. The overlying Asbestos Hills Subgroup banded iron formation (BIF) is suggested as a potential source of alkali metals, which also provides a mechanism for leaching of these BIFs to form high-grade residual iron ore deposits.
The role of case and animacy in bi- and monolingual children’s sentence interpretation in German
(2019)
German-speaking children appear to have a strong N1-bias when interpreting non-canonical OVS sentences. During sentence interpretation, especially unambiguous accusative and dative case markers (den ‘the-ACC’ and dem ‘the-DAT’) weaken the N1-bias and help build up sentence interpretation strategies on the basis of morphological cues. Still, the N1-bias prevails beyond the age of five (Brandt et al. 2016, Cristante 2016, Dittmar et al. 2008) and remains until puberty (Lidzba et al. 2013). This paper investigates whether prototypical case-animacy coalitions (den-ACC + inanimate noun and dem-DAT + animate noun) strengthen a morphologically based sentence interpretation strategy in German. The experiment discussed in this paper tests for effects of such case-animacy coalitions in mono- and bilingual primary school children. 20 German monolinguals, 12 Dutch-German and 17 Russian-German bilinguals with a mean age of 9;6 were tested in a forced-choice offline experiment. Results indicate that case-animacy coalitions weaken the N1-bias in OVS conditions in German monolinguals and Dutch-German bilinguals, while no effects were found for Russian-German bilinguals. Together with an analysis of individual differences, these group-specific effects are discussed in terms of a developmental approach that represents a gradual cue-strength adjustment process in mono- and bilingual children.
The Role of Bargaining Power
(2019)
Neoclassical theory omits the role of bargaining power in the determination of wages. As a result, the importance of changes in the bargaining position for the development of income shares in the last decades is underestimated. This paper presents a theoretical argument why collective bargaining power is a main determinant of workers’ share of income and how its decline contributed to the severe changes in the distribution of income since the 1980s. In order to confirm this hypothesis, a panel data regression analysis is performed that suggests that unions significantly influence the distribution of income in developed countries.
The public encounter
(2019)
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model incorporating various perspectives on this interaction, I derive selected research gaps, which the three articles comprising this thesis tackle. A focal role is played by citizens’ administrative literacy, the competences and knowledge necessary to successfully interact with public organizations. The first article elaborates on the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters: they treat preferentially those citizens who are well prepared and able to persuade them of their application’s potential. Such citizens signal a higher potential of meeting bureaucratic success criteria, which leads to cream-skimming behavior on the part of the employees. The third article examines the dynamics of employees’ communication strategies when recovering from a service failure. The study finds that different explanation strategies have different effects on the client’s frustration: while accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, refusing responsibility has no effect or even reinforces the client’s frustration. The results emphasize the different dynamics that characterize the nature of citizen-state interactions and how these shape their short- and long-term outcomes.
The politics of zoom
(2019)
Following the mandate in the Paris Agreement for signatories to provide “climate services” to their constituents, “downscaled” climate visualizations are proliferating. But the process of downscaling climate visualizations does not neutralize the political problems of their synoptic global sources, namely their failure to empower communities to take action and their replication of neoliberal paradigms of globalization. In this study we examine these problems as they apply to interactive climate-visualization platforms, which allow their users to localize global climate information to support local political action. By scrutinizing the political implications of the “zoom” tool from the perspective of media studies and rhetoric, we add perspectives from our fields to cultural cartography’s treatment of the issue of scaling. Namely, we break down the cinematic trope of “zooming” to reveal how it imports the political problems of synopticism to the level of individual communities. As a potential antidote to the politics of zoom, we recommend a downscaling strategy of connectivity, which associates rather than reduces situated views of climate to global ones.
When dealing with issues that are of high societal relevance, Earth sciences still face a lack of acceptance, which is partly rooted in insufficient communication strategies on the individual and local community level. To increase the efficiency of communication routines, science has to transform its outreach concepts to become more aware of individual needs and demands. The “encoding/decoding” concept as well as critical intercultural communication studies can offer pivotal approaches for this transformation.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries are connected due to similarities in these language domains, building a large network structure. Access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all potentially known words into a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years as well as its specific effect on other language skills. This dissertation is supposed to close this gap by pursuing two major goals: First, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the lexical development of size and structure. The findings were intended to help understand mechanisms of lexical acquisition and inform theories on vocabulary growth.
The approach is based on the dictionary method, in which a sample of words from a dictionary is tested and the results are projected onto the whole dictionary to determine an individual’s lexicon size. In the present study, the childLex corpus, a written language corpus for children in German, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words from the corpus enables projection of the results onto the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, the test performance of virtual participants was simulated by drawing lexicons of different sizes from the corpus and checking whether the test items were included in the lexicon or not. This allowed determination of the relation between test performance and total lexicon size, which could then be transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
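The simulation step described here can be sketched as follows. The corpus size, the test length and the grid of candidate lexicon sizes are invented placeholders (the actual study uses the childLex corpus and a validated yes/no test), and the virtual lexicons are drawn uniformly at random, which ignores the word-frequency structure a real acquisition process would show:

```python
import random

rng = random.Random(42)

# Hypothetical corpus of 100,000 word types (stand-in for childLex).
corpus = list(range(100_000))

# A fixed test sample from the corpus (stand-in for the yes/no vocabulary test).
test_items = rng.sample(corpus, 200)

def simulate_score(lexicon_size):
    """A virtual participant knows a random subset of the corpus of the
    given size; the score is the fraction of test items in that subset."""
    lexicon = set(rng.sample(corpus, lexicon_size))
    return sum(item in lexicon for item in test_items) / len(test_items)

# Relate test performance to total lexicon size via simulation.
calibration = {size: simulate_score(size)
               for size in range(10_000, 100_000, 10_000)}

def estimate_size(observed_score):
    """Map a real participant's test score to the candidate lexicon size
    whose simulated score is closest."""
    return min(calibration, key=lambda s: abs(calibration[s] - observed_score))
```

Under these assumptions, a participant answering half of the test items correctly would be assigned a lexicon size near 50,000 words; the projection from sample to corpus is what lets a 200-item test speak about a 100,000-word lexicon.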
To pursue the presented aims and establish the sampling method, I conducted three consecutive studies. Study 1 comprises the development of a vocabulary test based on the childLex corpus. The test uses the yes/no format and includes three versions for different age groups. The validation, based on the Rasch model, shows that it is a valid instrument to measure vocabulary in German primary school children. In Study 2, I established the method to estimate lexicon sizes and present results on lexical development during primary school. Plausible results demonstrate that lexical growth follows a quadratic function, starting with about 6,000 words at the beginning of school and reaching about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures in the mental lexicon and their development due to orthographic similarities. It demonstrates that the networks possess small-world characteristics and decrease in interconnectivity with age.
Taken together, this dissertation provides an innovative approach for the assessment and description of the development of the mental lexicon from primary school onwards. The studies provide recent results on lexical acquisition in different age groups that were missing before. They impressively show the importance of this period and reveal the existence of extensive interindividual differences in lexical development. One central aim of future research needs to be addressing the causes and prevention of these differences. In addition, the application of the method in further research (e.g. its adaptation for other target groups) and for teaching purposes (e.g. the adaptation of texts for different target groups) appears promising.
Rabbi Jacob ben Isaac of Yanova (d. 1623) is best known as the author of the Ze’enah U-Re’enah; the Melits Yosher (“Intercessor before God”) is one of his lesser-known works. It was first published in Lublin in 1622 and reprinted once in Amsterdam in 1688. Like the Ze’enah U-Re’enah, it was a Torah commentary, but one composed for men who had some yeshivah education yet could not continue their studies. The commentary on the Song of Songs by Isaac Sulkes is another Yiddish work that addresses the same audience as the Melits Yosher. The purpose of this article is to bring to scholarly attention an audience that has not been noticed or studied in previous scholarship on early modern Yiddish literature.
The thesis comprises three experimental studies, which were carried out to unravel the short- as well as the long-term mechanical properties of shale rocks. Short-term mechanical properties such as compressive strength and Young’s modulus were taken from recorded stress-strain curves of constant strain rate tests. Long-term mechanical properties are represented by the time-dependent creep behavior of shales. This was obtained from constant stress experiments, where the test duration ranged from a couple of minutes up to two weeks. A profound knowledge of the mechanical behavior of shales is crucial to reliably estimate the potential of a shale reservoir for an economical and sustainable extraction of hydrocarbons (HC). In addition, the healing of clay-rich cap rocks by creep and compaction is important for the underground storage of carbon dioxide and nuclear waste.
Chapter 1 introduces general aspects of the research topic at hand and highlights the motivation for conducting this study. At present, the shift from energy recovered from conventional resources (e.g., coal) towards energy provided by renewable resources such as wind or water is a major challenge. Gas recovered from unconventional reservoirs (shale plays) is considered a potential bridge technology.
In Chapter 2, short-term mechanical properties of two European mature shale rocks are presented, which were determined from constant strain rate experiments performed at ambient and in situ deformation conditions (confining pressure, pc ≤ 100 MPa; temperature, T ≤ 125 °C, representing pc, T conditions at < 4 km depth) using a Paterson-type gas deformation apparatus. The investigated shales were mainly drill core material of Posidonia (Germany) shale and weathered material of Bowland (United Kingdom) shale. The results are compared with mechanical properties of North American shales. Triaxial compression tests performed perpendicular to bedding revealed semibrittle deformation behavior of Posidonia shale with pronounced inelastic deformation. This is in contrast to Bowland shale samples, which deformed in a brittle manner and displayed predominantly elastic deformation. The static Young’s modulus, E, and triaxial compressive strength, σTCS, determined from recorded stress-strain curves strongly depended on the applied confining pressure and sample composition, whereas the influence of temperature and strain rate on E and σTCS was minor. Shales with larger amounts of weak minerals (clay, mica, total organic carbon) showed lower E and σTCS. This may be related to a shift from deformation supported by a load-bearing framework of hard phases (e.g., quartz) towards deformation of interconnected weak minerals, particularly at higher fractions of about 25 – 30 vol% weak phases. Comparing mechanical properties determined at reservoir conditions with predictions from effective medium theories revealed that E and σTCS of Posidonia and Bowland shale are close to the lower (Reuss) bound. Brittleness, B, is often quoted as a measure indicating the response of a shale formation to stimulation and economic production. The brittleness of Posidonia and Bowland shale, estimated from E, is in good agreement with the experimental results.
This correlation may be useful to predict B from sonic logs, from which the (dynamic) Young’s modulus can be retrieved.
Chapter 3 presents a study of the long-term creep properties of an immature Posidonia shale. Constant stress experiments (σ = const.) were performed at elevated confining pressures (pc = 50 – 200 MPa) and temperatures (T = 50 – 200 °C) to simulate reservoir pc, T conditions. The Posidonia shale samples were acquired from a quarry in South Germany. At stresses below ≈ 84 % of the compressive strength of Posidonia shale, at high temperature and low confining pressure, samples showed pronounced transient (primary) creep with high deformation rates in the semibrittle regime. Sample deformation was mainly accommodated by creep of weak sample constituents and pore space reduction. An empirical power law relation between strain and time, which also accounts for the influence of pc, T and σ on creep strain, was formulated to describe the primary creep phase. Extrapolation of the results to a creep period of several years, which is the typical time interval for a large production decline, suggests that fracture closure is unlikely at low stresses. At high stresses, as expected for example at the contact between fracture surfaces and proppants added during stimulation measures, subcritical crack growth may lead to secondary and tertiary creep. An empirical power law is suggested to describe secondary creep of shale rocks as a function of stress, pressure and temperature. The predicted closure rates agree with typical production decline curves recorded during the extraction of hydrocarbons. At the investigated conditions, the creep behavior of Posidonia shale was found to correlate with brittleness calculated from sample composition.
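A power law relation between strain and time, as used above for primary creep, can be recovered from measured data by a linear fit in log-log space. The sketch below is a minimal illustration of that fitting step on synthetic data; the prefactor and exponent are invented values, not parameters from the thesis, and the thesis' full law additionally depends on pc, T and σ.

```python
import math

def fit_power_law(times, strains):
    """Return (A, n) for strain = A * t**n, via least squares on
    log(strain) = log(A) + n * log(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(e) for e in strains]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    n = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    A = math.exp(mean_y - n * mean_x)
    return A, n

# Synthetic decelerating primary-creep curve: A = 1e-3, n = 0.3 (n < 1)
times = [60 * k for k in range(1, 200)]        # seconds
strains = [1e-3 * t ** 0.3 for t in times]
A, n = fit_power_law(times, strains)
```

An exponent n < 1 encodes the decelerating character of primary creep; extrapolating such a fit over years is exactly the step that motivates the fracture-closure estimates mentioned above.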
In Chapter 4 the creep properties of mature Posidonia and Bowland shales are presented. The observed long-term creep behavior is compared to the short-term behavior determined in Chapter 2. Creep experiments were performed at simulated reservoir conditions of pc = 50 – 115 MPa and T = 75 – 150 °C. Similar to the mechanical response of the immature Posidonia shale samples investigated in Chapter 3, creep strain rates of mature Bowland and Posidonia shales were enhanced with increasing stress and temperature and decreasing confining pressure. Depending on the applied deformation conditions, samples displayed either only a primary (decelerating) creep phase or, in addition, a secondary (quasi-steady state) and subsequently a tertiary (accelerating) creep phase before failure. At the same deformation conditions, the creep strain of Posidonia shale, which is rich in weak constituents, is tremendously higher than that of quartz-rich Bowland shale. Typically, primary creep strain is again mostly accommodated by deformation of weak minerals and local pore space reduction. At the onset of tertiary creep, most of the deformation was accommodated by micro crack growth. A power law was used to characterize the primary creep phase of Posidonia and Bowland shale. Primary creep strain of shale rocks is inversely correlated to triaxial compressive strength and brittleness, as described in Chapter 2.
Chapter 5 provides a synthesis of the experimental findings and summarizes the major results of the studies presented in Chapters 2 – 4 and potential applications in the Exploration & Production industry.
Chapter 6 gives a brief outlook on potential future experimental research that would help to further improve our understanding of the processes leading to fracture closure involving proppant embedment in unconventional shale gas reservoirs. Such insights may help improve stimulation techniques aimed at maintaining economical extraction of hydrocarbons over several years.
In recent years there has been increasing awareness that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity, due to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect, and their actual magnitudes or payments have rarely been quantified, because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes.
In this thesis we aimed to determine patterns of species richness and recent species richness dynamics of forest herb-layer plants and to link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE Germany, where the forest distribution has remained stable for the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 percent (leaving only remnants of "ancient forests") and b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb-layer species richness compared to recent environmental factors, and determined the magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively.
We showed that present-day species richness patterns were still shaped by historical land cover changes dating back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities, and clonal species in ancient forests than in post-agricultural forests. These species richness differences were largely attributable to a colonization credit in post-agricultural forests of up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies from historical agricultural land use played a minor role in the species richness differences; instead, patch connectivity was most important. Species richness in ancient forests still depended on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was considerably smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 to 1.5 species over the last six decades.
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present the mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits (‘cores’) augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole - host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate of the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a spatially variable M/L on the M_BH measurement of a second galaxy. By considering stellar M/L variations in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how assuming a constant M/L biases the estimated black hole masses.
Based on our upper limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere-of-influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of the axisymmetric assumption for the accuracy of M_BH in triaxial galaxies, and its impact on the black hole - host galaxy relation, needs to be carefully examined in the future.
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution to take the effects of different tracers (and methods) into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Adding them to the published early-type galaxies shifts the M_BH - effective velocity dispersion relation towards a shallower slope, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
The instrumental -er suffix
(2019)
According to recent literature, sodium bicarbonate (NaHCO3) has been proposed as a performance-enhancing aid that reduces acidosis during exercise. The aim of the current review is to investigate whether the duration of exercise is an essential factor for the effect of NaHCO3. To collect the latest studies from the electronic database PubMed, the publication period was restricted from December 2006 to December 2016; the search was updated in July 2018. The studies were divided into exercise durations of > 4 or ≤ 4 minutes for easier comparability of the effects in different exercises. Only randomized controlled trials were included in this review. Of the 775 studies, 35 met the inclusion criteria. Study design, subjects, effects, and outcome criteria were inconsistent throughout the studies. Seventeen of these studies reported performance-enhancing effects after supplementing NaHCO3. Eleven of the twenty studies with an exercise duration of ≤ 4 minutes showed positive and four showed mixed results after supplementing NaHCO3. On the other hand, six of the fifteen studies with an exercise duration of > 4 minutes showed performance-enhancing effects and two showed mixed results. Consequently, the duration of exercise might influence the performance-enhancing effect of NaHCO3 supplementation, but to what extent remains unclear due to the inconsistencies in the study results.
The increasing age of the worldwide population is a major contributor to the rising prevalence of major pathologies and diseases such as type 2 diabetes, which is mediated by massive insulin resistance and a decline in functional beta-cell mass and is highly associated with an elevated incidence of obesity. Thus, the impact of aging, under physiological conditions and in combination with diet-induced metabolic stress, on characteristics of pancreatic islets and beta-cells, with a focus on functionality and structural integrity, was investigated in the present dissertation.
Primarily induced by malnutrition due to chronic and excessive intake of high-caloric diets containing large amounts of carbohydrates and fats, obesity followed by systemic inflammation and peripheral insulin resistance develops over time, initiating metabolic stress conditions. Elevated insulin demands initiate an adaptive response by beta-cell mass expansion due to increased proliferation, but prolonged stress conditions drive beta-cell failure and loss. Aging has also been shown to affect beta-cell functionality and morphology, in particular through proliferative limitations. However, most studies in rodents were performed under beta-cell challenging conditions, such as high-fat diet interventions. Thus, in the first part of the thesis (publication I), a characterization of age-related alterations of pancreatic islets and beta-cells was performed using plasma samples and pancreatic tissue sections of standard diet-fed C57BL/6J wild-type mice in several age groups (2.5, 5, 10, 15 and 21 months).
Aging was accompanied by a decreased but sustained islet proliferative potential as well as an induction of cellular senescence. This was associated with a progressive islet expansion to maintain normoglycemia throughout the lifespan. Moreover, beta-cell function and mass were not impaired, although the formation and accumulation of advanced glycation end products (AGEs) occurred, located predominantly in the islet vasculature and accompanied by an induction of oxidative and nitrosative (redox) stress.
The nutritional behavior throughout the human lifespan, however, is not restricted to a balanced diet. This emphasizes the importance of investigating malnutrition through the intake of high-energy diets, which induces metabolic stress conditions that, synergistically with aging, might amplify the detrimental effects on the endocrine pancreas. Using diabetes-prone NZO mice aged 7 weeks, fed a dietary regimen of carbohydrate restriction for different periods (young mice - 11 weeks, middle-aged mice - 32 weeks) followed by a carbohydrate intervention for 3 weeks, offered the opportunity to distinguish the effects of diet-induced metabolic stress at different ages on the functionality and integrity of pancreatic islets and their beta-cells (publication II, manuscript).
Interestingly, while young NZO mice exhibited massive hyperglycemia in response to diet-induced metabolic stress accompanied by beta-cell dysfunction and apoptosis, middle-aged animals revealed only moderate hyperglycemia by the maintenance of functional beta-cells. The loss of functional beta-cell mass in islets of young mice was associated with reduced expression of PDX1 transcription factor, increased endocrine AGE formation and related redox stress as well as TXNIP-dependent induction of the mitochondrial death pathway. Although the amounts of secreted insulin and the proliferative potential were comparable in both age groups, islets of middle-aged mice exhibited sustained PDX1 expression, almost regular insulin secretory function, increased capacity for cell cycle progression as well as maintained redox potential.
The results of the present thesis indicate a loss of functional beta-cell mass in young diabetes-prone NZO mice, occurring through redox imbalance and induction of apoptotic signaling pathways. In contrast, aging under physiological conditions in C57BL/6J mice and in combination with diet-induced metabolic stress in NZO mice does not appear to have adverse effects on the functionality and structural integrity of pancreatic islets and beta-cells, which is associated with adaptive responses to changing metabolic demands. However, considering the detrimental effects of aging, it has to be assumed that the compensatory potential of mice might be exhausted at a later point in time, finally leading to a loss of functional beta-cell mass and the onset and progression of type 2 diabetes.
The polygenic, diabetes-prone NZO mouse is a suitable model for the investigation of human obesity-associated type 2 diabetes. However, mice at an advanced age show an attenuated diabetic phenotype or do not respond to the dietary stimuli. This might be explained by the middle age of the mice, corresponding to a human age of about 38-40 years, at which the compensatory mechanisms of pancreatic islets and beta-cells towards metabolic stress conditions are presumably more active.
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at a redshift of roughly 6 < z < 10, or several 10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation: a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and determination of their redshifts is a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, and the double peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account, and construct an EW distribution function that is less survey-dependent and better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each with the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs above EW_0 > 240 Å of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 ± 110 Å, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
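The selection effect described above can be illustrated with a toy simulation: when detection depends on line flux (roughly EW times continuum flux), a shallower survey preferentially keeps high-EW objects, inflating their observed fraction. All distributions, limits, and numbers below are invented for the sketch and are not the thesis analysis.

```python
import random

rng = random.Random(1)

def mock_population(n):
    """(rest-frame EW in Angstrom, line flux in arbitrary units) per LAE."""
    pop = []
    for _ in range(n):
        ew = rng.expovariate(1 / 90)         # exponential EW, scale 90 A
        continuum = rng.lognormvariate(0, 1) # arbitrary continuum spread
        pop.append((ew, ew * continuum))     # line flux ~ EW * continuum
    return pop

def extreme_ew_fraction(pop, flux_limit, ew_cut=240):
    """Fraction of detected objects above the EW cut."""
    detected = [ew for ew, flux in pop if flux > flux_limit]
    return sum(ew > ew_cut for ew in detected) / len(detected)

pop = mock_population(50_000)
shallow = extreme_ew_fraction(pop, flux_limit=400)  # "wide"-like survey
deep = extreme_ew_fraction(pop, flux_limit=20)      # "deep"-like survey
```

The shallow-survey fraction comes out well above the deep-survey one even though both draw from the same underlying population, which is why an EW distribution function must fold in the survey limits before populations are compared.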
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
West of Potsdam’s city center lies the Golm Campus, the largest campus of the University of Potsdam. Its different buildings tell of the numerous institutions that were established at this site over the years: From the mid-1930s, the Walther Wever Barracks were located here. From 1943, it housed the Air Intelligence Division of the German Airforce Supreme Commander. In 1951, a training institution of the Ministry of State Security moved in, which existed until 1989 under different names. In July 1991, the newly founded University of Potsdam took over the premises, which are now part of the Potsdam-Golm Science Park.
The book takes you on a historic journey of the place and invites you to take a walk across today’s campus. The book includes over 110 photos and a detailed map.
Almost half of political life in the Turkish Republic has been experienced under state of emergency and state of siege policies. In spite of such a striking number and continuity in the deployment of legal emergency powers, there are only a few legal and political studies examining the reasons for such permanency in governing practices. To fill this gap, this paper aims to discuss one of the most important sources of the ‘permanent’ political crisis in the country: the historical evolution of legal emergency power. In order to highlight how these policies have intensified the highly fragile citizenship regime by weakening the separation of powers, repressing the use of political rights, and increasing the discretionary power of both the executive and judiciary authorities, the paper sheds light on the emergence and production of a specific form of legality based on the idea of emergency and the principle of executive prerogative. In that context, it aims to provide a genealogical explanation of the evolution of the exceptional form of the nation-state, which is based on the way political society, representation, and legitimacy have been instituted and the accompanying failure of the ruling classes in building hegemony in the country.
The Forgotten War: Yemen
(2019)
The conflict in Yemen seems forgotten among the world’s severe humanitarian catastrophes. Nevertheless, since the conflict escalated around four years ago, it has become one of the worst humanitarian crises in recent history, with no end in sight. Thousands of people were killed, even more were displaced, and the country is facing tremendous food insecurity as well as the world’s largest cholera outbreak. It is no longer just a civil war between the Houthi and Hadi factions. International interests play a major role and have made it a proxy war between Saudi Arabia (and its allies) on one side and Iran on the other. This all happens at the expense of the civilian population. Therefore, it is urgent to analyse the actors involved and their interests within the conflict, and to search for possibilities to overcome it.
Creating fonts is a complex task that requires expert knowledge in a variety of domains. Often, this knowledge is not held by a single person, but spread across a number of domain experts. A central concept needed for designing fonts is the glyph, an elemental symbol representing a readable character. Required domains include designing glyph shapes, engineering rules to combine glyphs for complex scripts, and checking legibility. This process is most often iterative and requires communication in all directions. This report outlines a platform that aims to enhance the means of communication, describes our prototyping process, discusses complex font rendering and editing in a live environment, and presents an approach to generate code based on a user’s live-edits.
Microplastics (MP) constitute a widespread contaminant all over the globe. Rivers and wastewater treatment plants (WWTP) transport several million tons of MP annually into freshwaters, estuaries and oceans, where they provide an increasing amount of artificial surfaces for microbial colonization. As knowledge on MP-attached communities is insufficient for brackish ecosystems, we conducted exposure experiments in the coastal Baltic Sea, an in-flowing river and a WWTP within the drainage basin. Having reported on prokaryotic and fungal communities from the same set-up previously, we focus here on the entire eukaryotic communities. Using high-throughput 18S rRNA gene sequencing, we analyzed the eukaryotes colonizing two types of MP, polyethylene and polystyrene, and compared them to those in the surrounding water and on a natural surface (wood). More than 500 different taxa across almost all kingdoms of the eukaryotic tree of life were identified on MP, dominated by Alveolata, Metazoa, and Chloroplastida. The eukaryotic community composition on MP was significantly distinct from wood and the surrounding water, with overall lower diversity and the potentially harmful dinoflagellate Pfiesteria being enriched on MP. Co-occurrence networks, which include prokaryotic and eukaryotic taxa, hint at possibilities for dynamic microbial interactions on MP. This first report on total eukaryotic communities on MP in brackish environments highlights the complexity of MP-associated biofilms, potentially leading to altered microbial activities and hence changes in ecosystem functions.
Diverse communities can adjust their trait composition to altered environmental conditions, which may strongly influence their dynamics. Previous studies of trait-based models mainly considered only one or two trophic levels, whereas most natural systems are at least tritrophic. Therefore, we investigated how the addition of trait variation to each trophic level influences population and community dynamics in a tritrophic model. Examining the phase relationships between species of adjacent trophic levels informs about the strength of top-down or bottom-up control in non-steady-state situations. Phase relationships within a trophic level highlight compensatory dynamical patterns between functionally different species, which are responsible for dampening the community's temporal variability. Furthermore, even without trait variation, our tritrophic model always exhibits regions with two alternative states with either weak or strong nutrient exploitation, and correspondingly low or high biomass production at the top level. However, adding trait variation increased the basin of attraction of the high-production state and decreased the likelihood of a critical transition from the high- to the low-production state, with no apparent early warning signals. Hence, our study shows that trait variation enhances resource use efficiency, production, stability, and resilience of entire food webs.
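The three-level coupling discussed above can be sketched with a minimal tritrophic chain (logistic prey N, intermediate consumer P, top predator Z) integrated by a simple Euler scheme. This is NOT the trait-based model of the study; all rates are invented values chosen only to illustrate how the levels feed into one another.

```python
def step(n, p, z, dt=0.01):
    """One Euler step of a Lotka-Volterra-type chain with logistic prey."""
    dn = n * (1.0 - n) - 1.0 * n * p           # prey growth minus grazing
    dp = 0.5 * n * p - 0.2 * p - 1.0 * p * z   # consumer gain, loss, predation
    dz = 0.3 * p * z - 0.1 * z                 # top-predator gain and loss
    return n + dt * dn, p + dt * dp, z + dt * dz

n, p, z = 0.5, 0.3, 0.2
trajectory = [(n, p, z)]
for _ in range(5_000):                         # integrate to t = 50
    n, p, z = step(n, p, z)
    trajectory.append((n, p, z))
```

Tracking the phase relationships between N, P and Z along such a trajectory is what reveals the strength of top-down versus bottom-up control in non-steady-state situations; the study's model additionally lets trait variation shift these couplings within each level.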
Background: Telerehabilitation can contribute to the maintenance of successful rehabilitation regardless of location and time.
Objective: The aim of this study was to investigate a specific three-month interactive telerehabilitation program with regard to its effectiveness in improving physical functioning and return to work compared with usual aftercare.
Methods: From August 2016 to December 2017, 111 patients (mean age 54.9 years; SD 6.8; 54.3% female) with hip or knee replacement were enrolled in the randomized controlled trial. At discharge from inpatient rehabilitation and after three months, their distance in the 6-minute walk test was assessed as the primary endpoint. Other functional parameters, including health-related quality of life, pain, and time to return to work, were secondary endpoints.
Results: Patients in the intervention group performed telerehabilitation for an average of 55.0 minutes (SD 9.2) per week. Adherence was high, at over 75%, until the 7th week of the three-month intervention phase. Almost all the patients and therapists used the communication options. Both the intervention group (average difference 88.3 m; SD 57.7; P=.95) and the control group (average difference 79.6 m; SD 48.7; P=.95) increased their distance in the 6-minute walk test. Improvements in other functional parameters, as well as in quality of life and pain, were achieved in both groups. Notably, the proportion of patients who had returned to work was higher in the intervention group (64.6%) than in the control group (46.2%; P=.01).
Conclusions: The effect of the investigated telerehabilitation therapy in patients following knee or hip replacement was equivalent to usual aftercare in terms of functional testing, quality of life, and pain. Since a significantly higher return-to-work rate was achieved, this therapy might be a promising supplement to established aftercare.
How does the international Rule of Law apply to constrain the conduct of the Executive within a constitutional State that adopts a dualist approach to the reception of international law? This paper argues that, far from being inconsistent with the concept of the Rule of Law, the Executive within a dualist constitution has a self-enforcing obligation to abide by the obligations of the State under international law. This is not dependent on Parliament’s incorporation of treaty obligations into domestic law. It is the correlative consequence of the allocation to the Executive of the power to conduct foreign relations. The paper develops this argument in response to recent debate in the United Kingdom on whether Ministers have an obligation to comply with international law, a reference that the Government removed from the Ministerial Code. It shows that such an obligation is consistent both with four centuries of the practice of the British State and with principle.
The complete mitochondrial genome of a European fire-bellied toad (Bombina bombina) from Germany
(2019)
The European fire-bellied toad, Bombina bombina, is a small aquatic toad belonging to the family Bombinatoridae. The species is native to the lowlands of Central and Eastern Europe, where population numbers have been in decline in recent decades. Here, we present the first complete mitochondrial genome of the endangered European fire-bellied toad from Northern Germany, recovered using iterative mapping. Phylogenetic analyses including other representatives of the Bombinatoridae placed our German specimen as sister to a Polish B. bombina sequence with high support. This finding is congruent with the postulated Pleistocene history of the species. Our complete mitochondrial genome represents an important resource for further population analyses of the European fire-bellied toad, especially those found within Germany.
There has been much research regarding the perceptions, preferences, behaviour, and responses of people exposed to flooding and other natural hazards. Cross-sectional surveys have been the predominant method applied in such research. While cross-sectional data can provide a snapshot of a respondent’s behaviour and perceptions, it cannot be assumed that the respondent’s perceptions are constant over time. As a result, many important research questions relating to dynamic processes, such as changes in risk perceptions, adaptation behaviour, and resilience, cannot be fully addressed by cross-sectional surveys. To overcome these shortcomings, there has been a call for developing longitudinal (or panel) datasets in research on natural hazards, vulnerabilities, and risks. However, experiences with implementing longitudinal surveys in the flood risk domain (FRD), which pose distinct methodological challenges, are largely lacking. The key problems are sample recruitment, attrition rate, and attrition bias. We present a review of the few existing longitudinal surveys in the FRD. In addition, we investigate the potential attrition bias and attrition rates in a panel dataset of flood-affected households in Germany. We find little potential for attrition bias to occur. High attrition rates across longitudinal survey waves are the larger concern. A high attrition rate rapidly depletes the longitudinal sample. To overcome high attrition, longitudinal data should be collected as part of a multisector partnership to allow for sufficient resources to implement sample retention strategies. If flood-specific panels are developed, different sample retention strategies should be applied and evaluated in future research to understand how much-needed longitudinal surveying techniques can be successfully applied to the study of individuals threatened by flooding.
In 2015, Germany introduced a statutory hourly minimum wage that was not only universally binding but also set at a relatively high level. We discuss the short-run effects of this new minimum wage on a wide set of socio-economic outcomes, such as employment and working hours, earnings and wage inequality, dependent and self-employment, as well as reservation wages and satisfaction. We also discuss difficulties in the implementation of the minimum wage and in the measurement of its effects related to non-compliance and the suitability of data sources. Two years after the minimum wage introduction, the following conclusions can be drawn: while hourly wages increased for low-wage earners, some small negative employment effects are also identifiable. The desired effects on goals such as poverty and inequality reduction have not materialized in the short run. Instead, a tendency to reduce working hours is found, which dampens the intended positive impact on monthly income. Additionally, the level of non-compliance was substantial in the short run, drawing attention to the problems of implementing such a wide-reaching policy.
The Book of Radiance
(2019)
International adjudication is currently under assault, encouraging a number of States to withdraw, or to consider withdrawing, from treaties providing for international dispute settlement. This Working Paper argues that the act of treaty withdrawal is not merely the unilateral executive exercise of the individual sovereign prerogative of a State. International law places checks upon the exercise of withdrawal, recognising that it is an act that by its nature affects the interests of other States parties, which have a collective interest in constraining withdrawal. National courts have a complementary function in restraining unilateral withdrawal in order to support the domestic constitution. The arguments advanced against international adjudication in the name of popular democracy at the national level can serve as a cloak for the exercise of executive power unrestrained by law. The submission by States of their disputes to peaceful settlement through international adjudication is central, not incidental, to the successful operation of the international legal system.
Introduction: Cystic fibrosis (CF) is a genetic disease which disrupts the function of an epithelial surface anion channel, CFTR (cystic fibrosis transmembrane conductance regulator). Impairment of this channel leads to inflammation and infection in the lung, causing the majority of morbidity and mortality. However, CF is a multiorgan disease affecting many tissues, including vascular smooth muscle. Studies have revealed that young people with cystic fibrosis who lack inflammation and infection still demonstrate vascular endothelial dysfunction, measured via flow-mediated dilation (FMD). In other disease cohorts, e.g. diabetic and obese, endurance exercise interventions have been shown to improve or attenuate this impairment. However, long-term exercise interventions are risky, as well as costly in terms of time and resources. Nevertheless, emerging research has correlated the acute effects of exercise with its long-term benefits and advocates the study of acute exercise effects on FMD prior to longitudinal studies. The acute effects of exercise on FMD have not previously been examined in young people with CF, but could yield insights into the potential benefits of long-term exercise interventions.
The aims of these studies were to 1) develop and test the reliability of the FMD method and its applicability to study acute exercise effects; 2) compare baseline FMD and the acute exercise effect on FMD between young people with and without CF; and 3) explore associations between the acute effects of exercise on FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity or inflammatory hsCRP levels.
Methods: Thirty young volunteers (10 people with CF, 10 non-CF and 10 non-CF active matched controls) between the ages of 10 and 30 years completed blood draws, pulmonary function tests, maximal exercise capacity tests and baseline FMD measurements, before returning approximately 1 week later and performing 30 minutes of constant-load training at 75% HRmax. FMD measurements were taken prior to, immediately after, 30 minutes after and 1 hour after the constant-load training. ANOVAs and repeated-measures ANOVAs were employed to explore differences between groups and timepoints, respectively. Linear regression was used to assess associations between FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity and inflammatory hsCRP levels. For all comparisons, statistical significance was set at α = 0.05.
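As a rough illustration of the statistical pipeline just described, a between-group one-way ANOVA and a linear regression against VO2 peak can be sketched with SciPy. The numbers below are synthetic, only loosely mirroring the reported group means; they are not the study's data, and variable names such as `fmd_cf` and `vo2_peak` are assumptions made for this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic baseline FMD (%) for three groups of 10 participants each;
# group means loosely mirror the reported values (CF lower than controls).
fmd_cf = rng.normal(5.2, 1.0, 10)
fmd_non_cf = rng.normal(8.3, 1.0, 10)
fmd_active = rng.normal(9.1, 1.0, 10)

# One-way ANOVA: do the three groups differ at a single timepoint?
f_stat, p_group = stats.f_oneway(fmd_cf, fmd_non_cf, fmd_active)

# Simple linear regression of (synthetic) post-training FMD on VO2 peak.
vo2_peak = rng.normal(40.0, 8.0, 30)              # ml/kg/min, assumed scale
fmd_post = 0.2 * vo2_peak + rng.normal(0.0, 1.0, 30)
fit = stats.linregress(vo2_peak, fmd_post)
```

A repeated-measures ANOVA over the four FMD timepoints would additionally need a within-subject model (e.g. statsmodels' `AnovaRM`), which is omitted here for brevity.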
Results: Young people with CF presented with decreased lung function and maximal exercise capacity compared to matched controls. Baseline FMD was also significantly decreased in the CF group (CF: 5.23% vs non-CF: 8.27% vs non-CF active: 9.12%). Immediately post-training, FMD was significantly attenuated (by approximately 40%) in all groups, with the CF group still demonstrating the lowest FMD. Follow-up measurements of FMD revealed a slow recovery towards baseline values 30 min post-training and improvements in the CF and non-CF active groups 60 min post-training. Linear regression revealed significant correlations between maximal exercise capacity (VO2 peak), BMI and FMD immediately post-training.
Conclusion: These new findings confirm that CF vascular endothelial dysfunction can be acutely modified by exercise and underline the importance of exercise in CF populations. The potential benefits of long-term exercise interventions on vascular endothelial dysfunction in young people with CF warrant further investigation.
The mitochondrial ATP-binding cassette (ABC) transporters ABCB7 in humans, Atm1 in yeast and ATM3 in plants are highly conserved in their overall architecture and particularly in their glutathione binding pocket located within the transmembrane spanning domains. These transporters have attracted interest in the last two decades based on their proposed role in connecting the mitochondrial iron-sulfur (Fe–S) cluster assembly with its cytosolic Fe–S cluster assembly (CIA) counterpart. So far, the specific compound that is transported across the membrane remains unknown. In this report we characterized the ABCB7-like transporter Rcc02305 in Rhodobacter capsulatus, which shares 47% amino acid sequence identity with its mitochondrial counterpart. The constructed interposon mutant strain in R. capsulatus displayed increased levels of intracellular reactive oxygen species without a simultaneous increase in cellular iron levels. The inhibition of endogenous glutathione biosynthesis resulted in an increase of total glutathione levels in the mutant strain. Bioinformatic analysis of the amino acid sequence motifs revealed a potential aminotransferase class-V pyridoxal-5′-phosphate (PLP) binding site that overlaps with the Walker A motif within the nucleotide binding domains of the transporter. PLP is a well-characterized cofactor of L-cysteine desulfurases such as IscS and NFS1, where it has a role in the formation of a protein-bound persulfide group. We therefore suggest renaming the ABCB7-like transporter Rcc02305 in R. capsulatus to PexA, for PLP binding exporter. We further suggest that this ABC transporter in R. capsulatus is involved in the formation and export of polysulfide species to the periplasm.
A one-step, moderate-energy vibrational emulsification method was successfully employed to produce thermo-responsive olive/silicone-based Janus emulsions stabilized by poly(N,N-diethylacrylamide) carrying 0.7 mol% oleoyl side chains. Completely engulfed emulsion droplets remained stable at room temperature and could be destabilized on demand upon heating to the transition temperature of the polymeric stabilizer. Time-dependent light micrographs demonstrate the temperature-induced breakdown of the Janus droplets, which opens up new application prospects, for instance in biocatalysis.
As digital media infiltrate an increasingly greater proportion of our lives, concern about the possibility of various forms of technology addiction has emerged. Researchers have developed a variety of self-report scales for technology addiction in areas such as the Internet, smartphones, video games, social network sites (SNS) and television. However, no uniform criteria or definition exists for technology addiction, and the dimensions used to measure specific outcomes lack a conceptual standard. Therefore, linkages between the dimensions of these technology areas have not been examined by the research community in a broader way, as would be needed to develop a uniform technology addiction scale.
In this regard, firstly, a theoretical model was developed in order to extract common technology dimensions. Secondly, a systematic literature review in the areas of the Internet, smartphones, video games and SNS was conducted in order to extract the dimensions used. To identify relevant studies, nine databases (Google Scholar, ScienceDirect, PubMed, Emerald Insight, Wiley, SpringerLink, ACM, IEEE and JSTOR) were searched, producing 4698 results, of which 50 studies met the inclusion criteria. Thirdly, the developed theoretical model was utilized in order to determine the dimensions in each of the identified scales.
Based on analysis of the dimensional distributions, the findings suggest that there are common dimensions across technology areas, such as “compulsive use” and “negative outcomes”, but also differences in dimensions across areas, such as “social comfort” and “mood regulation”, which are used more in the area of SNS. Moreover, new dimensions for technology addiction, such as “cognitive absorption” and “utility and function loss”, were extracted; these should be considered, as they have not yet been researched in a broader way. In addition, no gold standard for the conceptual criteria or definition of technology addiction has been developed yet.
Technical report
(2019)
Design and implementation of service-oriented architectures raise a huge number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering, which proposes to design complex software solutions as a collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
The present dissertation about teachers’ cultural diversity beliefs and culturally responsive practices includes a general introduction (Chapter 1), a systematic literature review (Chapter 2), three empirical studies (Chapters 3, 4, and 5), and ends with a general discussion and conclusion (Chapter 6). The major focus of investigation lay in creating a deeper understanding of teachers’ beliefs about cultural diversity and how those beliefs are related to teaching practices that could or could not be considered culturally responsive. In this dissertation, I relied on insights from theoretical perspectives from the field of psychology, such as social cognitive theory and intergroup ideologies, as well as from the field of multicultural education, such as culturally responsive teaching.
In Chapter 1, I provide the background of this dissertation, with contextual information regarding the German educational system, the theoretical framework used and the main research objectives of each study.
In Chapter 2, I conducted a systematic review of the existing international studies on trainings addressing cultural diversity beliefs with pre-service teachers. More specifically, the aims of the systematic literature review were (1) to provide a description of the main components and contextual characteristics of teacher trainings targeting cultural diversity beliefs, (2) to report the training effects, and (3) to detail the methodological strengths and weaknesses of these studies. By examining these three aspects in a single review, I took an integrated approach. To review the final pool of studies (N = 36), I used a descriptive and narrative approach, relying primarily on the use of words and text to summarise and explain the findings of the synthesis.
The three empirical studies that follow all highlight aspects of how far and how teacher beliefs about cultural diversity translate into real-world practices in schools. In Chapter 3, to extend the validity of culturally responsive teaching to the German context, I aimed at verifying the dimensional structure of the German version of the Culturally Responsive Classroom Management Self-Efficacy Scale (CRCMSES; Siwatu, Putman, Starker-Glass, & Lewis, 2015). I conducted exploratory and confirmatory factor analyses and ran correlations between the subscales of the CRCMSES and a measure of cultural diversity-related stress. Data (n = 504) used for the first empirical study (Chapter 3) were collected in the InTePP project (Inclusive Teaching Professionalization Panel), in which pre-service teachers’ competencies and beliefs were assessed longitudinally at two universities: the University of Potsdam and the University of Cologne.
In the second empirical study, which forms Chapter 4, the focus is on teachers’ practices resembling school approaches to cultural diversity. In this study, I investigated two research questions: (1a) What types of descriptive norms regarding cultural diversity are perceived by teachers and students with and without an immigrant background, and (1b) what is their degree of congruence? Additionally, I was interested in how teachers’ and students’ perceptions of descriptive norms about cultural diversity relate to practices and artefacts in the physical and virtual school environment. Data for the second empirical study (Chapter 4) were previously collected in a dissertation project of Dr. Maja Schachner funded by the federal program “ProExzellenz” of the Free State of Thuringia. Adopting a mixed-methods research design, I conducted a secondary analysis of data from teachers (n = 207) and students (n = 1,644) gathered in 22 secondary schools in south-west Germany. Additional sources of data in this study were pictures of school interiors (halls and corridors) and sixth-grade classrooms’ walls (n = 2,995), and screenshots from each school website (n = 6,499).
Chapter 5 addresses the question of how culturally responsive teaching, teacher cultural diversity beliefs, and self-reflection on own teaching are related. More specifically, in this study I addressed two research questions: (1) How does CRT relate to teachers’ beliefs about incorporating cultural diversity content into daily teaching and learning activities? And (2) how does the level of teachers’ self-reflection on their own teaching relate to CRT?
For this last empirical chapter, I conducted a multiple case study with four ethnic German teachers who work in one culturally and ethnically diverse high school in Berlin, using classroom video observations and post-observation interviews.
In the final chapter (Chapter 6), I summarise the main findings of the systematic literature review and the three empirical studies, and discuss their scientific and practical implications.
This dissertation makes a significant contribution to the field of educational science by furthering the understanding of culturally responsive teaching in terms of its measurement, its focus on both beliefs and practices and the link between the two, and its theoretical, practical, and future research implications.
The Ram Bible (Tanakh Ram) is a recently published Bible edition printed in two columns: the right-hand column features the original biblical Hebrew text and the left-hand column features the translation of the Bible into a high-register literary Israeli (Reclaimed Hebrew). The Ram Bible edition has gained impressive academic and popular attention. This paper looks at differences between academics, teachers, students, media personalities and senior officials in the education system regarding their attitude to the Ram Bible. Our study reveals that Bible teachers and students who make frequent use of this edition understand its contribution to comprehending the biblical language, stories, and ideas. Opponents of the Ram Bible are typically administrators and theoretician scholars who advocate the importance of teaching the Bible but do not actually teach it themselves. We argue that the fundamental difference between biblical Hebrew and Israeli makes the Hebrew Bible incomprehensible to native Israeli speakers. We explain the advantages of employing tools such as the Ram Bible.
The impact of the orientation of zwitterionic groups, with respect to the polymer backbone, on the antifouling performance of thin hydrogel films made of polyzwitterions is explored. In an extension of the recent discussion about differences in the behavior of polymeric phosphatidylcholines and choline phosphates, a quasi-isomeric set of three poly(sulfobetaine methacrylate)s is designed for this purpose. The design is based on the established monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio-propane-1-sulfonate and two novel sulfobetaine methacrylates, in which the positions of the cationic and the anionic groups relative to the polymerizable group, and thus also to the polymer backbone, are altered. The effect of the varied segmental dipole orientation on their water solubility, wetting behavior by water, and fouling resistance is compared. As model systems, the adsorption of the model proteins bovine serum albumin (BSA), fibrinogen, and lysozyme onto films of the various polyzwitterion surfaces is studied, as well as the settlement of a diatom (Navicula perminuta) and barnacle cyprids (Balanus improvisus) as representatives of typical marine fouling communities. The results demonstrate the important role of the orientation of the zwitterionic group in the polymer behavior and fouling resistance.
The fabrication of 1D nanostrands composed of stimuli-responsive microgels has been shown in this work. Microgels are well-known materials able to respond to various stimuli from the outer environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. Through carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current).
In this work, the aim was to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen due to their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method which enables the fabrication of nanostructures in a reproducible manner and with a high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was shown using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si wafers and glass slides). It was shown that the fabrication of nanoarrays was feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Towards the fabrication of 1D microgel strands, interparticle connectivity was required. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, it was found that these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM comonomer, and be detached from the substrate due to their lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It has also been shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach by incorporating cyclodextrin comonomers into a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After the cross-linking capability had been successfully tested in solution, the cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, once the single arrays came into contact with each other, they agglomerated. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) were tried but were unsuccessful. On closer consideration, entropy effects were taken into account, which favor the release of the complexed diazobenzene linker and thus lead to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results in terms of less pronounced agglomeration (Figure 77). Thus, this approach could be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in zigzag formation provided the best compromise between stability in dispersion (see Figure 44a and Figure 51) and sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction after a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid cell setup. It could be observed that the strands required a higher load force than single microgels to be detached from the surface. However, it was not possible to detach the strands with the AFM in a controllable manner; attempts resulted in the complete removal of single microgel particles or tore the strands off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, to hinder adsorption of the strands, coating the surface of the substrates with a repulsive polymer film was found to be beneficial. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, with these methods the detection of the anisotropic stimuli-responsive contraction of the freely moving microgel strands was not possible. To summarize, the flexibility of the microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, having few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can in the future be conducted to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coating [235]). This measure would make the discussed alignment methods more broadly applicable. As shown herein, the assembly methods enable versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles, Figure 14) to expand the possibilities of microgel alignment and to precisely control aspect ratios (e.g. microgel rods with homogeneous size distributions).
Oscillatory systems under weak coupling can be described by the Kuramoto model of phase oscillators. Kuramoto phase oscillators have diverse applications, ranging from natural phenomena such as communication between neurons and the collective dynamics of political opinions to engineered systems such as Josephson junctions and synchronized electric power grids. This thesis comprises the author's contributions to the theoretical framework of coupled Kuramoto oscillators and to the understanding of non-trivial N-body dynamical systems via their reduced mean-field dynamics.
The main content of this thesis is composed of four parts. First, a partially integrable theory of globally coupled identical Kuramoto oscillators is extended to include pure higher-mode coupling. The extended theory is then applied to a non-trivial higher-mode coupled model, which has been found to exhibit asymmetric clustering. Using the developed theory, we could predict a number of features of the asymmetric clustering using only information about the initial state.
The second part presents an iterated discrete-map approach to simulating phase dynamics. The proposed map, a Moebius map, not only provides fast computation of phase synchronization but also precisely reflects the underlying group structure of the dynamics. We then compare the iterated-map dynamics with various analogous continuous-time dynamics. We are able to replicate known phenomena such as the synchronization transition of the Kuramoto-Sakaguchi model for oscillators with distributed natural frequencies, and chimera states for identical oscillators under non-local coupling.
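The group-theoretic idea can be illustrated with a minimal sketch (an illustration under assumptions, not the thesis's actual code): for identical oscillators with mean-field sinusoidal coupling, a small time step can be realized as a Moebius transformation of the unit disk, which keeps the phases exactly on the unit circle. The coupling strength K, step size dt, and ensemble size N below are invented for illustration.

```python
import numpy as np

def moebius_step(z, w):
    # Moebius transformation of the unit disk, M(z) = (z + w) / (1 + conj(w) z),
    # with |w| < 1; it maps the unit circle exactly onto itself.
    return (z + w) / (1 + np.conj(w) * z)

# Identical Kuramoto oscillators, dtheta/dt = Im(K Z exp(-i theta)),
# advanced by iterating small Moebius maps with parameter w = (K Z / 2) dt.
rng = np.random.default_rng(0)
N, K, dt = 100, 1.0, 0.05
z = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # phases as points on the circle
for _ in range(2000):
    Z = z.mean()                                # Kuramoto order parameter
    z = moebius_step(z, 0.5 * K * Z * dt)
print(abs(z.mean()))                            # approaches 1: synchronization
```

Unlike an Euler step on the angles, the map cannot push phases off the circle, and composing such maps mirrors the Moebius group action underlying the exact dynamics.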
The third part concerns a particular model of repulsively coupled identical Kuramoto-Sakaguchi oscillators under common random forcing, which can be shown to be partially integrable. Via both numerical simulations and theoretical analysis, we determine that such a model cannot exhibit stationary multi-cluster states, contrary to numerical findings in the previous literature. On further investigation, we find that the previously reported multi-cluster states arise from the accumulation of discretization errors inherent in the integration algorithms, which introduce higher-mode couplings into the model and thereby violate the partial integrability condition.
Lastly, we derive the microscopic cross-correlation of globally coupled non-identical Kuramoto oscillators under common fluctuating forcing. The correlation arises naturally in finite populations, due to the non-trivial fluctuations of the mean-field. In an idealized model, we approximate the finite-size fluctuations by Gaussian white noise. The analytical approximation qualitatively matches the measurements in numerical experiments; however, significant inconsistencies remain due to other, periodic components inherent in the fluctuations of the mean-field.
Surface modification by polyzwitterions of the sulfabetaine-type, and their resistance to biofouling
(2019)
Films of zwitterionic polymers are increasingly explored for conferring fouling resistance to materials. Yet the structural diversity of polyzwitterions is rather limited so far, and clear structure-property relationships are missing. Therefore, we synthesized a series of new polyzwitterions combining ammonium and sulfate groups in their betaine moieties, so-called poly(sulfabetaine)s. Their chemical structures were varied systematically, with the monomers carrying methacrylate, methacrylamide, or styrene moieties as polymerizable groups. High-molar-mass homopolymers were obtained by free radical polymerization. Although their solubilities in most solvents were very low, brine and lower fluorinated alcohols were effective solvents in most cases. A set of sulfabetaine copolymers containing about 1 mol % (based on the repeat units) of reactive benzophenone methacrylate was prepared, spin-coated onto solid substrates, and photo-cured. The resistance of these films against nonspecific adsorption of two model proteins (bovine serum albumin (BSA) and fibrinogen) was explored and directly compared with a set of references. The various polyzwitterions reduced protein adsorption strongly compared to films of poly(n-butyl methacrylate), which were used as a negative control. The poly(sulfabetaine)s generally showed even somewhat higher anti-fouling activity than their poly(sulfobetaine) analogues, though the detailed efficacies depended on the individual polymer-protein pairs. The best samples approach the excellent performance of a poly(oligo(ethylene oxide) methacrylate) reference.
Nuclear lamins are nucleus-specific intermediate filaments (IFs) found at the inner nuclear membrane (INM) of the nuclear envelope (NE). Together with nuclear envelope transmembrane proteins, they form the nuclear lamina and are crucial for gene regulation and for the mechanical robustness of the nucleus and the whole cell. Recently, we characterized Dictyostelium NE81 as an evolutionarily conserved lamin-like protein, on both the sequence and the functional level. Here, we show on the structural level that Dictyostelium NE81 is also capable of assembling into filaments, just like metazoan lamins. Using field-emission scanning electron microscopy, we show that NE81 expressed in Xenopus oocytes forms filamentous structures with an overall appearance highly reminiscent of Xenopus lamin B2. The in vitro assembly properties of recombinant His-tagged NE81 purified from Dictyostelium extracts are very similar to those of metazoan lamins.
Super-resolution stimulated emission depletion (STED) and expansion microscopy (ExM), as well as transmission electron microscopy of negatively stained purified NE81, demonstrated its capability of forming filamentous structures under low-ionic-strength conditions. These results recommend Dictyostelium as a non-mammalian model organism with a well-characterized nuclear envelope involving all relevant protein components known in animal cells.
Peroxisome biogenesis disorders (PBDs) are untreatable hereditary diseases with a broad range of severity. Approximately 65% of patients are affected by mutations in the peroxins Pex1 and Pex6. These proteins form the heteromeric Pex1/Pex6 complex, which is important for protein import into peroxisomes. To date, no structural data are available for this AAA+ ATPase complex. However, a wealth of information can be transferred from low-resolution structures of the yeast scPex1/scPex6 complex and from homologous, well-characterized AAA+ ATPases. We review the abundant records of missense mutations described in PBD patients with the aim of classifying and rationalizing them by mapping them onto a homology model of the human Pex1/Pex6 complex. Several mutations concern functionally conserved residues that are implicated in ATP hydrolysis and substrate processing. In contrast to patients with fold-destabilizing mutations, patients suffering from function-impairing mutations may not benefit from stabilizing agents, which have been reported as potential therapeutics for PBD patients.
Carbonate-rich silicate and carbonate melts play a crucial role in deep Earth magmatic processes and their melt structure is a key parameter, as it controls physical and chemical properties. Carbonate-rich melts can be strongly enriched in geochemically important trace elements. The structural incorporation mechanisms of these elements are difficult to study because such melts generally cannot be quenched to glasses, which are usually employed for structural investigations. This thesis investigates the influence of CO2 on the local environments of trace elements contained in silicate glasses with variable CO2 concentrations as well as in silicate and carbonate melts. The compositions studied include sodium-rich peralkaline silicate melts and glasses and carbonate melts similar to those occurring naturally at Oldoinyo Lengai volcano, Tanzania.
The local environments of the three elements yttrium (Y), lanthanum (La) and strontium (Sr) were investigated in synthesized glasses and melts using X-ray absorption fine structure (XAFS) spectroscopy. In particular, extended X-ray absorption fine structure (EXAFS) spectroscopy provides element-specific information on local structure, such as bond lengths, coordination numbers and the degree of disorder. To cope with the enhanced structural disorder present in glasses and melts, the EXAFS analysis was based on fitting approaches using an asymmetric distribution function as well as a correlation model according to bond valence theory. First, silicate glasses quenched from high-pressure/temperature melts with up to 7.6 wt % CO2 were investigated. In strongly and extremely peralkaline glasses the local structure of Y is unaffected by the CO2 content (with oxygen bond lengths of ~2.29 Å). In contrast, the bond lengths for Sr-O and La-O increase with increasing CO2 content in the strongly peralkaline glasses from ~2.53 to ~2.57 Å and from ~2.52 to ~2.54 Å, respectively, while they remain constant in extremely peralkaline glasses (at ~2.55 Å and ~2.54 Å, respectively). Furthermore, silicate and unquenchable carbonate melts were investigated in situ at high pressure/temperature conditions (2.2 to 2.6 GPa, 1200 to 1500 °C) using a Paris-Edinburgh press. A novel design of the pressure-medium assembly for this press was developed, which features increased mechanical stability as well as enhanced transmittance at the relevant energies to allow transmission EXAFS of elements present at low contents. Compared to the glasses, the bond lengths of Y-O, La-O and Sr-O are elongated by up to +3 % in the melt and exhibit more asymmetric pair distributions. For all investigated silicate melt compositions the Y-O bond lengths were found to be constant at ~2.37 Å, while in the carbonate melt the Y-O length increases slightly to 2.41 Å.
The La-O bond lengths, in turn, increase systematically over the whole silicate-carbonate melt join from 2.55 to 2.60 Å. The Sr-O bond lengths in the melts increase from ~2.60 to 2.64 Å from the pure silicate to the silicate-bearing carbonate composition and remain constant at this elevated value within the carbonate region.
For comparison and deeper insight, the glass and melt structures of Y- and Sr-bearing sodium-rich silicate to carbonate compositions were simulated in an explorative ab initio molecular dynamics (MD) study. The simulations confirm the observed patterns of CO2-dependent local changes around Y and Sr and additionally provide further insight into the detailed incorporation mechanisms of the trace elements and of CO2. Principal findings include that, in sodium-rich silicate compositions, carbon is mainly incorporated either as a free carbonate group or sharing one oxygen with a network former (Si or [4]Al) to form a non-bridging carbonate. Of minor importance are bridging carbonates between two network formers; here, a clear preference for two [4]Al as the adjacent network formers is observed, beyond what a statistical distribution would suggest. In C-bearing silicate melts minor amounts of molecular CO2 are present, which is almost entirely dissolved as carbonate in the quenched glasses.
The combination of experiment and simulation provides exceptional insight into glass and melt structures. The new data are interpreted on the basis of bond valence theory and are used to deduce potential mechanisms for the structural incorporation of the investigated elements, which allow predictions of their partitioning behavior in natural melts. Furthermore, the data provide unique insight into the dissolution mechanisms of CO2 in silicate melts and into the carbonate melt structure. For the latter, a structural model is suggested that is based on planar CO3 groups linking 7- to 9-fold coordinated cation polyhedra, in accordance with the structural units found in the Na-Ca carbonate nyerereite. Ultimately, the outcome of this study helps to rationalize the unique physical properties and geological phenomena related to carbonated silicate and carbonate melts.
Fold and thrust belts are characteristic features of collisional orogens that grow laterally through time by deforming the upper crust in response to stresses caused by convergence. The propagation of deformation in the upper crust is accommodated by shortening along major folds and thrusts. The formation of these structures is influenced by the mechanical strength of décollements, the basement architecture, the presence of preexisting structures, and the taper of the wedge. These factors control not only the sequence of deformation but also cause differences in structural style.
The Himalayan fold and thrust belt exhibits significant differences in structural style from east to west. The external zone of the Himalayan fold and thrust belt, also called the Subhimalaya, has been extensively studied to understand the temporal development and the differences in structural style in Bhutan, Nepal and India; however, the Subhimalaya in Pakistan remains poorly studied. The Kohat and Potwar fold and thrust belts (herein called Kohat and Potwar) represent the Subhimalaya in Pakistan. The Main Boundary Thrust (MBT) marks the northern boundary of both Kohat and Potwar, showing that these belts are genetically linked to foreland-vergent deformation within the Himalayan orogen, despite the pronounced contrast in structural style. This contrast becomes more pronounced toward the south, where the active strike-slip Kalabagh Fault Zone links with the Kohat and Potwar range fronts, known as the Surghar Range and the Salt Range, respectively. The Surghar and Salt Ranges developed above the Surghar Thrust (SGT) and the Main Frontal Thrust (MFT). To understand the structural style and spatiotemporal development of the major structures in Kohat and Potwar, I used structural modeling and low-temperature thermochronology in this study. The structural modeling is based on the construction of balanced cross-sections integrating surface geology, seismic reflection profiles and well data. To constrain the timing and magnitude of exhumation, I used apatite (U-Th-Sm)/He (AHe) and apatite fission track (AFT) dating. The results obtained from both methods are combined to document the Paleozoic to Recent history of Kohat and Potwar.
The results of this research suggest two major events in the deformation history. The first major deformation event is related to Late Paleozoic rifting associated with the development of the Neo-Tethys Ocean. The second is related to the Late Miocene to Pliocene development of the Himalayan fold and thrust belt in the Kohat and Potwar. The Late Paleozoic rifting is deciphered by inverse thermal modeling of detrital AFT and AHe ages from the Salt Range. Rifting in this area created normal faults whose movement resulted in the exhumation/erosion of Early to Middle Paleozoic strata, forming a major unconformity between Cambrian and Permian strata that is exposed today in the Salt Range. These Late Paleozoic normal faults played an important role in localizing the Miocene-Pliocene deformation in this area. The combination of structural reconstructions and thermochronologic data suggests that deformation initiated at 15±2 Ma on the SGT ramp in the southern part of Kohat. The early movement on the SGT accreted the foreland into the Kohat deforming wedge, forming the range front. The development of the MBT at 12±2 Ma formed the northern boundary of Kohat and Potwar. Deformation propagated south of the MBT on double décollements in the Kohat and on a single basal décollement in the Potwar. The double décollement in the Kohat adopted an active roof-thrust deformation style that resulted in disharmonic structural styles in the upper and lower parts of the stratigraphic section. Incremental shortening led to the development of duplexes in the subsurface between the two décollements and to imbrication above the roof thrust. Tectonic thickening caused by the duplexes resulted in cooling and exhumation above the roof thrust through removal of a thick sequence of molasse strata. The structural modeling shows that the ramps on which duplexes formed in Kohat continue as tip lines of fault-propagation folds in the Potwar.
The absence of a double décollement in the Potwar resulted in the preservation of a thick sequence of molasse strata there. The temporal data suggest that deformation propagated in-sequence from ~8 to 3 Ma in the northern parts of Kohat and Potwar; however, internal deformation in the Kohat was more intense, probably required to maintain a critical taper after a significant load was removed above the upper décollement. In the southern part of Potwar, a steeper basement slope (β≥3°) and the presence of salt at the base of the stratigraphic section allowed the complete preservation of the stratigraphic wedge, evidenced by very little internal deformation. Activation of the MFT at ~4 Ma allowed the Salt Range to become the range front of the Potwar. The removal of a large amount of molasse strata above the MFT ramp enhanced the role of salt in shaping the structural style of the Salt Range and the Kalabagh Fault Zone. Salt accumulation and migration resulted in the formation of normal faults in both areas. Salt migration in the Kalabagh Fault Zone has triggered out-of-sequence movement on ramps in the Kohat.
The amount of shortening calculated between the MBT and the SGT in Kohat is 75±5 km, and between the MBT and the MFT in Potwar it is 65±5 km. A comparable amount of shortening is thus accommodated in the Kohat and Potwar despite their different widths (70 km for Kohat, 150 km for Potwar). In summary, this research suggests that deformation switched between different structures during the last ~15 Ma through different modes of fault propagation, resulting in different structural styles and the out-of-sequence development of Kohat and Potwar.
Textbook concepts of diffusion- versus kinetic-control are well defined for reaction kinetics involving macroscopic concentrations of diffusive reactants, which are adequately described by rate constants, i.e., the inverse of the mean first-passage time to the reaction event. By contrast, it remains an important open question whether the mean first-passage time alone is a sufficient measure for biochemical reactions that involve nanomolar reactant concentrations. Here, using a simple yet generic, exactly solvable model, we study the effect of diffusion and chemical reaction limitations on the full reaction-time distribution. We show that it has a complex structure with four distinct regimes delineated by three characteristic time scales spanning a window of several decades. Consequently, the reaction times are defocused: no unique time scale characterises the reaction process, diffusion- and kinetic-control can no longer be disentangled, and it is imperative to know the full reaction-time distribution. We introduce the concepts of geometry- and reaction-control, and quantify each regime by calculating the corresponding reaction depth.
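The defocusing of reaction times can be made tangible with a quick Monte Carlo sketch (a toy analogue, not the paper's exactly solvable model): walkers diffuse on a line from a hypothetical start x0 until they first touch a target at the origin. The walker count, start point, and step cap below are invented, and walkers that never reach the target within the cap are censored at the cap.

```python
import numpy as np

# 1D random walkers starting at x0; the "reaction" occurs on first contact
# with the target at 0. We record the first-passage time of each walker.
rng = np.random.default_rng(2)
n_walk, n_step, x0 = 1000, 20000, 5
steps = rng.integers(0, 2, size=(n_walk, n_step), dtype=np.int8) * 2 - 1
paths = x0 + np.cumsum(steps, axis=1, dtype=np.int32)
hit = paths <= 0
# first time index at which the target is reached; censor walkers that never hit
times = np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, n_step)
print(f"median = {np.median(times):.0f}, mean = {times.mean():.0f}")
```

The mean comes out far larger than the median: no single time scale characterizes the process, which is exactly the defocusing the abstract refers to.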
Predator-prey interactions provide central links in food webs. These interactions are directly or indirectly impacted by a number of factors, ranging from physiological characteristics of individual organisms, through specifics of their interaction, to impacts of the environment. Such factors may generate the potential for predators and prey to apply different strategies. Within this thesis, I modelled predator-prey interactions and investigated a broad range of factors driving the application of certain strategies that affect the individuals or their populations. In doing so, I focused on phytoplankton-zooplankton systems as established model systems of predator-prey interactions.
At the level of predator physiology I proposed, and partly confirmed, adaptations to the fluctuating availability of co-limiting nutrients as beneficial strategies. These adaptations may allow organisms to store ingested nutrients or to regulate the effort put into nutrient assimilation. We found that the two strategies are beneficial at different fluctuation frequencies of the nutrients, but may positively interact at intermediate frequencies. The corresponding experiments supported our model results: the temporal structure of nutrient fluctuations indeed has strong effects on the juvenile somatic growth rate of Daphnia.
Predator co-limitation by energy and essential biochemical nutrients gave rise to another physiological strategy. High-quality prey species may render themselves indispensable in a scenario of predator-mediated coexistence by being the only source of essential biochemical nutrients, such as cholesterol. Thereby, the high-quality prey may even compensate for its lack of defense and ensure its persistence in competition with other, better defended prey species.
We found a similar effect in a model where algae and bacteria compete for nutrients. Here, being the only source of a compound required by the competitor (bacteria) prevented the competitive exclusion of the algae; in this case, the essential compound was the organic carbon provided by the algae. Again, being indispensable served as a prey strategy that ensured coexistence.
The latter scenario also gave rise to the application of the two metabolic strategies of autotrophy and heterotrophy by algae and bacteria, respectively. We found that their coexistence allowed resources that would otherwise be lost to be recycled in a microbial loop. Instead, these resources were made available to higher trophic levels, increasing the trophic transfer efficiency of the food web.
Besides these factors originating from the functioning or composition of individuals, the predation process itself comprises the next level of factors shaping the predator-prey interaction. Here, I focused on defensive mechanisms and investigated multiple scenarios of static or adaptive combinations of prey defense and predator offense. I confirmed and extended earlier reports on the coexistence-promoting effects of a partially lower palatability of the prey community. When bacteria and algae coexist, a higher palatability of the bacteria may increase the average predator biomass, with the side effect of making the population dynamics more regular; this may facilitate experimental investigations and interpretations. If defense and offense are adaptive, organisms can maximize their growth rates. Besides this fitness-enhancing effect, I found that co-adaptation may provide the predator-prey system with the flexibility to buffer external perturbations.
On top of these rather internal factors, environmental drivers also affect predator-prey interactions. I showed that environmental nutrient fluctuations may create a spatio-temporal resource heterogeneity that selects for different predator strategies. I hypothesized that this might favour either storage or acclimation specialists, depending on the frequency of the environmental fluctuations.
We found that many of these factors promote the coexistence of different strategies and may therefore support and sustain biodiversity. Thus, they might be relevant for the maintenance of crucial ecosystem functions that also affect us humans. Besides this, the richness of factors that impact predator-prey interactions might explain why so many species, especially in the planktonic regime, are able to coexist.
Being ignorant of key aspects of a strategic interaction can represent an advantage rather than a handicap. We study one particular context in which ignorance can be beneficial: iterated strategic interactions in which voluntary cooperation may be sustained into the final round if players voluntarily forego knowledge about the time horizon. We experimentally examine this option to remain ignorant about the time horizon in a finitely repeated two-person prisoners’ dilemma game. We confirm that pairs without horizon knowledge avoid the drop in cooperation that otherwise occurs toward the end of the game. However, this effect is superposed by cooperation declining more rapidly in pairs without horizon knowledge during the middle phase of the game, especially if players do not know that the other player also wanted to remain ignorant of the time horizon.
Narratives shape our understanding of the world. They convey values and norms and point to desirable future developments. In this way, they justify and legitimize political actions and social practices. Once a narrative has emerged and its world view is supported by broad societal groups, it can provide powerful momentum to trigger innovation and changes in the course of action. Narratives, however, are not necessarily based on evidence and precise categories; they can instead be vague and ambiguous in order to be acceptable and attractive to different actors. Yet the more open and inclusive a narrative is, the less impact can be expected. We investigate whether there is a shared narrative in research for the sustainable economy and how it can be evaluated in terms of its potential societal impact. The paper carves out the visions for the future underlying the research projects conducted within the German Federal Ministry of Education and Research (BMBF) funding programme "The Sustainable Economy". It then analyzes whether these visions are compatible with narratives dominating the societal discourse on the sustainable economy, and concludes with how the use of visions and narratives in research can contribute to fostering societal transformations.
In this study, we analyze the forecast accuracy and profitability of buy recommendations published in five major German financial magazines for private households based on fundamental analysis. The results show a high average forecast accuracy but with a very high standard deviation, which indicates poor forecast accuracy with regard to individual stocks. The recommendation profitability slightly exceeds the performance of the MSCI World index. Considering the involved risk, which is represented by a high standard deviation, the excess returns appear to be insufficient.
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean–Vlasov equations as the starting point to derive ensemble Kalman–Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
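The state-augmentation idea behind combined state and parameter estimation can be sketched in a discrete-time toy setting (a hedged illustration, not the paper's McKean-Vlasov-derived continuous-time algorithm; the scalar model, noise levels, and true parameter value are invented): each ensemble member carries a state sample and a parameter sample, and the Kalman update acts on the augmented vector.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, q, r = 0.9, 0.1, 0.2      # dynamics parameter, model / obs noise levels
T, M = 300, 200                   # assimilation steps, ensemble size

# Synthetic truth x_{t+1} = a x_t + noise, observed as y_t = x_t + noise.
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + q * rng.standard_normal()
    ys.append(x + r * rng.standard_normal())

# Augmented ensemble: column 0 holds state samples, column 1 parameter samples.
ens = np.column_stack([rng.standard_normal(M), rng.uniform(0.0, 1.0, M)])
for y in ys:
    # Forecast: each member propagates its state with its own parameter;
    # small jitter keeps the parameter ensemble from collapsing.
    ens[:, 0] = ens[:, 1] * ens[:, 0] + q * rng.standard_normal(M)
    ens[:, 1] += 0.01 * rng.standard_normal(M)
    # Analysis: Kalman update with ensemble covariance, perturbed observations.
    C = np.cov(ens, rowvar=False)             # 2x2 augmented covariance
    gain = C[:, 0] / (C[0, 0] + r**2)         # gain w.r.t. the observed state
    innov = y + r * rng.standard_normal(M) - ens[:, 0]
    ens += gain[None, :] * innov[:, None]
print("estimated parameter:", ens[:, 1].mean())
```

The parameter estimate is pulled toward the true value because the forecast step builds correlation between parameter and state errors, which the gain then exploits.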
This dissertation investigates the impact of the economic and fiscal crisis starting in 2008 on EU climate policy-making. While the overall number of adopted greenhouse gas emission reduction policies declined in the aftermath of the crisis, EU lawmakers decided to introduce new or tighten existing regulations in some important policy domains. Existing knowledge about the impact of the crisis on EU legislative decision-making cannot explain these inconsistencies. In response, this study develops an actor-centred conceptual framework based on rational choice institutionalism that provides a micro-level link to explain how economic crises translate into altered policy-making patterns. The core theoretical argument centres on redistributive conflicts: tensions between ‘beneficiaries’ and ‘losers’ of a regulatory initiative intensify during economic crises and spill over into the policy domain. To test this hypothesis, the study uses social network analysis to examine policy processes in three case studies: the introduction of carbon dioxide emission limits for passenger cars, the expansion of the EU Emissions Trading System to aviation, and the introduction of a regulatory framework for biofuels. The key finding is that an economic shock causes EU policy domains to polarise politically, resulting in intensified conflict and more difficult decision-making. The results also show that this political polarisation is rooted in the industry subject to the regulation, and that intergovernmental bargaining among member states becomes more important, but also more difficult, in times of crisis.
Lake sediments are increasingly explored as reliable paleoflood archives. In addition to established flood proxies such as detrital layer thickness, chemical composition, and grain size, we explore stable oxygen and carbon isotope data as paleoflood proxies for lakes in catchments with carbonate bedrock geology. In a case study from Lake Mondsee (Austria), we integrate high-resolution sediment trapping at a proximal and a distal location with stable isotope analyses of varved lake sediments to investigate flood-triggered detrital sediment flux. First, we demonstrate a relation between runoff, detrital sediment flux, and isotope values in the sediment trap record covering the period 2011-2013 CE, which includes 22 events with daily (hourly) peak runoff ranging from 10 (24) m³ s⁻¹ to 79 (110) m³ s⁻¹. The three- to ten-fold lower flood-triggered detrital sediment deposition in the distal trap is well reflected by attenuated peaks in the stable isotope values of the trapped sediments. Next, we show that all nine flood-triggered detrital layers deposited in a sediment record from 1988 to 2013 have elevated isotope values compared with endogenic calcite. In addition, even two runoff events that did not cause the deposition of visible detrital layers are distinguished by higher isotope values. Empirical thresholds in the isotope data allow estimation of the magnitudes of most floods, although in some cases flood magnitudes are overestimated because local effects can result in excessively high isotope values. Hence we present a proof of concept for stable isotopes as a reliable tool for reconstructing flood frequency and, with some limitations, even flood magnitudes.
The development of phonological awareness, the knowledge of the structural combinatoriality of a language, has been widely investigated in relation to reading (dis)ability across languages. However, the extent to which knowledge of phonemic units may interact with spoken language organization in (transparent) alphabetical languages has hardly been investigated. The present study examined whether phonemic awareness correlates with coarticulation degree, commonly used as a metric for estimating the size of children’s production units. A speech production task was designed to test for developmental differences in intra-syllabic coarticulation degree in 41 German children from 4 to 7 years of age. The technique of ultrasound imaging allowed for comparing the articulatory foundations of children’s coarticulatory patterns. Four behavioral tasks assessing various levels of phonological awareness from large to small units and expressive vocabulary were also administered. Generalized additive modeling revealed strong interactions between children’s vocabulary and phonological awareness with coarticulatory patterns. Greater knowledge of sub-lexical units was associated with lower intra-syllabic coarticulation degree and greater differentiation of articulatory gestures for individual segments. This interaction was mostly nonlinear: an increase in children’s phonological proficiency was not systematically associated with an equivalent change in coarticulation degree. Similar findings were drawn between vocabulary and coarticulatory patterns. Overall, results suggest that the process of developing spoken language fluency involves dynamical interactions between cognitive and speech motor domains. Arguments for an integrated-interactive approach to skill development are discussed.
Splits and Birds
(2019)
Binaries play an important role in observational and theoretical astrophysics. Since mass and chemical composition are key ingredients for stellar evolution, high-resolution spectroscopy is an important and necessary tool for deriving those parameters with high confidence in binaries. This involves carefully measuring the orbital motion through the determination of radial velocity (RV) shifts, as well as sophisticated techniques to derive the abundances of elements in the stellar atmosphere.
A technique superior to conventional cross-correlation methods for determining RV shifts is known as spectral disentangling. Hence, a major task of this thesis was the design of a sophisticated software package for this approach. Since secondary effects such as flux and line-profile variations imprint changes on the spectrum, understanding the behavior of spectral disentangling under such variability is key to interpreting the derived values, improving them, and obtaining information about the variability itself. Therefore, the spectral disentangling code presented in this thesis and made available to the community combines multiple advantages: separation of the spectra for detailed chemical analysis, derivation of orbital elements, derivation of individual RVs in order to investigate distorted systems (either by third-body interaction or relativistic effects), suppression of telluric contamination, derivation of variability, and applicability to eclipsing binaries (important for the orbital inclination) or, in general, to systems that undergo flux variations.
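As background to the cross-correlation baseline that disentangling improves upon, a minimal RV estimator can be sketched as follows. This is a generic illustration, not part of the thesis software: the function name, wavelength grid, and line parameters are invented.

```python
import numpy as np

C = 299_792.458  # speed of light in km/s

def rv_shift_ccf(wavelength, observed, template):
    """Estimate the radial-velocity shift of `observed` relative to
    `template` by cross-correlation in log-wavelength space, where a
    Doppler shift becomes a simple translation."""
    # Resample both spectra onto a common logarithmic wavelength grid.
    log_wl = np.linspace(np.log(wavelength[0]), np.log(wavelength[-1]),
                         wavelength.size)
    obs = np.interp(log_wl, np.log(wavelength), observed)
    tmp = np.interp(log_wl, np.log(wavelength), template)
    # Cross-correlate the mean-subtracted spectra; the peak lag is the
    # shift of `obs` relative to `tmp` in log-wavelength bins.
    ccf = np.correlate(obs - obs.mean(), tmp - tmp.mean(), mode="full")
    lag = np.argmax(ccf) - (tmp.size - 1)
    dlog = log_wl[1] - log_wl[0]
    # A lag of n bins corresponds to a velocity of c * (e^(n*dlog) - 1).
    return C * (np.exp(lag * dlog) - 1.0)
```

The resolution of this estimator is limited to one grid bin, one of the reasons more sophisticated approaches such as spectral disentangling are preferred for composite spectra.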
This code, in combination with the spectral synthesis codes MOOG and SME, was used to derive the carbon 12C/13C isotope ratio (CIR) of the benchmark binary Capella. The observational result is set into the context of theoretical evolution by means of MESA models and resolves the discrepancy between theory and observation that has existed since the first measurement of Capella's CIR in 1976.
The spectral disentangling code has been made available to the community, and its applicability to very differently behaving systems, Wolf-Rayet stars, has also been investigated, resulting in a published article.
Additionally, since this technique relies strongly on data quality, the continued development of scientific instruments to obtain the best possible observational data is of great importance in observational astrophysics. For this reason, part of the work on this thesis was also devoted to astronomical instrumentation.
Speaking the unspeakable
(2019)
This article discusses the filmic representation of the infamous Wannsee Conference, when fifteen senior German officials met at a villa on the shore of a Berlin lake to discuss and co-ordinate the implementation of the so-called final solution to the Jewish question. The understanding reached during the course of the ninety-minute meeting cleared the way for the Europe-wide killing of six million Jews. The article sets out to answer the principal challenge facing anyone attempting to recreate the Wannsee Conference on film: what was the atmosphere of this conference and the attitude of the participants? Moreover, it discusses various ethical aspects related to the portrayal of evil, not in actions but in words, using the medium of film. In doing so, it focuses on the BBC/HBO television film Conspiracy (2001), directed by Frank Pierson, probing its historical accuracy and discussing its artistic credibility.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for the implementation and automation of the processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
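For small, bounded nets, classical workflow-net soundness can be checked by explicit exploration of the state space. The following is a generic sketch of that idea under a simplified encoding (markings as place-to-count dictionaries, transitions as pre/post token maps); it is illustrative and not the API of any particular verification tool.

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition (pre/post are dicts place -> token count);
    return the successor marking, or None if the transition is not enabled."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def can_reach(start, goal, transitions):
    """Breadth-first search for `goal` from `start`."""
    key = lambda m: frozenset(m.items())
    seen, queue = {key(start)}, deque([start])
    while queue:
        m = queue.popleft()
        if m == goal:
            return True
        for pre, post in transitions.values():
            succ = fire(m, pre, post)
            if succ is not None and key(succ) not in seen:
                seen.add(key(succ))
                queue.append(succ)
    return False

def is_sound(transitions, source="i", sink="o"):
    """Classical soundness: option to complete, proper completion, and
    no dead transitions. Only feasible for small, bounded nets, since
    the reachability graph is enumerated explicitly."""
    key = lambda m: frozenset(m.items())
    start, final = {source: 1}, {sink: 1}
    seen, queue, fired = {key(start): start}, deque([start]), set()
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            succ = fire(m, pre, post)
            if succ is None:
                continue
            fired.add(name)
            if key(succ) not in seen:
                seen[key(succ)] = succ
                queue.append(succ)
    # Proper completion: any marking covering the sink is exactly {sink: 1}.
    if any(m.get(sink, 0) >= 1 and m != final for m in seen.values()):
        return False
    # Option to complete: the final marking is reachable from every state.
    if any(not can_reach(m, final, transitions) for m in seen.values()):
        return False
    # No dead transitions.
    return fired == set(transitions)
```

A net such as i → t1 → p → t2 → o passes all three conditions, while a net whose second transition also requires a token on an unmarked place deadlocks and fails the check.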
However, all of those notions consider only the control-flow structure of a process model. This poses a problem given that, with the recent release and the ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by corresponding decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different kinds of models: process models and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models but additionally take complementary decision models into account. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are the standardized means of DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
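The idea of determining a decision table's possible output values can be illustrated with a minimal sketch. The encoding below (rule conditions as closed numeric intervals, outputs as labels) and all names and values are invented for illustration; they are not the thesis's actual algorithm or DMN's serialization format.

```python
# Hypothetical minimal model of a DMN-style decision table: each rule
# is a list of closed numeric intervals (one per input column) plus an
# output value. A rule is satisfiable iff every interval is non-empty,
# and the table's possible outputs are the outputs of its satisfiable
# rules -- the set a decision-aware soundness check could compare
# against the outgoing branches of a process model's gateway.

INF = float("inf")

def rule_satisfiable(intervals):
    """An interval (lo, hi) is non-empty iff lo <= hi."""
    return all(lo <= hi for lo, hi in intervals)

def possible_outputs(rules):
    """rules: list of (intervals, output) pairs."""
    return {out for intervals, out in rules if rule_satisfiable(intervals)}

# Illustrative risk-rating table over two numeric inputs.
RULES = [
    ([(0, 24), (1001, INF)], "high"),    # age <= 24 and amount > 1000
    ([(25, INF), (1001, INF)], "medium"),
    ([(0, INF), (0, 1000)], "low"),
    ([(30, 20), (0, 1000)], "dead"),     # contradictory condition: never fires
]
```

In this toy table the fourth rule's empty interval makes it unsatisfiable, so "dead" is correctly excluded from the set of possible outputs.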
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.