In this paper, the Lie group method in combination with the Magnus expansion is utilized to develop a universal method for solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular (and some singular) SLPs of even orders (tested up to order eight) with a mix of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently. The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be solved effectively. The inverse SLP algorithm proposed by Barcilon (1974) is utilized in combination with the Magnus method so that a direct SLP of any (even) order and an inverse SLP of order two can be solved effectively.
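To make the shooting idea concrete, here is a minimal Python sketch for a second-order problem -y'' + q(x)y = λy on [0, 1] with Dirichlet conditions, using a first-order Magnus (exponential midpoint) integrator; the grid size, sample potential and bisection bracket are illustrative choices, not the paper's actual algorithm.

```python
import numpy as np
from scipy.linalg import expm

def magnus_propagate(lam, q, n=400):
    """Propagate y' = A(x) y across [0, 1] with the first-order Magnus
    (exponential midpoint) integrator, starting from y(0)=0, y'(0)=1."""
    h = 1.0 / n
    y = np.array([0.0, 1.0])          # (y, y') at x = 0
    for i in range(n):
        xm = (i + 0.5) * h            # midpoint of the step
        A = np.array([[0.0, 1.0],
                      [q(xm) - lam, 0.0]])
        y = expm(h * A) @ y           # exact exponential of the frozen system
    return y[0]                       # y(1); an eigenvalue makes this vanish

def eigenvalue(q, bracket, tol=1e-10):
    """Bisection on the boundary miss distance y(1; lambda)."""
    a, b = bracket
    fa = magnus_propagate(a, q)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * magnus_propagate(m, q) <= 0:
            b = m
        else:
            a, fa = m, magnus_propagate(m, q)
    return 0.5 * (a + b)

# Example: q = 0 gives the exact eigenvalues (k*pi)^2 = 9.8696, 39.478, ...
print(eigenvalue(lambda x: 0.0, bracket=(5.0, 15.0)))   # ~ pi**2
```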
Precision agriculture (PA) strongly relies on spatially differentiated sensor information. Handheld instruments based on laser-induced breakdown spectroscopy (LIBS) are a promising sensor technique for the in-field determination of various soil parameters. In this work, the potential of handheld LIBS for the determination of the total mass fractions of the major nutrients Ca, K, Mg, N, P and the trace nutrients Mn, Fe was evaluated. Additionally, other soil parameters, such as humus content, soil pH value and plant-available P content, were determined. Since the quantification of nutrients by LIBS depends strongly on the soil matrix, various multivariate regression methods were used for calibration and prediction. These include partial least squares regression (PLSR), least absolute shrinkage and selection operator regression (Lasso), and Gaussian process regression (GPR). The best prediction results were obtained for Ca, K, Mg and Fe. The coefficients of determination obtained for the other nutrients were smaller: Mn occurs at much lower concentrations, while the small number of emission lines and their very weak intensities account for the deviations of N and P. Soil parameters that are not directly related to a single element, such as pH, could also be predicted. Lasso and GPR yielded slightly better results than PLSR. Additionally, several methods of data pretreatment were investigated.
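The calibration comparison can be sketched as follows with scikit-learn, assuming spectra are rows of a matrix X and the reference mass fractions form a vector y; the synthetic data, preprocessing and hyperparameters are placeholders, not those of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Lasso
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((60, 500))                       # stand-in for LIBS spectra
y = X[:, 100] * 3.0 + rng.normal(0, 0.1, 60)    # stand-in for, e.g., Ca mass fraction

models = {
    "PLSR":  make_pipeline(StandardScaler(), PLSRegression(n_components=8)),
    "Lasso": make_pipeline(StandardScaler(), Lasso(alpha=0.01, max_iter=50_000)),
    "GPR":   make_pipeline(StandardScaler(),
                           GaussianProcessRegressor(kernel=RBF() + WhiteKernel())),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {r2.mean():.2f} +/- {r2.std():.2f}")
```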
A standard approach to studying time-dependent stochastic processes is the power spectral density (PSD), an ensemble-averaged property defined as the Fourier transform of the autocorrelation function of the process in the asymptotic limit of long observation times, T → ∞. In many experimental situations one is able to garner only relatively few stochastic time series of finite T, such that practically neither an ensemble average nor the asymptotic limit T → ∞ can be achieved. To accommodate a meaningful analysis of such finite-length data, we here develop the framework of single-trajectory spectral analysis for one of the standard models of anomalous diffusion, scaled Brownian motion. We demonstrate that the frequency dependence of the single-trajectory PSD is exactly the same as for standard Brownian motion, which may lead one to the erroneous conclusion that the observed motion is normal-diffusive. However, a distinctive feature is shown to be provided by the explicit dependence on the measurement time T, and this ageing phenomenon can be used to deduce the anomalous diffusion exponent. We also compare our results to the single-trajectory PSD behaviour of another standard anomalous diffusion process, fractional Brownian motion, and work out the commonalities and differences. Our results represent an important step in establishing single-trajectory PSDs as an alternative (or complement) to analyses based on the time-averaged mean squared displacement.
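A minimal sketch of the quantities involved, assuming scaled Brownian motion is generated from independent Gaussian increments with variance 2D((t+dt)^α − t^α) and the single-trajectory PSD is taken as |X(f)|²/T, with X(f) the finite-time Fourier transform computed via the FFT; all parameters are illustrative.

```python
import numpy as np

def scaled_bm(alpha, T=100.0, n=2**14, D=1.0, rng=np.random.default_rng(1)):
    """Scaled Brownian motion: independent increments with
    Var[x(t+dt) - x(t)] = 2*D*((t+dt)**alpha - t**alpha)."""
    t = np.linspace(0.0, T, n + 1)
    var = 2.0 * D * np.diff(t**alpha)
    return t[1:], np.cumsum(rng.normal(0.0, np.sqrt(var)))

def single_traj_psd(t, x):
    """S(f, T) = |integral of x(t) exp(-2 pi i f t) dt|^2 / T via the FFT."""
    dt = t[1] - t[0]
    T = t[-1]
    xf = np.fft.rfft(x) * dt
    f = np.fft.rfftfreq(len(x), dt)
    return f[1:], np.abs(xf[1:])**2 / T

# The ~1/f^2 shape is the same as for Brownian motion (alpha = 1);
# only the T-dependent amplitude betrays the anomalous exponent.
t, x = scaled_bm(alpha=0.5)
f, S = single_traj_psd(t, x)
slope = np.polyfit(np.log(f[5:500]), np.log(S[5:500]), 1)[0]
print(f"log-log slope ~ {slope:.1f} (expected ~ -2)")
```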
Background
Semi-natural plant communities such as field boundaries play an important ecological role in agricultural landscapes, e.g., by providing refuge for plants and other species, supporting food webs, and connecting habitats. To prevent undesired effects of herbicide applications on these communities and their structure, herbicide registration and application are regulated by risk assessment schemes in many industrialized countries. Standardized individual-level greenhouse experiments are conducted on a selection of crop and wild plant species to characterize the effects on non-target plants of the herbicide loads potentially reaching off-field areas. Uncertainties regarding the protectiveness of such approaches to risk assessment might be addressed by assessment factors, which are often under discussion. As an alternative approach, plant community models can be used to predict potential effects on plant communities of interest by extrapolating the individual-level effects measured in the standardized greenhouse experiments. In this study, we analyzed the reliability and adequacy of the plant community model IBC-grass (individual-based plant community model for grasslands) by comparing model predictions with empirically measured effects at the plant community level.
Results
We showed that the effects predicted by the model IBC-grass were in accordance with the empirical data. Based on the species-specific dose responses (calculated from empirical effects in monocultures measured 4 weeks after application), the model was able to realistically predict short-term herbicide impacts on communities when compared to empirical data.
Conclusion
The results presented in this study demonstrate how the current standard greenhouse experiments, which measure herbicide impacts at the individual level, can be coupled with the model IBC-grass to estimate effects at the plant community level. In this way, it can be used as a tool in ecological risk assessment.
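As a rough illustration of the coupling step, the sketch below fits a two-parameter log-logistic dose-response curve to hypothetical monoculture data of the kind described; the dose levels, effect values and the specific model form are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ec50, slope):
    """Two-parameter log-logistic dose-response: fraction of control biomass."""
    return 1.0 / (1.0 + (dose / ec50) ** slope)

# Hypothetical monoculture data: herbicide dose (fraction of the field rate)
# versus biomass relative to the untreated control, 4 weeks after application.
dose   = np.array([0.0125, 0.025, 0.05, 0.1, 0.2, 0.4])
effect = np.array([0.95,   0.88,  0.71, 0.49, 0.28, 0.12])

(ec50, slope), _ = curve_fit(log_logistic, dose, effect, p0=[0.1, 1.0])
print(f"EC50 = {ec50:.3f} field rates, slope = {slope:.2f}")
# A community model could then scale individual growth by log_logistic(drift_dose, ...)
```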
Simulating the impact of herbicide drift exposure on non-target terrestrial plant communities
(2019)
In Europe, almost half of the terrestrial landscape is used for agriculture. Thus, semi-natural habitats such as field margins are substantial for maintaining diversity in intensively managed farmlands. However, plants located at field margins are threatened by agricultural practices such as the application of pesticides within the fields. Pesticides are chemicals developed to control undesired species within agricultural fields in order to enhance yields. Their use, however, also affects non-target organisms within and outside of the agricultural fields, i.e., organisms that are not intended to be sprayed or controlled. For example, plants occurring in field margins are not intended to be sprayed but can nevertheless be impaired by herbicide drift exposure. The authorization of plant protection products such as herbicides requires risk assessments to ensure that the application of the product has no unacceptable effects on the environment. For non-target terrestrial plants (NTTPs), the risk assessment is based on standardized greenhouse studies at the plant individual level. To account for the protection of plant populations and communities under realistic field conditions, i.e., to extrapolate from greenhouse studies to field conditions and from the individual level to the community level, assessment factors are applied. However, recent studies question whether the current risk assessment scheme meets the specific protection goals for non-target terrestrial plants as suggested by the European Food Safety Authority (EFSA). There is a need to clarify the gaps of the current risk assessment and to include suitable higher-tier options in the upcoming guidance document for non-target terrestrial plants.
In my thesis, I studied the impact of herbicide drift exposure on NTTP communities using a mechanistic modelling approach. I addressed the main gaps and uncertainties of the current risk assessment and finally suggested this modelling approach as a novel higher-tier option in future risk assessments. Specifically, I extended the plant community model IBC-grass (individual-based community model for grasslands) to reflect herbicide impacts on plant individuals. In the first study, I compared model predictions of short-term herbicide impacts on artificial plant communities with empirical data and demonstrated the capability of the model to realistically reflect herbicide impacts. In the second study, I addressed the research question of whether reproductive endpoints need to be included in future risk assessments to protect plant populations and communities. I compared the consequences of theoretical herbicide impacts on different plant attributes for long-term plant population dynamics in the community context. I concluded that reproductive endpoints only need to be considered if the herbicide effect is assumed to be very high; the endpoints measured in the current vegetative vigour and seedling emergence studies had strong impacts on the dynamics of plant populations and communities already at lower effect intensities. Finally, the third study analysed the long-term impacts of herbicide application on three different plant communities, highlighting the suitability of the modelling approach for simulating different communities and thus detecting sensitive environmental conditions.
Overall, my thesis demonstrates the suitability of mechanistic modelling approaches to be used as higher tier options for risk assessments. Specifically, IBC-grass can incorporate available individual-level effect data of standardized greenhouse experiments to extrapolate to community-level under various environmental conditions. Thus, future risk assessments can be improved by detecting sensitive scenarios and including worst-case impacts on non-target plant communities.
The aim of this study is to monitor short-term seasonal development of young Olympic weightlifters’ anthropometry, body composition, physical fitness, and sport-specific performance. Fifteen male weightlifters aged 13.2 ± 1.3 years participated in this study. Tests for the assessment of anthropometry (e.g., body-height, body-mass), body-composition (e.g., lean-body-mass, relative fat-mass), muscle strength (grip-strength), jump performance (drop-jump (DJ) height, countermovement-jump (CMJ) height, DJ contact time, DJ reactive-strength-index (RSI)), dynamic balance (Y-balance-test), and sport-specific performance (i.e., snatch and clean-and-jerk) were conducted at different time-points (i.e., T1 (baseline), T2 (9 weeks), T3 (20 weeks)). Strength tests (i.e., grip strength, clean-and-jerk and snatch) and training volume were normalized to body mass. Results showed small-to-large increases in body-height, body-mass, lean-body-mass, and lower-limbs lean-mass from T1-to-T2 and T2-to-T3 (∆0.7–6.7%; 0.1 ≤ d ≤ 1.2). For fat-mass, a significant small-sized decrease was found from T1-to-T2 (∆13.1%; d = 0.4) and a significant increase from T2-to-T3 (∆9.1%; d = 0.3). A significant main effect of time was observed for DJ contact time (d = 1.3) with a trend toward a significant decrease from T1-to-T2 (∆–15.3%; d = 0.66; p = 0.06). For RSI, significant small increases from T1-to-T2 (∆9.9%, d = 0.5) were noted. Additionally, a significant main effect of time was found for snatch (d = 2.7) and clean-and-jerk (d = 3.1) with significant small-to-moderate increases for both tests from T1-to-T2 and T2-to-T3 (∆4.6–11.3%, d = 0.33 to 0.64). The other tests did not change significantly over time (0.1 ≤ d ≤ 0.8). Results showed significantly higher training volume for sport-specific training during the second period compared with the first period (d = 2.2). Five months of Olympic weightlifting contributed to significant changes in anthropometry, body-composition, and sport-specific performance. However, hardly any significant gains were observed for measures of physical fitness. Coaches are advised to design training programs that target a variety of fitness components to lay an appropriate foundation for later performance as an elite athlete.
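For reference, the reported percentage changes and Cohen's d effect sizes can be computed as in the following sketch, using one common convention (pooled SD of the two time points); the input numbers are hypothetical.

```python
import numpy as np

def paired_change(pre, post):
    """Percentage change and Cohen's d for a pre/post comparison,
    using the pooled SD of the two time points."""
    delta = 100.0 * (post.mean() - pre.mean()) / pre.mean()
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2.0)
    d = (post.mean() - pre.mean()) / pooled_sd
    return delta, d

# Hypothetical snatch results (kg per kg body mass) for n = 15 athletes
rng = np.random.default_rng(2)
t1 = rng.normal(0.80, 0.10, 15)
t2 = t1 + rng.normal(0.05, 0.03, 15)   # small improvement from T1 to T2
delta, d = paired_change(t1, t2)
print(f"change = {delta:.1f}%, Cohen's d = {d:.2f}")
```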
The growing global demand for meat is being thwarted by shrinking agricultural areas and runs counter to efforts to mitigate methane emissions and to improve public health. Cultured meat could contribute to solving these problems, but will such meat be marketable, competitive, and accepted? Using the Delphi method, this study explored the potential development of cultured meat by 2027. Despite the acknowledged urgency to develop sustainable meat alternatives, participants doubt that challenges regarding mass production, production costs, and consumer acceptance will be overcome by 2027. Considering the noticeable impacts of global warming, further research and development as well as a change in consumer perceptions are inevitable.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms: users' conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding-based ones, finding that dictionary-based systems generally yield better polarity lists than the other two groups. Apart from this, I propose a linear projection algorithm whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to the automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of the CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning-, and deep-learning-based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention. Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units (EDUs) of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and a Recursive Dirichlet Process.
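As a toy illustration of the lexicon-based paradigm compared in task (iii), the following sketch scores a message by summing lexicon polarities and flipping them after a negator; the lexicon entries and the one-token negation scope are invented and far simpler than the thesis' systems.

```python
# Toy message-level polarity classifier in the lexicon-based paradigm:
# sum the polarity scores of matched terms, flipping them after a negator.
LEXICON = {"gut": 1.0, "super": 1.5, "schlecht": -1.0, "furchtbar": -1.5}
NEGATORS = {"nicht", "kein"}

def polarity(tweet: str) -> str:
    score, flip = 0.0, 1.0
    for token in tweet.lower().split():
        if token in NEGATORS:
            flip = -1.0                    # contextual modifier: negation
            continue
        score += flip * LEXICON.get(token, 0.0)
        flip = 1.0                         # negation scope: one token
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("das essen war nicht gut"))   # -> negative
```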
Hantavirus assembly and budding are governed by the surface glycoproteins Gn and Gc. In this study, we investigated the glycoproteins of Puumala virus (PUUV), the most abundant hantavirus species in Europe, using fluorescently labeled wild-type constructs and cytoplasmic tail (CT) mutants. We analyzed their intracellular distribution, co-localization and oligomerization, applying comprehensive live, single-cell fluorescence techniques, including confocal microscopy, imaging flow cytometry, anisotropy imaging and Number&Brightness analysis. We demonstrate that Gc is significantly enriched in the Golgi apparatus in the absence of other viral components, while Gn is mainly restricted to the endoplasmic reticulum (ER). Importantly, upon co-expression both glycoproteins were found in the Golgi apparatus. Furthermore, we show that an intact CT of Gc is necessary for efficient Golgi localization, while the CT of Gn influences protein stability. Finally, we found that Gn assembles into higher-order homo-oligomers, mainly dimers and tetramers, in the ER, while Gc was present as a mixture of monomers and dimers within the Golgi apparatus. Our findings suggest that PUUV Gc is the driving factor of the targeting of Gc and Gn to the Golgi region, while Gn possesses a significantly stronger self-association potential.
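Number&Brightness analysis rests on simple moment estimators (apparent brightness B = σ²/⟨I⟩ and apparent number N = ⟨I⟩²/σ²); a minimal sketch on a synthetic pixel time series follows, omitting the detector offset and readout corrections a real analysis requires.

```python
import numpy as np

def number_and_brightness(stack):
    """Apparent number N and brightness B per pixel from an image time
    series of shape (frames, ny, nx): B = var/mean, N = mean^2/var.
    Detector offset/readout corrections are omitted in this sketch."""
    mean = stack.mean(axis=0)
    var = stack.var(axis=0, ddof=1)
    return mean**2 / var, var / mean

# Synthetic check: Poisson counts with mean 20 in every pixel
rng = np.random.default_rng(3)
stack = rng.poisson(20, size=(2000, 8, 8)).astype(float)
N, B = number_and_brightness(stack)
print(B.mean())   # ~1 for pure shot noise; B > 1 indicates oligomerization
```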
This paper focuses on one particular issue which has arisen in the course of the ongoing debate on the reform of investor-State dispute settlement (ISDS), namely that of the appointment of arbitrators. Taking as its starting point that there now exists tentative consensus that the present system for the appointment of arbitrators either causes or exacerbates certain problematic aspects of the current ISDS system, the paper explores one option for reform, namely the introduction of an independent panel for the selection of investment arbitrators. In doing so, it is argued that a shift in the normative basis of the rules governing appointments is required in order to accommodate the principles of party autonomy and the international rule of law. Such reform, while not completely removing the initiative that parties presently enjoy, is the most efficient way to introduce rule of law considerations such as a measure of judicial independence into the current appointments system. This, it is argued, would in turn help to address some of the problematic features of the appointment of arbitrators in ISDS.
A growing demand for natural resources, embedded in current changes of the international order, will put pressure on states to secure the future availability of these resources. Some political discourses suggest that states might respond by challenging the foundations of international law. Whereas the UN Charter was inter alia aimed at eliminating uses of force for economic reasons, one may observe an ongoing trend of securitization of matters of resource supply, resulting in the revival of self-preservation doctrines. The chapter will show that those claims lack a normative foundation in the current framework of the prohibition of the use of force. Moreover, international law has sufficient instruments to cope with disputes over access to resources by means other than the use of force. The international community, therefore, must oppose claims that may contribute to normative uncertainties and strengthen the already existing instruments of pacific settlement of disputes.
The semiarid northeast of Brazil is one of the most densely populated dryland regions in the world and recurrently affected by severe droughts. Thus, reliable seasonal forecasts of streamflow and reservoir storage are of high value for water managers. Such forecasts can be generated by applying either hydrological models representing underlying processes or statistical relationships exploiting correlations among meteorological and hydrological variables. This work evaluates and compares the performances of seasonal reservoir storage forecasts derived by a process-based hydrological model and a statistical approach.
Driven by observations, both models achieve similar simulation accuracies. In a hindcast experiment, however, the accuracy of estimating regional reservoir storage was considerably lower for the process-based hydrological model, whereas the resolution and reliability of drought event predictions were similar for both approaches. Further investigations into the deficiencies of the process-based model revealed a significant influence of antecedent wetness conditions and a higher sensitivity of model prediction performance to rainfall forecast quality.
Within the scope of this study, the statistical model proved to be the more straightforward approach for predictions of reservoir level and drought events at regionally and monthly aggregated scales. However, for forecasts at finer scales of space and time or for the investigation of underlying processes, the costly initialisation and application of a process-based model can be worthwhile. Furthermore, the application of innovative data products, such as remote sensing data, and operational model correction methods, like data assimilation, may allow for an enhanced exploitation of the advanced capabilities of process-based hydrological models.
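A minimal sketch of what such a statistical forecast can look like, assuming antecedent storage and a seasonal rainfall forecast as the only predictors of next-season storage in a linear regression; the data and the two-predictor setup are invented for illustration and are not the model used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly records: storage (fraction of capacity) and
# rainy-season precipitation forecast for the region.
rng = np.random.default_rng(4)
n = 120
rain = rng.gamma(2.0, 50.0, n)                        # mm per season
storage_now = np.clip(rng.normal(0.5, 0.2, n), 0, 1)
storage_next = np.clip(0.8 * storage_now + 0.002 * rain
                       + rng.normal(0, 0.05, n), 0, 1)

# Antecedent storage plus forecast rainfall as predictors
X = np.column_stack([storage_now, rain])
model = LinearRegression().fit(X[:100], storage_next[:100])
print("R^2 on held-out years:", model.score(X[100:], storage_next[100:]))
```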
Hydrometeorological hazards caused losses of approximately 110 billion U.S. dollars in 2016 worldwide. Current damage estimations do not consider the uncertainties in a comprehensive way, and they are not consistent between spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated against, company damage data from the 2013 flood in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales. Therefore, it can also be used for pre-event analyses and risk assessments. This method takes hydrometeorological damage estimation and risk assessment to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
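The probabilistic object-based idea can be schematized as follows: each object's damage is sampled from an uncertain depth-damage relation, and aggregation to any spatial scale is a sum over objects that propagates the uncertainty; the vulnerability curve and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_damage(depth_m, value_eur, n_samples=10_000):
    """Monte Carlo damage for one object: an uncertain depth-damage ratio
    (Beta-distributed around a simple square-root curve) times asset value."""
    mean_ratio = np.clip(0.3 * np.sqrt(depth_m), 0.01, 0.99)
    a = mean_ratio * 10.0
    b = (1.0 - mean_ratio) * 10.0
    return value_eur * rng.beta(a, b, n_samples)

# Three hypothetical companies hit by the flood; aggregation to any scale
# is just the sum of the per-object samples, so uncertainty propagates.
objects = [(0.5, 2e5), (1.2, 5e5), (2.0, 1e6)]        # (water depth, value)
total = sum(sample_damage(d, v) for d, v in objects)  # elementwise over samples
print(f"median {np.median(total):,.0f} EUR, "
      f"90% interval [{np.percentile(total, 5):,.0f}, {np.percentile(total, 95):,.0f}]")
```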
This paper challenges the solely rational view of the scenario technique as a strategy and foresight tool designed to cope with uncertainty by considering multiple possible future states. The paper employs an affordance-based view that allows for the identification and structuring of hidden, emergent attributes of the scenario technique beyond the intended ones. The suggested framework distinguishes between affordances (1) that are intended by the organization and relate to its goals, (2) that emergently generate organizational benefits, and (3) that do not relate to organizational but individual interests. Also, constraints in the use of scenarios are discussed. Affordance theory’s specific lens shows that the emergence of such attributes depends on the users’ specific intentions.
Excellent conversion efficiencies of over 20% and facile cell production have placed hybrid perovskites at the forefront of novel solar cell materials, with CH3NH3PbI3 being an archetypal compound. The question why CH3NH3PbI3 has such extraordinary characteristics, particularly a very efficient power conversion from absorbed light to electrical power, is hotly debated, with ferroelectricity being a promising candidate. This does, however, require the crystal structure to be non-centrosymmetric and we herein present crystallographic evidence as to how the symmetry breaking occurs on a crystallographic and, therefore, long-range level. Although the molecular cation CH3NH3+ is intrinsically polar, it is heavily disordered and this cannot be the sole reason for the ferroelectricity. We show that it, nonetheless, plays an important role, as it distorts the neighboring iodide positions from their centrosymmetric positions.
Role of GDF15 in active lifestyle induced metabolic adaptations and acute exercise response in mice
(2019)
Physical activity is an important contributor to muscle adaptation and metabolic health. Growth differentiation factor 15 (GDF15) is established as a cellular and nutritional stress-induced cytokine, but its physiological role in response to an active lifestyle or acute exercise is unknown. Here, we investigated the metabolic phenotype and circulating GDF15 levels in lean and obese male C57BL/6J mice with a long-term voluntary wheel running (VWR) intervention. Additionally, treadmill running capacity and exercise-induced muscle gene expression were examined in GDF15-ablated mice. Mimicking an active lifestyle via VWR improved treadmill running performance and, in obese mice, also the metabolic phenotype. The post-exercise induction of skeletal muscle transcriptional stress markers was reduced by VWR. Skeletal muscle GDF15 gene expression was very low and only transiently increased post-exercise in sedentary but not in active mice. Plasma GDF15 levels were only marginally affected by chronic or acute exercise. In obese mice, VWR reduced GDF15 gene expression in different tissues but did not reverse elevated plasma GDF15. Genetic ablation of GDF15 had no effect on exercise performance but augmented the post-exercise expression of transcriptional exercise stress markers (Atf3, Atf6, and Xbp1s) in skeletal muscle. We conclude that skeletal muscle does not contribute to circulating GDF15 in mice, but muscle GDF15 might play a protective role in the exercise stress response.
Infanticide, the killing of unrelated young, is widespread and frequently driven by sexual conflict. Especially in mammals with exclusive maternal care, infanticide by males is common and females suffer fitness costs. Recognizing infanticide risk and adjusting offspring protection accordingly should therefore be adaptive in female mammals. Using a small mammal (Myodes glareolus) in outdoor enclosures, we investigated whether lactating mothers adjust offspring protection, and potential mate search behaviour, in response to different infanticide risk levels. We presented the scent of the litter's sire or of a stranger male near the female's nest, and observed female nest presence and movement by radiotracking. While both scents simulated a mating opportunity, they represented lower (sire) and higher (stranger) infanticide risk. Compared to the sire treatment, females in the stranger treatment left their nest more often, showed increased activity and stayed closer to the nest, suggesting offspring protection from outside the nest through elevated alertness and vigilance. Females with larger litters spent more time investigating scents and used more space in the sire but not in the stranger treatment. Thus, current investment size affected odour inspection and resource acquisition under higher risk. Adjusting nest protection and resource acquisition to infanticide risk could allow mothers to elicit appropriate (fitness-saving) counterstrategies, and thus, may be widespread.
This paper assesses the rise and decline of the international rule of law in the case of non-state armed actors, where signs of both can be observed. Signs of rise include the expansion of the coverage of international humanitarian law (IHL) and international criminal law, as well as international legal argumentation and rhetoric employed by non-state armed groups. Some non-state armed actors declare in public statements or bilateral agreements with international actors that they are governed by IHL, partly acknowledging the universality of international humanitarian norms, and sometimes act accordingly. Signs of decline, although some of them can be seen as business as usual, include the privileging of military advantage, the instrumental use of international law (as justification and through local interpretations), and conflicting understandings of IHL between local and global norms. The multiplicity of non-state actors also portends the decline of the international rule of law, with the proliferation of many non-organized groups without legitimacy-seeking motivations.
On a smooth complete Riemannian spin manifold with smooth compact boundary, we demonstrate that the Atiyah-Singer Dirac operator depends Riesz continuously on perturbations of local boundary conditions. The Lipschitz bound for this map depends on the Lipschitz smoothness and ellipticity of the boundary conditions, on bounds on the Ricci curvature and its first derivatives, and on a lower bound on the injectivity radius away from a compact neighbourhood of the boundary. More generally, we prove perturbation estimates for functional calculi of elliptic operators on manifolds with local boundary conditions.
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct to this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson's strong negation.
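For orientation, the classical (non-nested) reduct that the paper generalizes can be sketched in a few lines of Python, with strong negation encoded as a '-' prefix on atoms and default negation kept separate in rule bodies; this illustrates the standard definition, not the paper's extension to nested expressions.

```python
# Literals are strings; strong negation is a '-' prefix ("-a"), default
# negation is kept separate in the rule body. A rule is a triple
# (head, positive_body, negated_body), e.g.  -b :- a, not b.

def reduct(program, X):
    """Gelfond-Lifschitz reduct w.r.t. a set of literals X: drop rules
    whose default-negated body intersects X, strip 'not' from the rest."""
    return [(h, pos) for (h, pos, neg) in program if not (set(neg) & X)]

def minimal_model(positive_program):
    """Forward chaining on the (negation-free) reduct."""
    M = set()
    changed = True
    while changed:
        changed = False
        for h, pos in positive_program:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def is_answer_set(program, X):
    M = minimal_model(reduct(program, X))
    consistent = not any(l.lstrip('-') == m.lstrip('-') and l != m
                         for l in M for m in M)   # no atom with both signs
    return M == X and consistent

#   a.        -b :- a, not b.
prog = [("a", (), ()), ("-b", ("a",), ("b",))]
print(is_answer_set(prog, {"a", "-b"}))   # True
```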
The transition from pollinator-mediated outbreeding to selfing has occurred many times in angiosperms. This is generally accompanied by a reduction in traits attracting pollinators, including reduced emission of floral scent. In Capsella, emission of benzaldehyde as a main component of floral scent has been lost in selfing C. rubella by mutation of cinnamate-CoA ligase CNL1. However, the biochemical basis and evolutionary history of this loss remain unknown, as does the reason for the absence of benzaldehyde emission in the independently derived selfer Capsella orientalis. We used plant transformation, in vitro enzyme assays, population genetics and quantitative genetics to address these questions. CNL1 has been inactivated twice independently by point mutations in C. rubella, causing a loss of enzymatic activity. Both inactive haplotypes are found within and outside of Greece, the centre of origin of C. rubella, indicating that they arose before its geographical spread. By contrast, the loss of benzaldehyde emission in C. orientalis is not due to an inactivating mutation in CNL1. CNL1 represents a hotspot for mutations that eliminate benzaldehyde emission, potentially reflecting the limited pleiotropy and large effect of its inactivation. Nevertheless, even closely related species have followed different evolutionary routes in reducing floral scent.
Modern rule of law and post-war constitutionalism are both anchored in rights-based limitations on state authority. Rule-of-law norms and principles, at both domestic and international levels, are designed to protect the freedom and dignity of the person. Given this “thick” conception of the rule of law, authoritarian practices that remove constraints on domestic political leaders and weaken mechanisms for holding them accountable necessarily erode both domestic and international rule of law. Drawing on political science research on authoritarian politics, this study identifies three core elements of authoritarian political strategies: subordination of the judiciary, suppression of independent news media and freedom of expression, and restrictions on the ability of civil society groups to organize and participate in public life. According to available data, each of these three practices has become increasingly common in recent years. This study offers a composite measure of the core authoritarian practices and uses it to identify the countries that have shown the most marked increases in authoritarianism. The spread and deepening of these authoritarian practices in diverse regimes around the world diminishes international rule of law. The conclusion argues that resurgent authoritarianism degrades international rule of law even if this is defined as the specifically post-Cold War international legal order.
Background: The distribution of pronouns varies cross-linguistically, which has led to conflicting results in studies investigating pronoun resolution in agrammatic individuals. Within pronominal resolution, the linguistic phenomenon of "resumption" is understudied in agrammatism; the construction of pronominal resolution in Akan presents the opportunity to examine resumption thoroughly. Aims: First, the present study examines the production of (pronominal) resumption in Akan focus constructions (who-questions and focused declaratives). Second, since Akan is a tonal language, we explore the effect of grammatical tone on the processing of pronominal resumption. Methods & Procedures: First, we tested the ability of Akan agrammatic speakers to distinguish linguistic and non-linguistic tone. Then, we administered an elicitation task to five Akan agrammatic individuals, controlling for the structural variations in the realization of resumption: focused who-questions and declaratives with (i) only a resumptive pronoun, (ii) only a clause determiner, (iii) a resumptive pronoun and a clause determiner co-occurring, and (iv) neither a resumptive pronoun nor a clause determiner. Outcomes & Results: Tone discrimination, both for pitch and for lexical tone, was unimpaired. The production task demonstrated that the production of resumptive pronouns and clause determiners was intact. However, the production of declarative sentences in derived word order was impaired; wh-object questions were relatively well preserved. Conclusions: We argue that the problems with sentence production are highly selective: linguistic tones and resumption are intact, but word order is impaired in non-canonical declarative sentences.
Restful choreographies
(2019)
Business process management has become a key instrument to organize work, as many companies represent their operations in business process models. Recently, business process choreography diagrams have been introduced as part of the Business Process Model and Notation standard to represent interactions between business processes run by different partners. When it comes to the interactions between services on the Web, Representational State Transfer (REST) is one of the primary architectural styles employed by web services today. Ideally, the RESTful interactions between participants should implement the interactions defined at the business choreography level.
The problem, however, is the conceptual gap between business process choreography diagrams and RESTful interactions. Choreography diagrams, on the one hand, are modeled by business domain experts with the purpose of capturing, communicating and, ideally, driving the business interactions. RESTful interactions, on the other hand, depend on RESTful interfaces that are designed by web engineers with the purpose of facilitating the interaction between participants on the internet. In most cases, however, business domain experts are unaware of the technology behind web service interfaces, and web engineers tend to overlook the overall business goals of web services. While there is considerable work on using process models during process implementation, there is little work on using choreography models to implement interactions between business processes. This thesis addresses this research gap by raising the following research question: How can the conceptual gap between business process choreographies and RESTful interactions be closed? This thesis offers several research contributions that jointly answer the research question.
The main research contribution is the design of a language that captures RESTful interactions between participants---RESTful choreography modeling language. Formal completeness properties (with respect to REST) are introduced to validate its instances, called RESTful choreographies. A systematic semi-automatic method for deriving RESTful choreographies from business process choreographies is proposed. The method employs natural language processing techniques to translate business interactions into RESTful interactions. The effectiveness of the approach is shown by developing a prototypical tool that evaluates the derivation method over a large number of choreography models.
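The flavor of the derivation step can be conveyed by a deliberately naive sketch in which a keyword table stands in for the thesis' natural language processing: a choreography task label such as "Retrieve order status" is mapped to an HTTP method and resource path. The verb table and pluralization rule are invented placeholders.

```python
# Naive stand-in for the NLP-based derivation step: a choreography task
# label becomes a RESTful interaction, e.g. "Retrieve order status"
# -> GET /orders/status. Verb table and pluralization are deliberately
# simplistic placeholders for the thesis' actual method.
VERB_TO_METHOD = {"create": "POST", "place": "POST", "retrieve": "GET",
                  "get": "GET", "update": "PUT", "cancel": "DELETE"}

def derive_rest_interaction(task_label: str):
    verb, *nouns = task_label.lower().split()
    method = VERB_TO_METHOD.get(verb, "POST")
    resource = "/" + "/".join(n + "s" if i == 0 else n
                              for i, n in enumerate(nouns))
    return method, resource

for label in ["Place order", "Retrieve order status", "Cancel order"]:
    print(label, "->", derive_rest_interaction(label))
# Place order -> ('POST', '/orders')
# Retrieve order status -> ('GET', '/orders/status')
# Cancel order -> ('DELETE', '/orders')
```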
In addition, the thesis proposes solutions towards implementing RESTful choreographies. In particular, two RESTful service specifications are introduced for aiding, respectively, the execution of choreographies' exclusive gateways and the guidance of RESTful interactions.
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering ~10% of the permafrost region. Lake area loss (−1.45%) dominated the study domain, with enhanced losses occurring at the boundary between the discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%), but in tundra regions (0.63%) they were limited to Alaska. Retrogressive thaw slumps were abundant but highly localized (<10⁻⁵%). Our analysis demonstrates the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
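The headline lake-area statistic ultimately reduces to counting classified water pixels across epochs; a minimal sketch with synthetic 30-m binary water masks follows, where the masks and drying rate are placeholders for the actual Landsat classification.

```python
import numpy as np

def lake_area_change(mask_start, mask_end, pixel_m=30.0):
    """Percent change in water area between two binary water masks
    (True = water) derived from 30-m Landsat classifications."""
    a0 = mask_start.sum() * pixel_m**2
    a1 = mask_end.sum() * pixel_m**2
    return 100.0 * (a1 - a0) / a0

# Synthetic stand-in for classified scenes of one region in 1999 and 2014
rng = np.random.default_rng(6)
start = rng.random((1000, 1000)) < 0.20
end = start & (rng.random((1000, 1000)) < 0.985)   # slight drying
print(f"{lake_area_change(start, end):+.2f}% lake area")   # ~ -1.5%
```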
Frailty and sarcopenia share some underlying characteristics like loss of muscle mass, low muscle strength, and low physical performance. Frailty and sarcopenia criteria are mainly assessed with imaging parameters and functional examinations; however, these measures can have limitations in clinical settings. Therefore, finding suitable biomarkers that reflect a catabolic muscle state, e.g., an elevated muscle protein turnover as suggested in frailty, is becoming more relevant for frailty diagnosis and risk assessment.
3-Methylhistidine (3-MH) and its ratios 3-MH-to-creatinine (3-MH/Crea) and 3-MH-to-estimated glomerular filtration rate (3-MH/eGFR) are under discussion as possible biomarkers of muscle protein turnover and might support the diagnosis of frailty. However, there is some skepticism about the reliability of 3-MH measures, since confounders such as meat and fish intake might influence 3-MH plasma concentrations. Therefore, the influence of dietary habits and of an intervention with white meat on plasma 3-MH was determined in young and healthy individuals. In another study, the cross-sectional associations of plasma 3-MH, 3-MH/Crea and 3-MH/eGFR with the frailty status (robust, pre-frail and frail) were investigated.
Oxidative stress (OS) is a possible contributor to frailty development, and high OS levels as well as low micronutrient levels are associated with the frailty syndrome. However, data on simultaneous measures of OS biomarkers together with micronutrients are lacking in studies including frail, pre-frail and robust individuals. Therefore, cross-sectional associations of protein carbonyls (PrCarb), 3-nitrotyrosine (3-NT) and several micronutrients with the frailty status were determined.
A validated UPLC-MS/MS (ultra-performance liquid chromatography tandem mass spectrometry) method for the simultaneous quantification of 3-MH and 1-MH (1-methylhistidine, a marker of meat and fish consumption) was presented and used for further analyses. Omnivores showed higher plasma 3-MH and 1-MH concentrations than vegetarians, and a white meat intervention resulted in an increase in plasma 3-MH, 3-MH/Crea, 1-MH and 1-MH/Crea in omnivores. Elevated 3-MH and 3-MH/Crea levels declined significantly within 24 hours after this white meat intervention. Thus, 3-MH and 3-MH/Crea might be used as biomarkers of muscle protein turnover when subjects have not consumed meat in the 24 hours prior to blood sampling.
Plasma 3-MH, 3-MH/Crea and 3-MH/eGFR were higher in frail individuals than in robust individuals. Additionally, these biomarkers were positively associated with frailty in linear regression models, and higher odds of being frail were found for every increase in 3-MH and 3-MH/eGFR quintile in multivariable logistic regression models adjusted for several confounders. This was the first study using 3-MH/eGFR, and it is concluded that plasma 3-MH, 3-MH/Crea and 3-MH/eGFR might be used to identify frail individuals or individuals at higher risk of becoming frail, and that there might be threshold concentrations or ratios to support these diagnoses.
Higher vitamin D3, lutein/zeaxanthin, γ-tocopherol, α-carotene, β-carotene, lycopene and β-cryptoxanthin concentrations, and additionally lower PrCarb concentrations, were found in robust compared to frail individuals in multivariate linear models. In multivariate logistic regression models, frail subjects had higher odds than robust individuals of being in the lowest rather than the highest tertile for vitamin D3, α-tocopherol, α-carotene, β-carotene, lycopene, lutein/zeaxanthin, and β-cryptoxanthin, and higher odds of being in the highest rather than the lowest tertile for PrCarb. Thus, a low micronutrient status together with a high PrCarb status is associated with pre-frailty and frailty.
There is controversy in the literature regarding the link between training load and injury rate. Thus, the aims of this non-interventional study were to evaluate the relationships of pre-season training load with biochemical markers, injury incidence and performance during the first month of the competitive period in professional soccer players.
Non-alcoholic fatty liver diseases (NAFLD), including the severe form with steatohepatitis (NASH), are highly prevalent ailments for which no approved pharmacological treatment exists. Dietary intervention aiming at a 10% weight reduction is efficient but fails due to low compliance. An increase in physical activity is an alternative that has improved NAFLD even in the absence of weight reduction. The underlying mechanisms are unclear and cannot be studied in humans. Here, a rat NAFLD model was developed that reproduces many facets of diet-induced NAFLD in humans, and the impact of endurance exercise was studied in this model. Male Wistar rats received control chow or a NASH-inducing diet rich in fat, cholesterol, and fructose. Both diet groups were subdivided into a sedentary and an endurance exercise group. Animals receiving the NASH-inducing diet gained more body weight, became glucose intolerant and developed a liver pathology with steatosis, hepatocyte hypertrophy, inflammation and fibrosis typical of NAFLD or NASH. Contrary to expectations, endurance exercise did not improve the NASH activity score and even enhanced hepatic inflammation. However, endurance exercise attenuated the hepatic cholesterol overload and the ensuing severe oxidative stress. In addition, exercise improved glucose tolerance, possibly in part by induction of hepatic FGF21 production.
The habilitation deals with the numerical analysis of the recurrence properties of geological and climatic processes. The recurrence of states of dynamical processes can be analysed with recurrence plots and various recurrence quantification options. In the present work, the meaning of the structures and information contained in recurrence plots is examined and described. New developments have led to extensions that can be used to describe recurring patterns in both space and time. Other important developments include recurrence plot-based approaches to identify abrupt changes in a system's dynamics, to detect and investigate external influences on the dynamics of a system and the couplings between different systems, as well as a combination of recurrence plots with the methodology of complex networks. Typical problems in geoscientific data analysis, such as irregular sampling and uncertainties, are tackled by specific modifications and additions. The development of a significance test allows the statistical evaluation of quantitative recurrence analysis, especially for the identification of dynamical transitions. Finally, an overview of typical pitfalls that can occur when applying recurrence-based methods is given, and guidelines on how to avoid such pitfalls are discussed. In addition to the methodological aspects, the application potential, especially for geoscientific research questions, is discussed, such as the identification and analysis of transitions in past climates, the study of the influence of external factors on ecological or climatic systems, or the analysis of land-use dynamics based on remote sensing data.
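The basic object throughout is the recurrence matrix R_ij = Θ(ε − ||x_i − x_j||); a minimal sketch for a scalar time series follows, with the threshold ε chosen purely for illustration.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps."""
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(int)

# Example: a noisy sine revisits its states periodically, producing the
# diagonal line structures that recurrence quantification measures.
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(7).normal(size=t.size)
R = recurrence_plot(x, eps=0.2)
print("recurrence rate:", R.mean())   # fraction of recurrent pairs
```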
In nature as well as in the context of infection and medical applications, bacteria often have to move in highly complex environments such as soil or tissues. Previous studies have shown that bacteria strongly interact with their surroundings and are often guided by confinements. Here, we investigate theoretically how the dispersal of swimming bacteria can be augmented by microfluidic environments and validate our theoretical predictions experimentally. We consider a system of bacteria performing the prototypical run-and-tumble motion inside a labyrinth with square lattice geometry. Narrow channels between the square obstacles limit the possibility of bacteria to reorient during tumbling events to the areas where channels cross. Thus, by varying the geometry of the lattice it might be possible to control the dispersal of cells. We present a theoretical model quantifying the diffusive spreading of a run-and-tumble random walker in a square lattice. Numerical simulations validate our theoretical predictions for the dependence of the diffusion coefficient on the lattice geometry. We show that bacteria moving in square labyrinths exhibit enhanced dispersal compared to unconfined cells. Importantly, confinement significantly extends the duration of the phase with strongly non-Gaussian diffusion, in which the geometry of the channels is imprinted in the density profiles of the spreading cells. Finally, in good agreement with our theoretical findings, we observe the predicted behaviors in experiments with E. coli bacteria swimming in a square lattice labyrinth created in a microfluidic device. Altogether, our comprehensive understanding of bacterial dispersal in a simple two-dimensional labyrinth makes a first step toward the analysis of more complex geometries relevant for real-world applications.
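A minimal sketch of the prototypical run-and-tumble dynamics for an unconfined 2D walker is given below (adding the lattice obstacles is the paper's contribution); for full directional randomization the long-time diffusion coefficient should approach v²/(2λ), which the simulation roughly reproduces. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
v, rate, dt, steps, walkers = 20.0, 1.0, 1e-3, 20_000, 200

# Unconfined 2D run-and-tumble: straight runs at speed v, Poissonian
# tumbles (probability rate*dt per step) to a fresh random direction.
theta = rng.uniform(0, 2 * np.pi, walkers)
pos = np.zeros((walkers, 2))
for _ in range(steps):
    tumble = rng.random(walkers) < rate * dt
    theta = np.where(tumble, rng.uniform(0, 2 * np.pi, walkers), theta)
    pos += v * dt * np.column_stack([np.cos(theta), np.sin(theta)])

T = steps * dt                                  # total time = 20 >> 1/rate
D_est = (pos**2).sum(axis=1).mean() / (4 * T)   # MSD = 4 D t at long times
print(f"D ~ {D_est:.0f} (theory: v^2 / (2 * rate) = {v**2 / (2 * rate):.0f})")
```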
Culturally diverse schools may constitute natural arenas for training crucial intercultural skills. We hypothesized that a classroom cultural diversity climate fostering contact and cooperation and multiculturalism, but not a climate fostering color‐evasion, would be positively related to adolescents’ intercultural competence. Adolescents in North Rhine‐Westphalia (N = 631, Mage = 13.69 years, 49% of immigrant background) and Berlin (N = 1,335, Mage = 14.69 years, 52% of immigrant background) in Germany reported their perceptions of the classroom cultural diversity climate and completed quantitative and qualitative measures assessing their intercultural competence. Multilevel structural equation models indicate that contact and cooperation, multiculturalism, and, surprisingly, also color‐evasion (as in emphasizing a common humanity), were positively related to the intercultural competence of immigrant and non‐immigrant background students. We conclude that all three aspects of the classroom climate are uniquely related to aspects of adolescents’ intercultural competence and that none of them may be sufficient on their own.
Many studies on biological and soft matter systems report the joint presence of a linear mean-squared displacement and a non-Gaussian probability density exhibiting, for instance, exponential or stretched-Gaussian tails. This phenomenon is ascribed to the heterogeneity of the medium and is captured by random parameter models such as ‘superstatistics’ or ‘diffusing diffusivity’. Independently, scientists working in the area of time series analysis and statistics have studied a class of discrete-time processes with similar properties, namely, random coefficient autoregressive models. In this work we try to reconcile these two approaches and thus provide a bridge between physical stochastic processes and autoregressive models. We start from the basic Langevin equation of motion with time-varying damping or diffusion coefficients and establish the link to random coefficient autoregressive processes. By exploring that link we gain access to efficient statistical methods which can help to identify data exhibiting Brownian yet non-Gaussian diffusion.
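The link can be made concrete by discretizing the Langevin equation with a random diffusivity, which yields an autoregressive recursion whose noise coefficient is random; in the sketch below the diffusivity is exponentially distributed and fixed per trajectory (the classic superstatistics case, giving Laplace-like displacement tails), a choice made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
n_traj, n_steps, dt = 20_000, 100, 0.01

# Random-coefficient autoregressive view of the Langevin equation:
# x_{n+1} = x_n + sqrt(2 * D * dt) * xi_n, with the noise coefficient
# sqrt(2 D dt) random. Here D is exponential and fixed per trajectory
# (superstatistics); redrawing it every step would restore Gaussianity.
D = rng.exponential(1.0, size=n_traj)
xi = rng.normal(size=(n_traj, n_steps))
x = np.sqrt(2 * D[:, None] * dt) * xi.cumsum(axis=1)

final = x[:, -1]
msd = (x**2).mean(axis=0)
print("MSD linear in t:", np.allclose(msd, 2 * D.mean() * dt *
                                      np.arange(1, n_steps + 1), rtol=0.1))
kurt = (final**4).mean() / (final**2).mean()**2 - 3.0
print(f"excess kurtosis ~ {kurt:.1f}  (0 for Gaussian, 3 for Laplace)")
```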
Scholars of modern Jewish thought explore the hermeneutics of “translation” to describe the transference of concepts between discourses. I suggest a more radical approach – translation as transvaluation – is required. Eschewing modern tests of truth such as “the author would have accepted it” and “the author should have accepted it,” this radical form of translation is intentionally unfaithful to original meanings. However, it is not a reductionist reading or a liberating text. Instead, it is a persistent squabble depending on both source and translation for sustenance. Exploring this paradigm entails a review of three expositions of the Korah biblical narrative; three readings dedicated to keeping an eye on current events: (1) Tsene-rene (Prague, 1622), biblical prose; (2) Yaldei Yisrael Kodesh, (Tel Aviv, 1973), a secular Zionist reworking of Tsene-rene; and (3) The Jews are Coming (Israel, 2014–2017) a satirical television show.
RAA2019
(2019)
These abstracts result from the 10th International Congress on the Application of Raman Spectroscopy in Art and Archaeology held 03.09. – 07.09.2019 in Potsdam (Germany).
The RAA is an established biennial international conference series. Since their beginning in 2001, the RAA conferences have promoted Raman spectroscopy and played an important role in broadening the field of its applications in art history, history, archaeology, palaeontology, conservation and restoration, museology, degradation of cultural heritage, archaeometry, etc. Furthermore, the development of new instrumentation, especially for non-invasive measurements, receives great attention.
The Congress covers all topics of Raman spectroscopic applications in art and archaeology and focuses on the following themes:
• Material characterization and degradation processes
• Conservation issues affecting cultural heritage
• Raman spectroscopy of biological and organic materials
• Surface enhanced Raman spectroscopy
• Chemometrics in Raman spectroscopy
• Development of Raman techniques
• New Raman instrumentation and applications in the investigation of cultural heritage objects
• Raman spectroscopy in paleontology, paleoenvironment and archaeology
While some pronouncements of expert treaty bodies have been considered ‘key catalysts’ for the development of international human rights law, others are only selectively referred to in legal practice. This article argues that the varying normative impact is due to the informal character of pronouncements. In the absence of treaty provisions specifying their legal effect, practitioners tend to rely on different factors and arguments when either drawing on or rejecting certain pronouncements. Scholars in turn face difficulties when trying to identify explanatory patterns within this diverging practice, as the informal character confronts both international lawyers and international relations scholars with their respective methodological ‘blind spots’. In light of these intradisciplinary challenges, this article explores the extent to which an interdisciplinary approach helps to assess the reasons for the varying impact of pronouncements. After analysing the factors determining their legal significance on the basis of State practice and the academic debate, this article identifies the drafting process as a factor which promises to be particularly insightful when explored from an interdisciplinary perspective and sketches out a framework for future research.
In light of the debate on the consequences of competitive contracting out of traditionally public services, this research compares two mechanisms used to allocate funds in development cooperation—direct awarding and competitive contracting out—aiming to identify their potential advantages and disadvantages.
Agency theory is applied within the framework of rational-choice institutionalism to study the institutional arrangements that surround the two money allocation mechanisms, to identify the incentives they create for the behavior of individual actors in the field, and to examine how these then translate into measurable differences in the managerial quality of development aid projects. In this work, project management quality is seen as an important determinant of overall project success.
For data-gathering purposes, the German development agency, the Gesellschaft für Internationale Zusammenarbeit (GIZ), is used due to its unique way of working: whereas the majority of projects receive funds via a direct-award mechanism, there is also a commercial department, GIZ International Services (GIZ IS), that has to compete for project funds.
The data concerning project management practices in GIZ and GIZ IS projects were gathered via a web-based, self-administered survey of project team leaders. Principal component analysis was applied to reduce the dimensionality of the independent variable to a total of five components of project management. Furthermore, multiple regression analysis identified the differences between the separate components in these two project types. Enriched by qualitative data gathered via interviews, this thesis offers insights into everyday managerial practices in development cooperation and identifies the advantages and disadvantages of the two allocation mechanisms.
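A minimal sketch of this dimensionality-reduction step, assuming Likert-scale survey items in a samples-by-items matrix; the item count, sample size and the five-component choice are illustrative, not the survey's actual instrument.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical survey: 80 project team leaders rating 20 Likert items
rng = np.random.default_rng(10)
items = rng.integers(1, 6, size=(80, 20)).astype(float)

pca = PCA(n_components=5)
components = pca.fit_transform(StandardScaler().fit_transform(items))
print("variance explained:", pca.explained_variance_ratio_.round(2))
# The component scores (e.g., oversight, cooperation, evaluation quality,
# ...) would then enter the multiple regression comparing project types.
```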
The thesis first reiterates the responsibility of donors and implementers for overall aid effectiveness. It shows that the mechanism of competitive contracting out leads to better oversight and control of implementers, fosters deeper cooperation between implementers and beneficiaries, and has the potential to strengthen the ownership of recipient countries. On the other hand, it shows that evaluation quality does not benefit greatly from the competitive allocation mechanism and that the quality of the knowledge management and learning component is better when direct-award mechanisms are used. This raises questions about the limited possibilities for actors in the field to learn from past mistakes and to incorporate the findings into future interventions, which is one of the fundamental issues of aid effectiveness. Finally, the findings show immense deficiencies with regard to the oversight and control of individual projects in German development cooperation.
Temperature-memory technology was utilized to generate flat substrates with a programmable stiffness pattern from cross-linked poly(ethylene-co-vinyl acetate) substrates with cylindrical microstructures. Programmed substrates were obtained by vertical compression at temperatures in the range from 60 to 100 °C and subsequent cooling, whereby a flat substrate was achieved by compression at 72 °C, as documented by scanning electron microscopy and atomic force microscopy (AFM). AFM nanoindentation experiments revealed that all programmed substrates exhibited the targeted stiffness pattern. The presented technology for generating polymeric substrates with a programmable stiffness pattern should be attractive for applications such as touchpads, optical storage, or cell-instructive substrates.
Organic semiconductors are a promising class of materials. Their special properties are particularly good absorption, low weight and easy processing into thin films. Therefore, intense research has been devoted to the realization of thin-film organic solar cells (OPVs). Because of the low dielectric constant of organic semiconductors, primary excitations (excitons) are strongly bound, and a type II heterojunction needs to be introduced to split these excitations into free charges. Therefore, most organic solar cells consist of at least an electron donor and an electron acceptor material. For such donor-acceptor systems, mainly three states are relevant: the photoexcited exciton on the donor or acceptor material, the charge transfer state at the donor-acceptor interface, and the charge-separated state of a free electron and hole. The interplay between these states significantly determines the efficiency of organic solar cells. Due to the high absorption and the low charge carrier mobilities, the active layers are usually thin; moreover, exciton dissociation and free charge formation proceed rapidly, which makes the study of carrier dynamics highly challenging.
Therefore, the focus of this work was first to install new experimental setups for the investigation of the charge carrier dynamics in complete devices with superior sensitivity and time resolution and, second, to apply these methods to prototypical photovoltaic materials to address specific questions in the field of organic and hybrid photovoltaics.
Regarding the first goal, a new setup combining transient absorption spectroscopy (TAS) and time delayed collection field (TDCF) was designed and installed in Potsdam. An important part of this work concerned the improvement of the electronic components with respect to time resolution and sensitivity. To this end, a highly sensitive amplifier for driving and detecting the device response in TDCF was developed. This system was then applied to selected organic and hybrid model systems with a particular focus on the understanding of the loss mechanisms that limit the fill factor and short circuit current of organic solar cells.
The first model system was a hybrid photovoltaic material comprising inorganic quantum dots decorated with organic ligands. Measurements with TDCF revealed fast free carrier recombination, in part assisted by traps, while bias-assisted charge extraction measurements showed high mobility. The measured parameters then served as input for a successful description of the device performance with an analytical model.
With a further improvement of the instrumentation, a second topic was the detailed analysis of non-geminate recombination in a disordered polymer:fullerene blend, where an important question was the effect of disorder on the carrier dynamics. The measurements revealed that, at early times, highly mobile charges undergo fast non-geminate recombination at the contacts, causing an apparent field dependence of free charge generation in TDCF experiments if not conducted properly. On the other hand, recombination on longer time scales was determined by dispersive recombination in the bulk of the active layer, showing the characteristics of carrier dynamics in an exponential density-of-states distribution. Importantly, the comparison with steady-state recombination data suggested a very weak impact of non-thermalized carriers on the recombination properties of the solar cells under application-relevant illumination conditions.
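For readers unfamiliar with the term, non-geminate recombination is the loss of a free electron and a free hole that do not stem from the same absorbed photon; it is commonly written as the bimolecular rate equation dn/dt = -k*n^2. A generic sketch with illustrative numbers (not the device-specific, dispersive analysis performed in the thesis, where k itself decays as carriers thermalize):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Textbook bimolecular (non-geminate) recombination: dn/dt = -k * n^2.
k = 1e-12     # recombination coefficient in cm^3/s (illustrative value)
n0 = 1e16     # carrier density in cm^-3 after the laser pulse (illustrative value)

sol = solve_ivp(lambda t, n: -k * n**2, (0.0, 1e-3), [n0],
                t_eval=np.logspace(-9, -3, 50), rtol=1e-8)

# Analytical solution for constant k: n(t) = n0 / (1 + k * n0 * t).
n_exact = n0 / (1.0 + k * n0 * sol.t)
assert np.allclose(sol.y[0], n_exact, rtol=1e-3)
```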
Finally, temperature- and field-dependent studies of free charge generation were performed on three donor-acceptor combinations, with two donor polymers of the same material family blended with two different fullerene acceptor molecules. These particular material combinations were chosen to analyze the influence of the energetics and morphology of the blend on the efficiency of charge generation. To this end, activation energies for photocurrent generation were accurately determined for a wide range of excitation energies. The results prove that the formation of free charge proceeds via thermalized charge transfer states and does not involve hot exciton splitting. Surprisingly, the activation energies were of the order of the thermal energy at room temperature. This led to the important conclusion that organic solar cells perform well not because of predominant high-energy pathways but because the thermalized CT states are weakly bound. In addition, a model is introduced that interconnects the dissociation efficiency of the charge transfer state with its recombination observable via photoluminescence, which rules out a previously proposed two-pool model for free charge formation and recombination. Finally, based on the results, proposals for the further development of organic solar cells are formulated.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach.
In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art.
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin.
Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart.
This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
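To illustrate the idea behind the quantile layer described above, a minimal PyTorch-style sketch (class name, quantile grid, and tensor shapes are assumptions for illustration, not the thesis's implementation):

```python
import torch
import torch.nn as nn

class QuantileLayer(nn.Module):
    """Aggregate a sequence of local features by their empirical quantiles.

    For each feature channel, the distribution of values along the time axis
    is summarized by a fixed set of quantiles; q = 1.0 alone would recover
    max pooling, so the layer generalizes standard pooling operations.
    """
    def __init__(self, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
        super().__init__()
        self.register_buffer("q", torch.tensor(quantiles))

    def forward(self, x):
        # x: (batch, time, channels) -> (batch, channels * len(q))
        out = torch.quantile(x, self.q, dim=1)      # (len(q), batch, channels)
        return out.permute(1, 2, 0).flatten(start_dim=1)

features = torch.randn(8, 500, 32)   # e.g. 500 local patterns per eye movement sequence
pooled = QuantileLayer()(features)   # shape: (8, 160)
```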
Presupposition triggers differ with respect to whether their presupposition is easily accommodatable. The presupposition of focus-sensitive additive particles like also or too is often classified as hard to accommodate, i.e., these triggers are infelicitous if their presupposition is not entailed by the immediate linguistic or non-linguistic context. We tested two competing accounts for the German additive particle auch concerning this requirement: First, that it requires a focus alternative to the whole proposition to be salient, and second, that it merely requires an alternative to the focused constituent (e.g., an individual) to be salient. We conducted two experiments involving felicity judgments as well as questions asking for the truth of the presupposition to be accommodated. Our results suggest that the latter account is too weak: mere previous mention of a potential alternative to the focused constituent is not enough to license the use of auch. However, our results also suggest that the former account is too strong: when an alternative of the focused constituent is prementioned and certain other accommodation-enhancing factors are present, the context does not have to entail the presupposed proposition. We tested the following two potentially accommodation-enhancing factors: First, whether the discourse can be construed to be from the perspective of the individual that the presupposition is about, and second, whether the presupposition is needed to establish coherence between the host sentence of the additive particle and the preceding context. The factor coherence was found to play a significant role. Our results thus corroborate the results of other researchers showing that discourse participants go to great lengths in order to identify a potential presupposition to accommodate, and we contribute to these results by showing that coherence is one of the factors that enhance accommodation.
Preface
(2019)
The current thesis examined how second language (L2) speakers of German predict upcoming input during language processing. Early research has shown that the predictive abilities of L2 speakers relative to L1 speakers are limited, resulting in the proposal of the Reduced Ability to Generate Expectations (RAGE) hypothesis. Considering that prediction is assumed to facilitate language processing in L1 speakers and probably plays a role in language learning, the assumption that L1/L2 differences can be explained in terms of different processing mechanisms is a particularly interesting approach. However, results from more recent studies on the predictive processing abilities of L2 speakers have indicated that the claim of the RAGE hypothesis is too broad and that prediction in L2 speakers could be selectively limited. In the current thesis, the RAGE hypothesis was systematically put to the test.
In this thesis, German L1 speakers and highly proficient late L2 learners of German with Russian as their L1 were tested on their predictive use of one or more information sources that exist as cues to sentence interpretation in both languages, in order to test for selective limits. The results showed that, in line with previous findings, L2 speakers can use the lexical semantics of verbs to predict an upcoming noun. Here, the level of prediction was controlled more systematically than in previous studies by using verbs that restrict the selection of upcoming nouns to the semantic category animate or inanimate. Hence, prediction in L2 processing is possible. At the same time, this experiment showed that the L2 group was slower/less certain than the L1 group. Unlike in previous studies, the experiment on case marking demonstrated that L2 speakers can use this morphosyntactic cue for prediction. Here, the use of case marking was tested by manipulating the word order (Dat > Acc vs. Acc > Dat) in double object constructions after a ditransitive verb. Both the L1 and the L2 group showed a difference between the two word order conditions that emerged within the critical time window for an anticipatory effect, indicating their sensitivity to case. However, the results for the post-critical time window pointed to a higher uncertainty in the L2 group, who needed more time to integrate incoming information and were more affected by the word order variation than the L1 group, indicating that they relied more on surface-level information. A different cue weighting was also found in the experiment testing whether participants predict upcoming reference based on implicit causality information. Here, an additional child L1 group was tested, who had a lower memory capacity than the adult L2 group, as confirmed by a digit span task conducted with both learner groups. Whereas the children were only slightly delayed compared to the adult L1 group and showed the same effect of condition, the L2 speakers showed an over-reliance on surface-level information (first-mention/subjecthood). Hence, the pattern observed more likely resulted from L1/L2 differences than from resource deficits.
The reviewed studies and the experiments conducted show that L2 prediction is affected by a range of factors. While some of these factors can be attributed to individual differences (e.g., language similarity, slower processing) and can be interpreted by L2 processing accounts that assume L1 and L2 processing to be basically the same, certain limits are better explained by accounts that assume more substantial L1/L2 differences. Crucially, the experimental results demonstrate that the RAGE hypothesis should be refined: Although prediction as a fast-operating mechanism is likely to be affected in L2 speakers, there is no indication that prediction is the dominant source of L1/L2 differences. Rather, the results demonstrate that L2 speakers weight cues differently and rely more on semantic and surface-level information both to predict and to integrate incoming information.
The knowledge of transformation pathways and the identification of transformation products (TPs) of veterinary drugs are important for animal health, food, and environmental matters. The active agent Monensin (MON) belongs to the ionophore antibiotics and is widely used as a veterinary drug against coccidiosis in broiler farming. However, no electrochemically (EC) generated TPs of MON have been described so far. In this study, the online coupling of EC and mass spectrometry (MS) was used for the generation of oxidative TPs. EC conditions were optimized with respect to working electrode material, solvent, modifier, and potential polarity. Subsequent LC/HRMS (liquid chromatography/high-resolution mass spectrometry) and MS/MS experiments were performed to identify the structures of the derived TPs by a suspected-target analysis. The obtained EC results were compared to TPs observed in metabolism tests with microsomes and in hydrolysis experiments with MON. Five previously undescribed TPs of MON were identified in our EC/MS-based study, and one TP, which was already known from the literature and found by a microsomal assay, could be confirmed. Two and three further TPs were found as products of microsomal tests and of hydrolysis, respectively. We found decarboxylation, O-demethylation, and acid-catalyzed ring-opening reactions to be the major mechanisms of MON transformation.
Back pain is a problem in adolescent athletes, affecting postural control, which is an important requirement for physical and daily activities, whether under static or dynamic conditions. The one leg stance test and the star excursion balance test (SEBT) are effective in measuring static and dynamic postural control, respectively. These tests have been used in individuals with back pain, athletes, and non-athletes without first establishing their reliabilities. In addition, there is no published literature investigating dynamic posture in adolescent athletes with back pain using the SEBT. Therefore, the aim of the thesis was to assess deficits in postural control in adolescent athletes with and without back pain using a static (one leg stance) and a dynamic (SEBT) postural control test.
Adolescent athletes with and without back pain participated in the study. Static and dynamic postural control tests were performed using the one leg stance test and the SEBT, respectively. The reproducibility of both tests was established. Afterwards, it was determined whether there was an association between static and dynamic posture, using the displacement of the centre of pressure and the reach distance, respectively. Finally, it was investigated whether there was a difference in postural control between adolescent athletes with and without back pain using the one leg stance test and the SEBT.
Fair to excellent reliability was recorded for the static (one leg stance) and dynamic (SEBT) postural control tests in the subjects of interest. No association was found between variables of the static and dynamic tests for the adolescent athletes with and without back pain. Also, no statistically significant difference was found between adolescent athletes with and without back pain using the static and dynamic postural control tests.
The one leg stance test and the SEBT can be used as measures of postural control in adolescent athletes with and without back pain. Although static and dynamic postural control might be expected to be related, adolescent athletes with and without back pain might be using different mechanisms to control their static and dynamic posture. Furthermore, static and dynamic postural control in adolescent athletes with back pain was not different from that of athletes without back pain. These outcome measures might not be challenging enough to detect deficits in postural control in our study group of interest.
Portal Wissen = Data
(2019)
Data assimilation? Stop! Don’t be afraid, please, come closer! No tongue twister, no rocket science. Or is it? Let’s see. It is a matter of fact, however, that data assimilation has been around for a long time and (almost) everywhere. But only in the age of supercomputers has it assumed amazing proportions.
Everyone knows data. Assimilation, however, is a difficult term for something that happens around us all the time: adaptation. Nature in particular has demonstrated to us for millions of years how evolutionary adaptation works. From unicellular organisms to primates, from algae to sequoias, from dinosaurs ... Anyone who cannot adapt will quickly not fit in anymore.
We, of course, have also learned to adapt to new situations and act accordingly. When we want to cross the street, we have a plan of how to do this: go to the curb, look left and right, and only cross the street if there’s no car coming. If we do all this and adapt our plan to the traffic we see, we will not just safely cross the street, but we will also have successfully practiced data assimilation.
Of course, that sounds different when researchers try to explain how data assimilation helps them. Meteorologists, for example, have been working with data assimilation for years. The German Weather Service writes, “In numerical weather prediction, data assimilation is the approximation of a model run to the actual development of the atmosphere as described by existing observations.” What it means is that a weather forecast is only accurate if the model which is used for its calculation is repeatedly updated, i.e. assimilated, with new measurement data.
In 2017, an entire Collaborative Research Center, CRC 1294, was established at the University of Potsdam to deal with the mathematical basics of data assimilation. For Portal Wissen, we asked the mathematicians and CRC spokespersons Prof. Sebastian Reich and Prof. Wilhelm Huisinga how exactly data assimilation works and in which areas of research it can be used profitably in the future. We also looked at two projects at the CRC itself: the analysis of eye movements and research on space weather.
In addition, the current issue is full of research projects that revolve around data in very different ways. Atmospheric physicist Markus Rex takes a look at the spectacular MOSAiC expedition: starting in September 2019, the German research icebreaker “Polarstern” will drift through the Arctic Ocean for a year and collect numerous data on ice, ocean, biosphere, and atmosphere. In the project “TraceAge”, nutritionists will use the data of thousands of subjects who participated in a long-term study to find out more about the function of trace elements in our body. Computer scientists have developed a method to filter relevant information from the flood of data on the worldwide web so as to enable visually impaired people to surf the Internet more easily. And a geophysicist is working on developing an early warning system for volcanic eruptions from seemingly inconspicuous seismic data.
Not least, this issue deals with the fascination of fire and ice, the possibilities that digitization offers for administration, and the question of how to inspire children for sports and exercise. We hope you enjoy reading – and if you send us some of your reading experience, we will assimilate it into our next issue. Promised!
For a long time, there were things on this planet that only humans could do, but this time might be coming to an end. By using the universal tool that makes us unique – our intelligence – we have worked to eliminate our uniqueness, at least when it comes to solving cognitive tasks. Artificial intelligence is now able to play chess, understand language, and drive a car – and often better than we do.
How did we get here? The philosopher Aristotle formulated the first “laws of thought” in his syllogisms, and the mathematicians Blaise Pascal and Gottfried Wilhelm Leibniz built some of the earliest calculating machines. The mathematician George Boole was the first to introduce a formal language to represent logic. The mathematician Alan Turing designed the electromechanical “Bombe” to break the Enigma cipher, while the “Colossus,” built by the engineer Tommy Flowers, became one of the first programmable computers. Philosophers, mathematicians, psychologists, and linguists – for centuries, scientists have been developing formulas, machines, and theories that were supposed to enable us to reproduce and possibly even enhance our most valuable ability.
But what exactly is “artificial intelligence”? Even the name calls for comparison. Is artificial intelligence like human intelligence? Alan Turing came up with a test in 1950 to provide a satisfying operational definition of intelligence: According to him, a machine is intelligent if its thinking abilities equal those of humans. It has to reach human levels for any cognitive task. The machine has to prove this by convincing a human interrogator that it is human. Not an easy task: After all, it has to process natural language, store knowledge, draw conclusions, and learn something new. In fact, over the past ten years, a number of AI systems have emerged that have passed the test one way or another in chat conversations with automatically generated texts or images. Nowadays, the discussion usually centers on other questions: Does AI still need its creators? Will it not only outperform humans but someday replace them – be it in the world of work or even beyond? Will AI solve our problems in the age of all-encompassing digital networking – or will it become a part of the problem?
Artificial intelligence, its nature, its limitations, its potential, and its relationship to humans were being discussed even before it existed. Literature and film have created scenarios with very different endings. But what is the view of the scientists who are actually researching with or about artificial intelligence? For the current issue of our research magazine, a cognitive scientist, an education researcher, and a computer scientist shared their views. We also searched the University for projects whose professional environment reveals the numerous opportunities that AI offers for various disciplines. We cover the geosciences and computer science as well as economics, health, and literature studies.
At the same time, we have not lost sight of the broad research spectrum at the University: a legal expert introduces us to the not-so-distant sphere of space law while astrophysicists work on ensuring that state-of-the-art telescopes observe those regions in space where something “is happening” at the right time. A chemist explains why the battery of the future will come from a printer, and molecular biologists explain how they will breed stress-resistant plants. You will read about all this in this issue as well as about current studies on restless legs syndrome in children and the situation of Muslims in Brandenburg. Last but not least, we will introduce you to the sheep currently grazing in Sanssouci Park – all on behalf of science. Quite clever!
Enjoy your read!
THE EDITORS
The worldwide populist wave has contributed to a perception that international law is currently in a state of crisis. This article examines the extent to which populist governments have challenged prevailing interpretations of international law. The article links structural features of populism with an analysis of populist governmental strategies and argumentative practices. It demonstrates that, in their rhetoric, populist governments promote an understanding of international law as a mere law of coordination. This is, however, not entirely reflected in their legal practices, where an instrumental, cherry-picking approach prevails. The article concludes that the policies of populist governments affect the current state of international law on two different levels: In the political sphere, their practices alter the general environment in which legal rules are interpreted. In the legal sphere, populist governments push for changes in the interpretation of established international legal rules. The article substantiates these propositions by focusing on the principle of non-intervention and foreign funding for NGOs.
We use panel data from Germany to analyze the effect of population density on urban air pollution (nitrogen oxides, particulate matter and ozone). To address unobserved heterogeneity and omitted variables, we present long difference/fixed effects estimates and instrumental variables estimates, using historical population and soil quality as instruments. Our preferred estimates imply that a one-standard deviation increase in population density increases air pollution by 3-12%.
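To make the identification strategy concrete, here is a minimal two-stage least squares sketch on simulated data (the instrument, coefficient values, and variable names are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400                                             # hypothetical city observations
hist_pop = rng.normal(size=n)                       # instrument, e.g. historical population
density = 0.8 * hist_pop + rng.normal(size=n)       # endogenous regressor
log_no2 = 0.08 * density + rng.normal(scale=0.5, size=n)  # outcome: log pollution

def two_sls(y, x, z):
    """Minimal 2SLS: project x onto the instrument z, then regress y on the projection."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]      # first stage
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]        # second-stage slope

print(two_sls(log_no2, density, hist_pop))  # recovers the causal effect, here ~0.08
```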
Due to its bioavailability and (bio)degradability, poly(lactide) (PLA) is an interesting polymer that is already being used as a packaging material, surgical suture, and drug delivery system. Depending on various parameters such as polymer composition, amphiphilicity, sample preparation, and the enantiomeric purity of the lactide, the PLA block in an amphiphilic block copolymer can affect the self-assembly behavior dramatically. Since the sizes and shapes of aggregates have a critical effect on the interactions between biological systems and drug delivery systems, a general understanding of these polymers and their ability to influence self-assembly is of significant scientific interest.
The first part of this thesis describes the synthesis and study of a series of linear poly(L-lactide) (PLLA) and poly(D-lactide) (PDLA)-based amphiphilic block copolymers with varying PLA (hydrophobic), and poly(ethylene glycol) (PEG) (hydrophilic) chain lengths and different block copolymer sequences (PEG-PLA and PLA-PEG). The PEG-PLA block copolymers were synthesized by ring-opening polymerization of lactide initiated by a PEG-OH macroinitiator. In contrast, the PLA-PEG block copolymers were produced by a Steglich-esterification of modified PLA with PEG-OH.
The aqueous self-assembly at room temperature of the enantiomerically pure PLLA-based block copolymers and their stereocomplexed mixtures was investigated by dynamic light scattering (DLS), transmission electron microscopy (TEM), wide-angle X-ray diffraction (WAXD), and differential scanning calorimetry (DSC). Spherical micelles and worm-like structures were produced, whereby the obtained self-assembled morphologies were affected by the lactide weight fraction in the block copolymer and self-assembly time. The formation of worm-like structures increases with decreasing PLA-chain length and arises from spherical micelles, which become colloidally unstable and undergo an epitaxial fusion with other micelles. As shown by DSC experiments, the crystallinity of the corresponding PLA blocks increases within the self-assembly time. However, the stereocomplexed self-assembled structures behave differently from the parent polymers and result in irregular-shaped clusters of spherical micelles. Additionally, time-dependent self-assembly experiments showed a transformation, from already self-assembled morphologies of different shapes to more compact micelles upon stereocomplexation.
In the second part of this thesis, with the objective to influence the self-assembly of PLA-based block copolymers and their stereocomplexes, poly(methyl phosphonate) (PMeP) and poly(isopropyl phosphonate) (PiPrP) were produced by ring-opening polymerization to provide an alternative to the hydrophilic PEG block. Although the 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU)- or 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD)-mediated synthesis of the corresponding poly(alkyl phosphonate)s was successful, the polymerization of copolymers with PLA-based precursors (PLA homopolymers and PEG-PLA block copolymers) was not. Transesterification between the poly(phosphonate) and PLA blocks, observed by 1H-NMR spectroscopy, caused a high-field-shifted peak splitting of the methine proton in the PLA polymer chain, with split intensities depending on the catalyst used (DBU for PMeP and TBD for PiPrP polymerization). An additionally prepared block copolymer, PiPrP-PLLA, whose polymer sequence was not affected, was finally used for self-assembly experiments by mixing with PLA-PEG and PEG-PLA.
This work provides a comprehensive study of the self-assembly behavior of PLA-based block copolymers influenced by various parameters such as polymer block lengths, self-assembly time, and stereocomplexation of block copolymer mixtures.
This dissertation is concerned with the relation between qualitative phonological organization in the form of syllabic structure and continuous phonetics, that is, the spatial and temporal dimensions of vocal tract action that express syllabic structure. The main claim of the dissertation is twofold. First, we argue that syllabic organization exerts multiple effects on the spatio-temporal properties of the segments that partake in that organization. That is, there is no unique or privileged exponent of syllabic organization. Rather, syllabic organization is expressed in a pleiotropy of phonetic indices. Second, we claim that a better understanding of the relation between qualitative phonological organization and continuous phonetics is reached when one considers how the string of segments (over which the nature of the phonological organization is assessed) responds to perturbations (scaling of phonetic variables) of localized properties (such as durations) within that string. Specifically, variation in phonetic variables and more specifically prosodic variation is a crucial key to understanding the nature of the link between (phonological) syllabic organization and the phonetic spatio-temporal manifestation of that organization. The effects of prosodic variation on segmental properties and on the overlap between the segments, we argue, offer the right pathway to discover patterns related to syllabic organization. In our approach, to uncover evidence for global organization, the sequence of segments partaking in that organization as well as properties of these segments or their relations with one another must be somehow locally varied. The consequences of such variation on the rest of the sequence can then be used to unveil the span of organization. When local perturbations to segments or relations between adjacent segments have effects that ripple through the rest of the sequence, this is evidence that organization is global. If instead local perturbations stay local with no consequences for the rest of the whole, this indicates that organization is local.
PLATON
(2019)
Lesson planning is both an important and demanding task—especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems against these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of lesson designs. In this article, a new approach with a graphical, time-based representation with (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher- and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor’s and master’s preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and self-reflection.
Plasmonic metal nanostructures can be tuned to efficiently interact with light, converting the photons into energetic charge carriers and heat. Therefore, the plasmonic nanoparticles such as gold and silver nanoparticles act as nano-reactors, where the molecules attached to their surfaces benefit from the enhanced electromagnetic field along with the generated energetic charge carriers and heat for possible chemical transformations. Hence, plasmonic chemistry presents metal nanoparticles as a unique playground for chemical reactions on the nanoscale remotely controlled by light. However, defining the elementary concepts behind these reactions represents the main challenge for understanding their mechanism in the context of the plasmonically assisted chemistry.
Surface-enhanced Raman scattering (SERS) is a powerful technique employing the plasmon-enhanced electromagnetic field, which can be used for probing the vibrational modes of molecules adsorbed on plasmonic nanoparticles. In this cumulative dissertation, I use SERS to probe the dimerization reaction of 4-nitrothiophenol (4-NTP) as a model example of plasmonic chemistry. I first demonstrate that plasmonic nanostructures such as gold nanotriangles and nanoflowers have a high SERS efficiency, as evidenced by probing the vibrations of the rhodamine dye R6G and of 4-NTP. The high signal enhancement enabled the measurement of SERS spectra with a short acquisition time, which allows monitoring the kinetics of chemical reactions in real time.
To get insight into the reaction mechanism, several time-dependent SERS measurements of the 4-NTP have been performed under different laser and temperature conditions. Analysis of the results within a mechanistic framework has shown that the plasmonic heating significantly enhances the reaction rate, while the reaction is probably initiated by the energetic electrons. The reaction was shown to be intensity-dependent, where a certain light intensity is required to drive the reaction. Finally, first attempts to scale up the plasmonic catalysis have been performed showing the necessity to achieve the reaction threshold intensity. Meanwhile, the induced heat needs to quickly dissipate from the reaction substrate, since otherwise the reactants and the reaction platform melt. This study might open the way for further work seeking the possibilities to quickly dissipate the plasmonic heat generated during the reaction and therefore, scaling up the plasmonic catalysis.
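A standard way to separate thermal from non-thermal contributions is an Arrhenius analysis of rate constants extracted at several temperatures; a generic sketch with illustrative numbers (not the measured SERS kinetics of this work):

```python
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant in eV/K
T = np.array([300.0, 320.0, 340.0, 360.0])      # K, hypothetical temperatures
k_obs = np.array([1.2e-3, 3.1e-3, 7.4e-3, 1.6e-2])  # 1/s, hypothetical rate constants

# Arrhenius law: ln k = ln A - Ea / (kB * T), i.e. linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k_obs), 1)
print(f"Ea = {-slope * kB:.2f} eV, A = {np.exp(intercept):.2e} 1/s")
```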
Transcending the conventional debate around efficiency in sustainable consumption, anti-consumption patterns leading to decreased levels of material consumption have been gaining importance. Change agents are crucial for the promotion of such patterns, so there may be lessons for governance interventions that can be learnt from the every-day experiences of those who actively implement and promote sustainability in the field of anti-consumption. Eighteen social innovation pioneers, who engage in and diffuse practices of voluntary simplicity and collaborative consumption as sustainable options of anti-consumption share their knowledge and personal insights in expert interviews for this research. Our qualitative content analysis reveals drivers, barriers, and governance strategies to strengthen anti-consumption patterns, which are negotiated between the market, the state, and civil society. Recommendations derived from the interviews concern entrepreneurship, municipal infrastructures in support of local grassroots projects, regulative policy measures, more positive communication to strengthen the visibility of initiatives and emphasize individual benefits, establishing a sense of community, anti-consumer activism, and education. We argue for complementary action between top-down strategies, bottom-up initiatives, corporate activities, and consumer behavior. The results are valuable to researchers, activists, marketers, and policymakers who seek to enhance their understanding of materially reduced consumption patterns based on the real-life experiences of active pioneers in the field.
Sinkholes and depressions are typical landforms of karst regions. They pose a considerable natural hazard to infrastructure, agriculture, economy and human life in affected areas worldwide. The physio-chemical processes of sinkholes and depression formation are manifold, ranging from dissolution and material erosion in the subsurface to mechanical subsidence/failure of the overburden. This thesis addresses the mechanisms leading to the development of sinkholes and depressions by using complementary methods: remote sensing, distinct element modelling and near-surface geophysics.
In the first part, detailed information about the (hydro-)geological background, ground structures, morphologies, and spatio-temporal development of sinkholes and depressions at a very active karst area at the Dead Sea is derived from satellite image analysis, photogrammetry, and geologic field surveys. There, clusters of an increasing number of sinkholes have been developing since the 1980s within large-scale depressions and are distributed over different kinds of surface materials: clayey mud, sandy-gravel alluvium, and lacustrine evaporites (salt). The morphology of sinkholes differs depending on the material in which they form: sinkholes in sandy-gravel alluvium and salt are generally deeper and narrower than sinkholes in the interbedded evaporite and mud deposits. From repeated aerial surveys, collapse precursory features like small-scale subsidence, individual holes, and cracks are identified in all materials. The analysis sheds light on the ongoing hazardous subsidence process, which is driven by the base-level fall of the Dead Sea and by the dynamic formation of subsurface water channels.
In the second part of this thesis, a novel 2D distinct element geomechanical modelling approach with the software PFC2D-V5 for simulating individual and multiple cavity growth and sinkhole and large-scale depression development is presented. The approach involves a stepwise material removal technique in void spaces of arbitrarily shaped geometries and is benchmarked against analytical and boundary element method solutions for circular cavities. Simulated compression and tension tests are used to calibrate model parameters with bulk rock properties for the materials of the field site. The simulations show that cavity and sinkhole evolution is controlled by the material strength of both the overburden and the cavity host material, the depth and relative speed of the cavity growth, and the stress pattern that develops in the subsurface. Major findings are: (1) A progressively deepening differential subrosion with variable growth speed yields a more fragmented stress pattern with stress interaction between the cavities. It favours multiple sinkhole collapses and nesting within large-scale depressions. (2) Low-strength materials do not support large cavities in the material removal zone, and subsidence is mainly characterised by gradual sagging into the material removal zone with synclinal bending. (3) High-strength materials support large cavity formation, leading to sinkhole formation by sudden collapse of the overburden. (4) Large-scale depression formation happens either by coalescence of collapsing holes, block-wise brittle failure, or gradual sagging and lateral widening.
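The classical analytical benchmark for a circular cavity is the Kirsch solution; a sketch of the hoop stress on the cavity wall under uniaxial far-field loading (a generic elasticity result used here for illustration, not the thesis's PFC2D model itself):

```python
import numpy as np

def kirsch_hoop_stress(sigma_far, theta):
    """Hoop stress on the wall of a circular cavity in an infinite elastic
    plate under uniaxial far-field stress sigma_far; theta is measured from
    the loading axis (Kirsch solution, evaluated at the cavity radius)."""
    return sigma_far * (1.0 - 2.0 * np.cos(2.0 * theta))

theta = np.linspace(0.0, np.pi, 7)
print(kirsch_hoop_stress(1.0, theta))
# Ranges from -1 (tension at theta = 0) to 3 (stress concentration at theta = 90 deg),
# the kind of closed-form values a distinct element model can be benchmarked against.
```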
The distinct element based approach is compared to results from remote sensing and geophysics at the field site. The numerical simulation outcomes are generally in good agreement with derived morphometrics, documented surface and subsurface structures as well as seismic velocities. Complementary findings on the subrosion process are provided from electric and seismic measurements in the area.
Based on the novel combination of methods in this thesis, a generic model of karst landform evolution with focus on sinkhole and depression formation is developed. A deepening subrosion system related to preferential flow paths evolves and creates void spaces and subsurface conduits. This subsequently leads to hazardous subsidence, and the formation of sinkholes within large-scale depressions. Finally, a monitoring system for shallow natural hazard phenomena consisting of geodetic and geophysical observations is proposed for similarly affected areas.
We measure valence-to-core x-ray emission spectra of compressed crystalline GeO₂ up to 56 GPa and of amorphous GeO₂ up to 100 GPa. In a novel approach, we extract the Ge coordination number and mean Ge-O distances from the emission energy and the intensity of the Kβ'' emission line. The spectra of high-pressure polymorphs are calculated using the Bethe-Salpeter equation. Trends observed in the experimental and calculated spectra are found to match only when utilizing an octahedral model. The results reveal persistent octahedral Ge coordination with increasing distortion, similar to the compaction mechanism in the sequence of octahedrally coordinated crystalline GeO₂ high-pressure polymorphs.
Permafrost warming has the potential to amplify global climate change, because when frozen sediments thaw it unlocks soil organic carbon. Yet to date, no globally consistent assessment of permafrost temperature change has been compiled. Here we use a global data set of permafrost temperature time series from the Global Terrestrial Network for Permafrost to evaluate temperature change across permafrost regions for the period since the International Polar Year (2007–2009). During the reference decade between 2007 and 2016, ground temperature near the depth of zero annual amplitude in the continuous permafrost zone increased by 0.39 ± 0.15 °C. Over the same period, discontinuous permafrost warmed by 0.20 ± 0.10 °C. Permafrost in mountains warmed by 0.19 ± 0.05 °C and in Antarctica by 0.37 ± 0.10 °C. Globally, permafrost temperature increased by 0.29 ± 0.12 °C. The observed trend follows the Arctic amplification of air temperature increase in the Northern Hemisphere. In the discontinuous zone, however, ground warming occurred due to increased snow thickness while air temperature remained statistically unchanged.
Peer cultural socialisation
(2019)
This study investigated how peers can contribute to cultural minority students’ cultural identity, life satisfaction, and school values (school importance, utility, and intrinsic values) by talking about cultural values, beliefs, and behaviours associated with heritage and mainstream culture (peer cultural socialisation). We further distinguished between heritage and mainstream identity as two separate dimensions of cultural identity. Analyses were based on self-reports of 662 students of the first, second, and third migrant generation in Germany (Mean age = 14.75 years, 51% female). Path analyses revealed that talking about heritage culture with friends was positively related to heritage identity. Talking about mainstream culture with friends was negatively associated with heritage identity, but positively with mainstream identity as well as school values. Both dimensions of cultural identity related to higher life satisfaction and more positive school values. As expected, heritage and mainstream identity mediated the link between peer cultural socialisation and adjustment outcomes. Findings highlight the potential of peers as socialisation agents to help promote cultural belonging as well as positive adjustment of cultural minority youth in the school context.
Pedagogy of integrity
(2019)
The master’s thesis “Pedagogy of Integrity: an Analysis of the Conceptualization and Implementation of the MA Program Anglophone Modernities in Literature and Culture” deals with colonial patterns in higher education practices. It provides a theoretical framework for the decolonization of academic teaching-learning practices on the micro- and meso-didactic levels and suggests concrete solutions for decolonized educational practices, especially for degree programs whose content focuses on post-colonial issues. In addition, through an exemplary analysis of the conceptualization and implementation of the MA Program Anglophone Modernities in Literature and Culture, the work explores patterns of colonial heritage as well as the will to decolonise them. The main thesis claims that (higher) education should be liberated from colonial patterns so that real participation of all students in collective knowledge production becomes possible.
In the theoretical elaborations, different concepts of critical and radical pedagogy, e.g. those of Paulo Freire and bell hooks, are combined with concepts of modalities of adult learning (e.g. transformative learning) and with approaches that seek to join learning and social justice (e.g. Social Justice Learning); these are systematised and explored for their substance and their potential to contribute to a criteria catalogue for decolonised educational practices. Attention is also paid to higher education research results which reveal that students who belong to underrepresented groups at university (non-traditional students) in their societies of origin face more difficulties and discrimination as international students at Western universities than ‘traditional’ international students do. Based on the theoretical elaborations, the work claims that:
(1) the homogeneity-preserving dynamics found in Western colleges are an inheritance of colonial times and mindsets, which continue to function in education and to multiply social inequality in the context of internationalization, migration, and participation;
(2) all higher education programs, but especially those dealing explicitly with inequality phenomena, social and cultural diversity, power relations and issues of domination, as well as with postcolonial criticism, should establish premises of equity and provide de facto equal opportunities for participation through the embodiment of social justice, as a way to remain credible;
(3) the decolonization of the educational space can be enabled through appropriate didactic action both on the meso- (institution) and micro-didactic (teaching-learning arrangements) agency levels, given sufficient will and willingness of the responsible professionals.
By examining representative documents published by the MA Program Anglophone Modernities in Literature and Culture, using the ‘close reading’ methodology, as well as through an exemplary analysis of the concept of one of the program’s teaching-learning events and a student survey, the work seeks to examine to what extent the master’s degree program represents a space of decolonised higher education. The results of the analysis indicate the need for a stronger normative value-positioning of the study program, while many practices that show a commitment to participation, social justice, and diversity were identified.
In the last chapter, the results of the theoretical elaboration and of the program’s analysis are synthesized into the concept of an integrity-based pedagogy, called Pedagogy of Integrity, and suggestions for the teaching practice in the study program are formulated, which are meant to help overcome the discrepancy between will and practice on the way towards a decolonised educational space.
In animals and humans, behavior can be influenced by irrelevant stimuli, a phenomenon called Pavlovian-to-instrumental transfer (PIT). In subjects with substance use disorder, PIT is even enhanced, with functional activation in the nucleus accumbens (NAcc) and amygdala. Having observed enhanced behavioral and neural PIT effects in alcohol-dependent subjects, we here aimed to determine whether behavioral PIT is enhanced in young men with high-risk compared to low-risk drinking and subsequently related it to functional activation in an a-priori region of interest encompassing the NAcc and amygdala, and to polygenic risk for alcohol consumption. A representative sample of 18-year-old men (n = 1937) was contacted: 445 were screened and 209 assessed, resulting in 191 valid behavioral, 139 imaging, and 157 genetic datasets. None of the subjects fulfilled the criteria for alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders-IV-Text Revision (DSM-IV-TR). We measured how instrumental responding for rewards was influenced by background Pavlovian conditioned stimuli predicting action-independent rewards and losses. Behavioral PIT was enhanced in high- compared to low-risk drinkers (b = 0.09, SE = 0.03, z = 2.7, p < 0.009). Across all subjects, we observed a PIT-related neural blood oxygen level-dependent (BOLD) signal in the right amygdala (t = 3.25, p(SVC) = 0.04, x = 26, y = -6, z = -12), but not in the NAcc. The strength of the behavioral PIT effect was positively correlated with polygenic risk for alcohol consumption (r(s) = 0.17, p = 0.032). We conclude that behavioral PIT and polygenic risk for alcohol consumption might be a biomarker for a subclinical phenotype of risky alcohol consumption, even if no drug-related stimulus is present. The association between behavioral PIT effects and the amygdala might point to habitual processes related to our PIT task. In non-dependent young social drinkers, the amygdala rather than the NAcc is activated during PIT; a possibly different involvement in association with the disease trajectory should be investigated in future studies.
Objective: We aimed to characterize patients after an acute cardiac event regarding their negative expectations around returning to work and the impact on work capacity upon discharge from cardiac rehabilitation (CR).
Methods: We analyzed routine data of 884 patients (52±7 years, 76% men) who attended 3 weeks of inpatient CR after an acute coronary syndrome (ACS) or cardiac surgery between October 2013 and March 2015. The primary outcome was their status determining their capacity to work (fit vs unfit) at discharge from CR. Further, sociodemographic data (eg, age, sex, and education level), diagnoses, functional data (eg, exercise stress test and 6-min walking test [6MWT]), the Hospital Anxiety and Depression Scale (HADS) and self-assessment of the occupational prognosis (negative expectations and/or unemployment, Würzburger screening) at admission to CR were considered.
Results: A negative occupational prognosis was detected in 384 patients (43%). Of these, 368 (96%) did not expect to return to work after CR, and/or 29% (n=113) had been unemployed before CR. Affected patients showed a reduced exercise capacity (bicycle stress test: 100 W vs 118 W, P<0.01; 6MWT: 380 m vs 421 m, P<0.01) and were more likely to receive a depression diagnosis (12% vs 3%, P<0.01), as well as higher scores on the HADS. At discharge from CR, 21% of this group (n=81) were fit for work (vs 35% of patients with a normal occupational prognosis, n=175, P<0.01). Sick leave before the cardiac event (OR 0.4, 95% CI 0.2–0.6, P<0.01), negative occupational expectations (OR 0.4, 95% CI 0.3–0.7, P<0.01), and depression (OR 0.3, 95% CI 0.1–0.8, P=0.01) reduced the likelihood of achieving work capacity upon discharge. In contrast, higher exercise capacity was positively associated with work capacity.
Conclusion: Patients with a negative occupational prognosis often revealed a reduced physical performance and suffered from a high psychosocial burden. In addition, patients’ occupational expectations were a predictor of work capacity at discharge from CR. Affected patients should be identified at admission to allow for targeted psychosocial care.
Background
Postoperative delirium is a common disorder in older adults that is associated with higher morbidity and mortality, prolonged cognitive impairment, development of dementia, higher institutionalization rates, and rising healthcare costs. The probability of delirium after surgery increases with patients’ age, with pre-existing cognitive impairment, and with comorbidities, and its diagnosis and treatment depend on the medical staff’s knowledge of diagnostic criteria, risk factors, and treatment options. In this study, we will investigate whether a cross-sectoral and multimodal intervention for preventing delirium can reduce the prevalence of delirium and postoperative cognitive decline (POCD) in patients older than 70 years undergoing elective surgery. Additionally, we will analyze whether the intervention is cost-effective.
Methods
The study will be conducted at five medical centers (with two or three surgical departments each) in the southwest of Germany. The study employs a stepped-wedge design with cluster randomization of the medical centers. Measurements are performed at six consecutive time points: preadmission, preoperative, and postoperative, with daily delirium screening up to day 7 and POCD evaluations at 2, 6, and 12 months after surgery. The recruitment goal is to enroll 1500 patients older than 70 years undergoing elective operative procedures (cardiac, thoracic, vascular, proximal big joints and spine, genitourinary, gastrointestinal, and general elective surgery procedures).
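For illustration, a stepped-wedge design with five clusters can be written as a cluster-by-period indicator matrix in which each randomized cluster crosses from control to intervention at a successive step (a generic sketch, not the trial's actual randomization list):

```python
import numpy as np

rng = np.random.default_rng(7)
centers = np.array(["A", "B", "C", "D", "E"])   # hypothetical center labels
rng.shuffle(centers)                             # randomize the crossover order

n, periods = len(centers), len(centers) + 1
# schedule[i, j] = 1 once cluster i has switched to the intervention.
schedule = (np.arange(periods) > np.arange(n)[:, None]).astype(int)
for center, row in zip(centers, schedule):
    print(center, row)   # every cluster starts in control and ends in intervention
```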
Discussion
Results of the trial should form the basis of future standards for preventing delirium and POCD in surgical wards. Key aims are the improvement of patient safety and quality of life, as well as the reduction of the long-term risk of conversion to dementia. Furthermore, from an economic perspective, we expect benefits and decreased costs for hospitals, patients, and healthcare insurances.
Trial registration
German Clinical Trials Register, DRKS00013311. Registered on 10 November 2017.
Introduction: Many semiarid regions around the world are presently experiencing significant changes in both climatic conditions and vegetation. These include a disturbed coexistence of grasses and bushes, also known as bush encroachment, and altered precipitation patterns with larger rain events. Fewer, more intense precipitation events might promote groundwater recharge but, depending on the structure of the vegetation, might also encourage further woody encroachment.
Materials and Methods: In this study, we investigated how patterns and sources of water uptake of Acacia mellifera (blackthorn), an important encroaching woody plant in southern African savannas, are associated with the intensity of rain events and the size of individual shrubs. The study was conducted at a commercial cattle farm in the semiarid Kalahari in Namibia (MAP 250 mm/a). We used soil moisture dynamics at different depths and natural stable isotopes as markers of water sources. Xylem water of fifteen differently sized individuals was extracted during eight rain events using a Scholander pressure bomb.
Results and Discussion: Results suggest that the main rooting activity zone of A. mellifera lies at 50 and 75 cm soil depth, with considerable water uptake also occurring from 10 and 25 cm. Any apparent uptake pattern seems to be driven by water availability, not time in the season. Bushes prefer the deeper soil layers after heavier rain events, providing some evidence for the classical Walter’s two-layer hypothesis. However, rain events up to a threshold of 6 mm/day cause shallower depths of use and suggest several phases of intense competition with perennial grasses. The temporal uptake pattern does not depend on shrub size, suggesting a fast upward water flow inside the plant. δ2H and δ18O values in xylem water indicate that larger shrubs rely less on upper and very deep soil water than smaller shrubs. This supports the hypothesis that, in environments where soil moisture is highly variable in the upper soil layers, the early investment in a deep tap-root to exploit deeper, more reliable water sources could reduce the probability of mortality during the establishment phase. Nevertheless, independent of size and time in the season, the bushes do not compete with potential groundwater recharge. In a savanna encroached by A. mellifera, groundwater will most likely be affected only indirectly.
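A common way to turn such xylem isotope values into source-water proportions is a two-end-member mixing model; a minimal sketch with illustrative numbers (not the study's measured values):

```python
def shallow_fraction(d_xylem, d_shallow, d_deep):
    """Two-end-member mixing: d_xylem = f * d_shallow + (1 - f) * d_deep,
    solved for f, the fraction of xylem water drawn from the shallow source."""
    return (d_xylem - d_deep) / (d_shallow - d_deep)

# Hypothetical delta-18O values in per mil for shallow soil, deep soil, and xylem water.
print(shallow_fraction(d_xylem=-4.5, d_shallow=-2.0, d_deep=-7.0))  # -> 0.5
```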
PaRDeS, the journal of the German Association for Jewish Studies, aims at exploring the fruitful and multifarious cultures of Judaism as well as their relations to their environment within diverse areas of research. In addition, the journal promotes Jewish Studies within academic discourse and reflects on its historic and social responsibilities.
Speech and action sequences are continuous streams of information that can be segmented into sub-units. In both domains, this segmentation can be facilitated by perceptual cues contained within the information stream. In speech, prosodic cues (e.g., a pause, pre-boundary lengthening, and pitch rise) mark boundaries between words and phrases, while boundaries between actions of an action sequence can be marked by kinematic cues (e.g., a pause, pre-boundary deceleration). The processing of prosodic boundary cues evokes an event-related potential (ERP) component known as the Closure Positive Shift (CPS), and it is possible that the CPS reflects domain-general cognitive processes involved in segmentation, given that the CPS is also evoked by boundaries between subunits of non-speech auditory stimuli. This study further probed the domain-generality of the CPS and its underlying processes by investigating electrophysiological correlates of the processing of boundary cues in sequences of spoken verbs (auditory stimuli; Experiment 1; N = 23 adults) and actions (visual stimuli; Experiment 2; N = 23 adults). The EEG data from both experiments revealed a CPS-like, broadly distributed positivity during the 250 ms prior to the onset of the post-boundary word or action. This indicates similar electrophysiological correlates of boundary processing across domains and suggests that the cognitive processes underlying speech and action segmentation might also be shared.
Interactions and feedbacks between tectonics, climate, and upper plate architecture control basin geometry, relief, and depositional systems. The Andes are part of a long-lived continental margin characterized by multiple tectonic cycles which have strongly modified the Andean upper plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and in tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt, and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system and the Sierras Pampeanas, major along-strike changes in upper plate architecture, mountain belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units, which includes apatite fission track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled up the preexisting Paleogene topography. The basement blocks in lower-relief positions were reheated; the associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the basement blocks that began to be exposed caused relief to increase, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
The Himalayas are a region that strongly depends on meltwater resources but is also frequently exposed to hazards from their change. This mountain belt hosts the highest mountain peaks on Earth, has the largest reserve of ice outside the polar regions, and has been home to a rapidly growing population in recent decades. One source of hazard has attracted particular scientific attention in the past two decades: glacial lake outburst floods (GLOFs), which occur rarely but mostly with fatal and catastrophic consequences for downstream communities and infrastructure. Such GLOFs can suddenly release several million cubic meters of water from naturally impounded meltwater lakes. Glacial lakes have grown in number and size owing to ongoing glacial mass losses in the Himalayas. Theory holds that enhanced meltwater production may increase GLOF frequency, but this notion has never been tested so far. The key challenge in testing it is the high altitude (>4000 m) at which these lakes occur, making field work impractical. Moreover, flood waves can attenuate rapidly in mountain channels downstream, so that many GLOFs have likely gone unnoticed in past decades. Our knowledge of GLOFs is hence likely biased towards larger, destructive cases, which hampers a detailed quantification of their frequency and their response to atmospheric warming. Robustly quantifying the magnitude and frequency of GLOFs is essential for risk assessment and management along mountain rivers, not least to implement their return periods in building design codes.
Motivated by this limited knowledge of GLOF frequency and hazard, I developed an algorithm that efficiently detects GLOFs from satellite images. In essence, this algorithm classifies land cover in 30 years (~1988–2017) of continuously recorded Landsat images over the Himalayas and calculates likelihoods for rapidly shrinking water bodies in the stack of land-cover images. I visually assessed the sites detected in this way for sediment fans in the river channel downstream, a second key diagnostic of GLOFs. Rigorous tests and validation with known cases from roughly 10% of the Himalayas suggested that this algorithm is robust against frequent image noise and hence capable of identifying previously unknown GLOFs. Extending the search to the entire Himalayan mountain range revealed some 22 newly detected GLOFs. I thus more than doubled the existing GLOF count from 16 previously known cases since 1988, and found a dominant cluster of GLOFs in the Central and Eastern Himalayas (Bhutan and Eastern Nepal), compared to the more rarely affected ranges in the north. Yet the total of 38 GLOFs showed no change in annual frequency, so that the activity of GLOFs per unit glacial lake area has decreased in the past 30 years. I discussed possible drivers for this finding, but left a further attribution to distinct GLOF-triggering mechanisms open to future research.
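The gist of this detection step can be illustrated with a short Python sketch. It assumes, hypothetically, that the land-cover classification has already been reduced to a time-ordered stack of binary water masks; all names (water_masks, shrinkage_likelihood, window) are illustrative, and the sketch is only a toy analogue of the likelihood calculation described above, not the thesis' algorithm:

```python
import numpy as np

def shrinkage_likelihood(water_masks, window=3):
    """water_masks: (T, H, W) stack of binary water masks, time-ordered.
    Returns an (H, W) score that is high where water disappears abruptly,
    i.e. where a lake may have drained in a GLOF-like event."""
    t, h, w = water_masks.shape
    scores = np.zeros((h, w))
    for i in range(window, t - window):
        before = water_masks[i - window:i].mean(axis=0)  # water frequency before step i
        after = water_masks[i:i + window].mean(axis=0)   # water frequency after step i
        loss = np.clip(before - after, 0.0, None)        # count only water loss
        scores = np.maximum(scores, loss)                # keep the sharpest drop
    return scores
```

Pixels with high scores would then be the candidate sites to inspect visually for downstream sediment fans.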
This updated GLOF frequency was the key input for assessing GLOF hazard for the entire Himalayan mountain belt and several subregions. I used standard definitions from flood hydrology, describing hazard as the annual exceedance probability of a given flood peak discharge [m3 s-1] or larger at the breach location. I coupled the empirical frequency of GLOFs per region to simulations of physically plausible peak discharges from all ~5,000 existing lakes in the Himalayas. Using an extreme-value model, I could hence calculate flood return periods. I found that the contemporary 100-year GLOF discharge (the flood level that is reached or exceeded on average once in 100 years) is 20,600+2,200/–2,300 m3 s-1 for the entire Himalayas. Given the spatial and temporal distribution of historic GLOFs, contemporary GLOF hazard is highest in the Eastern Himalayas and lower in regions where GLOFs are rarer. I also calculated GLOF hazard for some 9,500 overdeepenings, which could become exposed and fill with water once all Himalayan glaciers have eventually melted. Assuming that the current GLOF rate remains unchanged, the 100-year GLOF discharge could double (41,700+5,500/–4,700 m3 s-1), while the regional GLOF hazard may increase most in the Karakoram.
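The extreme-value step can be sketched as follows, under the simplifying (and hypothetical) assumption that the regional GLOF record has already been turned into a series of annual maximum peak discharges; the thesis instead couples empirical regional GLOF frequencies to peak discharges simulated for all lakes:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maximum GLOF peak discharges [m3/s] for one region
annual_maxima = np.array([1200., 3400., 800., 15000., 2100., 9800.,
                          4300., 700., 22000., 5600., 1800., 12000.])

# Fit a generalized extreme value (GEV) distribution to the annual maxima
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is exceeded with probability 1/T per year
q100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"100-year peak discharge: {q100:.0f} m3/s")
```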
To conclude, these three stages, from GLOF detection to analysing GLOF frequency to estimating regional GLOF hazard, provide a framework for modern GLOF hazard assessment. Given the rapidly growing population, infrastructure, and hydropower projects in the Himalayas, this thesis assists in quantifying the purely climate-driven contribution to hazard and risk from GLOFs.
Aim: The aim of the study was to identify common orthopedic sports injury profiles in adolescent elite athletes with respect to age, sex, and anthropometrics.
Methods: A retrospective data analysis of 718 orthopedic presentations among 381 adolescent elite athletes from 16 different sports to a sports medical department was performed. Recorded data from history and clinical examination included the area, cause, and structure of acute and overuse injuries. Injury events were analyzed in the whole cohort and stratified by age (11–14/15–17 years) and sex. Group differences were tested by chi-squared tests. Logistic regression analysis was applied to examine the influence of the factors age, sex, and body mass index (BMI) on the outcome variables area and structure (α = 0.05).
Results: Higher proportions of injury events were reported for females (60%) and athletes of the older age group (66%) than for males and younger athletes. The most frequently injured area was the lower extremity (47%), followed by the spine (30.5%) and the upper extremity (12.5%). Acute injuries were mainly located at the lower extremity (74.5%), while overuse injuries were predominantly observed at the lower extremity (41%) as well as the spine (36.5%). Joints (34%), muscles (22%), and tendons (21.5%) were found to be the most often affected structures. The injured structures differed between the age groups (p = 0.022), with the older age group presenting three times more frequently with ligament pathologies (5.5% vs. 2%) and less frequently with bony problems (11% vs. 20.5%) than athletes of the younger age group. The injured area differed between the sexes (p = 0.005), with males having fewer spine injury events (25.5% vs. 34%) but more upper extremity injuries (18% vs. 9%) than females. Regression analysis showed a statistically significant influence of BMI (p = 0.002) and age (p = 0.015) on structure, whereas area was significantly influenced by sex (p = 0.005).
Conclusion: Soft-tissue overuse injuries are the most common reason for orthopedic presentations of adolescent elite athletes. The lower extremity and the spine are affected most often, and sex- and age-specific effects on the affected area and structure must be considered. Therefore, prevention strategies addressing these injury profiles should already be implemented in early adolescence, taking age, sex, and injury entity into account.
Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1)
(2019)
Quantitative precipitation nowcasting (QPN) has become an essential technique in various application contexts, such as early warning or urban sewage control. A common heuristic prediction approach is to track the motion of precipitation features from a sequence of weather radar images and then to displace the precipitation field to the imminent future (minutes to hours) based on that motion, assuming that the intensity of the features remains constant (“Lagrangian persistence”). In that context, “optical flow” has become one of the most popular tracking techniques. Yet the present landscape of computational QPN models still struggles with producing open software implementations. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. Our software library (“rainymotion”) for precipitation nowcasting is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion, Ayzel et al., 2019). That way, the library may serve as a tool for providing fast, free, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing – a benchmark that is far more advanced than the conventional benchmark of Eulerian persistence commonly used in QPN verification experiments.
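To make the track-then-displace principle concrete, here is a minimal sketch using OpenCV's Farneback optical flow and a backward semi-Lagrangian warp. It is not the rainymotion implementation; the frame format (2-D uint8 radar composites) and all names are assumptions:

```python
import cv2
import numpy as np

def nowcast(prev_frame, curr_frame, lead_steps=6):
    """Lagrangian persistence sketch: estimate motion between two
    consecutive radar frames (2-D uint8 arrays), then advect the most
    recent field along that motion for several lead times."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_frame.shape
    x, y = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    field = curr_frame.astype(np.float32)
    forecasts = []
    for step in range(1, lead_steps + 1):
        # Backward scheme: sample each pixel where its rain "came from",
        # assuming the motion field stays constant (Lagrangian persistence).
        map_x = x - flow[..., 0] * step
        map_y = y - flow[..., 1] * step
        forecasts.append(cv2.remap(field, map_x, map_y, cv2.INTER_LINEAR))
    return forecasts
```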
OpenForecast
(2019)
The development and deployment of new operational runoff forecasting systems are a strong focus of the scientific community due to the crucial importance of reliable and timely runoff predictions for early warnings of floods and flash floods for local businesses and communities. OpenForecast, the first operational runoff forecasting system in Russia open for public use, is presented in this study. We developed OpenForecast based only on open-source software and data: the GR4J hydrological model, the ERA-Interim meteorological reanalysis, and ICON deterministic short-range meteorological forecasts. Daily forecasts were generated for two basins in the European part of Russia. Simulation results showed a limited efficiency in reproducing the spring flood of 2019. Although the simulations managed to capture the timing of flood peaks, they failed to estimate flood volume. However, the further implementation of a parsimonious data assimilation technique significantly alleviated simulation errors. The revealed limitations of the proposed operational runoff forecasting system provide a foundation for outlining its further development and improvement.
On uninterpretable features
(2019)
Predation drives coexistence, evolution and population dynamics of species in food webs, and has strong impacts on related ecosystem functions (e.g. primary production). The effect of predation on these processes largely depends on the trade-offs between functional traits in the predator and prey community. Trade-offs between defence against predation and competitive ability, for example, allow for prey speciation and predator-mediated coexistence of prey species with different strategies (defended or competitive), which may stabilize the overall food web dynamics. While the importance of such trade-offs for coexistence is widely known, we lack an understanding of, and empirical evidence for, how the variety of differently shaped trade-offs at multiple trophic levels affects biodiversity, trait adaptation and biomass dynamics in food webs. Such mechanistic understanding is crucial for predictions and management decisions that aim to maintain biodiversity and the capability of communities to adapt to environmental change, ensuring their persistence.
In this dissertation, after a general introduction to predator-prey interactions and trade-offs, I first focus on trade-offs in the prey between qualitatively different types of defence (e.g. camouflage or escape behaviour) and their costs. Using a simple predator-prey model, I show that these different types lead to different patterns of predator-mediated coexistence and population dynamics. In a second step, I elaborate on quantitative aspects of trade-offs and demonstrate that the shape of the trade-off curve in combination with trait-fitness relationships strongly affects competition among different prey types: either specialized species with extreme trait combinations (undefended or completely defended) coexist, or a species with an intermediate defence level dominates. The developed theory on trade-off shapes and coexistence is kept general, allowing for applications beyond defence-competitiveness trade-offs. Thirdly, I tested the theory on trade-off shapes on a long-term field data set of phytoplankton from Lake Constance. The measured concave trade-off between defence and growth governs seasonal trait changes of phytoplankton in response to an altering grazing pressure by zooplankton, and affects the maintenance of trait variation in the community. In a fourth step, I analyse the interplay of different trade-offs at multiple trophic levels with plankton data from Lake Constance and a corresponding tritrophic food web model. The results show that the trait and biomass dynamics of the three trophic levels are interrelated in a trophic biomass-trait cascade, leading to unintuitive patterns of trait changes that are reversed in comparison to predictions from bitrophic systems. Finally, in the general discussion, I extract main ideas on trade-offs in multitrophic systems, develop a graphical theory of trade-off-based coexistence, discuss the interplay of intra- and interspecific trade-offs, and end with a management-oriented view on the results of the dissertation, describing how food webs may respond to future global changes, given their trade-offs.
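The kind of "simple predator-prey model" referred to above can be sketched as follows. The equations, parameter values, and names are hypothetical stand-ins chosen only to illustrate a defence-growth trade-off (the defended prey grows more slowly but is grazed less), not the dissertation's actual model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r1=1.0, r2=0.7, K=10.0, a1=1.0, a2=0.2, e=0.3, m=0.2):
    """Two prey types sharing one carrying capacity K, plus a predator.
    Prey 2 pays for its defence (low attack rate a2) with slower growth r2."""
    p1, p2, pred = y
    growth1 = r1 * p1 * (1 - (p1 + p2) / K)  # competitive, undefended prey
    growth2 = r2 * p2 * (1 - (p1 + p2) / K)  # defended prey, growth cost
    graze1 = a1 * p1 * pred                   # heavily grazed
    graze2 = a2 * p2 * pred                   # weakly grazed (defence)
    return [growth1 - graze1,
            growth2 - graze2,
            e * (graze1 + graze2) - m * pred]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0, 0.5], dense_output=True)
print(sol.y[:, -1])  # final biomasses: undefended prey, defended prey, predator
```

Varying the cost parameter r2 relative to the benefit a2 lets one explore, in this toy setting, the shift between coexistence of both strategies and dominance of one.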
Today's perovskite solar cells (PSCs) are limited mainly by their open‐circuit voltage (VOC) due to nonradiative recombination. Therefore, a comprehensive understanding of the relevant recombination pathways is needed. Here, intensity‐dependent measurements of the quasi‐Fermi level splitting (QFLS) and of the VOC on the very same devices, including pin‐type PSCs with efficiencies above 20%, are performed. It is found that the QFLS in the perovskite lies significantly below its radiative limit for all intensities but also that the VOC is generally lower than the QFLS, violating one main assumption of the Shockley‐Queisser theory. This has far‐reaching implications for the applicability of some well‐established techniques, which use the VOC as a measure of the carrier densities in the absorber. By performing drift‐diffusion simulations, the intensity dependence of the QFLS, the QFLS‐VOC offset and the ideality factor are consistently explained by trap‐assisted recombination and energetic misalignment at the interfaces. Additionally, it is found that the saturation of the VOC at high intensities is caused by insufficient contact selectivity while heating effects are of minor importance. It is concluded that the analysis of the VOC does not provide reliable conclusions of the recombination pathways and that the knowledge of the QFLS‐VOC relation is of great importance.
The possibility to manufacture perovskite solar cells (PSCs) at low temperatures paves the way to flexible and lightweight photovoltaic (PV) devices manufactured via high-throughput roll-to-roll processes. In order to achieve higher power conversion efficiencies, it is necessary to approach the radiative limit via suppression of non-radiative recombination losses. Herein, we performed a systematic voltage loss analysis for a typical low-temperature processed, flexible PSC in n-i-p configuration using vacuum-deposited C60 as the electron transport layer (ETL) and two-step hybrid vacuum-solution deposition for the CH3NH3PbI3 perovskite absorber. We identified the ETL/absorber interface as a bottleneck with respect to non-radiative recombination losses: the quasi-Fermi level splitting (QFLS) decreases from ~1.23 eV for the bare absorber, just ~90 meV below the radiative limit, to ~1.10 eV when C60 is used as the ETL. To effectively mitigate these voltage losses, we investigated different interfacial modifications via vacuum-deposited interlayers (BCP, B4PyMPM, 3TPYMB, and LiF). An improvement in QFLS of ~30-40 meV is observed after interlayer deposition and confirmed by comparable improvements in the open-circuit voltage after implementation of these interfacial modifications in flexible PSCs. Further investigations of the absorber/hole transport layer (HTL) interface point out the detrimental role of dopants in the Spiro-OMeTAD film (a widely employed HTL in the community) as recombination centers upon oxidation and light exposure.
The Central Andes host large reserves of base and precious metals. In 2017, the region accounted for an important part of worldwide mining activity. Three principal types of deposits have been identified and studied: 1) porphyry-type deposits extending from central Chile and Argentina to Bolivia and Northern Peru, 2) iron oxide-copper-gold (IOCG) deposits extending from central Peru to central Chile, and 3) epithermal tin polymetallic deposits extending from Southern Peru to Northern Argentina, which compose a large part of the deposits of the Bolivian Tin Belt (BTB). Deposits in the BTB can be divided into two major types: (1) tin-tungsten-zinc pluton-related polymetallic deposits, and (2) tin-silver-lead-zinc epithermal polymetallic vein deposits.
Mina Pirquitas is a tin-silver-lead-zinc epithermal polymetallic vein deposit located in north-west Argentina that used to be one of the most important tin-silver producing mines in the country. It was interpreted to be part of the BTB, and it shares similar mineral associations with southern pluton-related BTB epithermal deposits. Two major mineralization events related to three pulses of magmatic fluids mixed with meteoric water have been identified. The first event can be divided into two stages: 1) stage I-1 with quartz, pyrite, and cassiterite precipitating from fluids between 233 and 370 °C with salinities between 0 and 7.5 wt%, corresponding to a first pulse of fluids, and 2) stage I-2 with sphalerite and tin-silver-lead-antimony sulfosalts precipitating from fluids between 213 and 274 °C with salinities up to 10.6 wt%, corresponding to a new pulse of magmatic fluids in the hydrothermal system. Mineralization event II deposited the richest silver ores at Pirquitas. Event II fluid temperatures and salinities range between 190 and 252 °C and between 0.9 and 4.3 wt%, respectively, corresponding to the waning supply of magmatic fluids. Noble gas isotopic compositions and concentrations in ore-hosted fluid inclusions demonstrate a significant contribution of magmatic fluids to the Pirquitas mineralization, although no intrusive rocks are exposed in the mine area.
Lead and sulfur isotopic measurements on ore minerals show that Pirquitas shares a similar signature with southern pluton-related polymetallic deposits in the BTB. Furthermore, most of the sulfur isotope values of sulfide and sulfosalt minerals from Pirquitas fall within the field for sulfur derived from igneous rocks. This suggests that the main contribution of sulfur to the hydrothermal system at Pirquitas is likely to be magma-derived. The precise age of the deposit is still unknown, but a wolframite date of 2.9 ± 9.1 Ma and local structural observations suggest that the late mineralization event is younger than 12 Ma.
On doubling unconditionals
(2019)
Of Trees and Birds
(2019)
Gisbert Fanselow’s work has been invaluable and inspiring to many researchers working on syntax, morphology, and information structure, both from a theoretical and from an experimental perspective. This volume comprises a collection of articles dedicated to Gisbert on the occasion of his 60th birthday, covering a range of topics from these areas and beyond. The contributions have in common that in a broad sense they have to do with language structures (and thus trees), and that in a more specific sense they have to do with birds. They thus cover two of Gisbert’s major interests in- and outside of the linguistic world (and perhaps even at the interface).
Perovskite solar cells combine high carrier mobilities with long carrier lifetimes and high radiative efficiencies. Despite this, full devices suffer from significant nonradiative recombination losses, limiting their VOC to values well below the Shockley–Queisser limit. Here, recent advances in understanding nonradiative recombination in perovskite solar cells from picoseconds to steady state are presented, with an emphasis on the interfaces between the perovskite absorber and the charge transport layers. Quantification of the quasi-Fermi level splitting in perovskite films with and without attached transport layers makes it possible to identify the origin of nonradiative recombination and to explain the VOC of operational devices. These measurements prove that in state-of-the-art solar cells, nonradiative recombination at the interfaces between the perovskite and the transport layers is more important than processes in the bulk or at grain boundaries. Optical pump-probe techniques give complementary access to the interfacial recombination pathways and provide quantitative information on transfer rates and recombination velocities. Promising optimization strategies are also highlighted, in particular in view of the role of energy level alignment and the importance of surface passivation. Recent record perovskite solar cells with low nonradiative losses are presented where interfacial recombination is effectively overcome, paving the way to the thermodynamic efficiency limit.
Supercapacitors are electrochemical energy storage devices with rapid charge/discharge rates and long cycle life. Their biggest challenge is their inferior energy density compared to other electrochemical energy storage devices such as batteries. As the most widespread type of supercapacitor, electrochemical double-layer capacitors (EDLCs) store energy by electrosorption of electrolyte ions on the surface of charged electrodes. As a more recent development, Na-ion capacitors (NICs) are a promising tactic to tackle the inferior energy density due to their higher-capacity electrodes and larger operating voltage. The charges are stored simultaneously by ion adsorption on the surface of the capacitive-type cathode and via a faradaic process in the battery-type anode. Porous carbon electrodes are of great importance in these devices, but the paramount problems are the lack of facile synthetic routes for high-performance carbons and the lack of fundamental understanding of the energy storage mechanisms. Therefore, the aim of the present dissertation is to develop novel synthetic methods for (nitrogen-doped) porous carbon materials with superior performance, and to develop a deeper understanding of the energy storage mechanisms of EDLCs and NICs.
The first part introduces a novel synthetic method for hierarchical ordered meso-microporous carbon electrode materials for EDLCs. The large number of micropores and the highly ordered mesopores provide abundant sites for charge storage and efficient electrolyte transport, respectively, giving rise to superior EDLC performance in different electrolytes. More importantly, the controversial energy storage mechanism of EDLCs employing ionic liquid (IL) electrolytes is investigated by employing a series of porous model carbons as electrodes. The results not only allow conclusions on the relations between porosity and ion transport dynamics, but also deliver deeper insights into the energy storage mechanism of IL-based EDLCs, which differs from the one usually dominating in solvent-based electrolytes that leads to compression double-layers.
The other part focuses on anodes for NICs, for which a novel synthesis of nitrogen-rich porous carbon electrodes and their sodium storage mechanism are investigated. Free-standing fibrous nitrogen-doped carbon materials are synthesized by electrospinning using the nitrogen-rich monomer hexaazatriphenylene-hexacarbonitrile (C18N12) as the precursor, followed by condensation at high temperature. These fibers provide superior capacity and desirable charge/discharge rates for sodium storage. This work also provides insights into the sodium storage mechanism in nitrogen-doped carbons. Based on this mechanism, further optimization is achieved by designing a composite material composed of nitrogen-rich carbon nanoparticles embedded in a conductive carbon matrix for a better charge/discharge rate. The energy density of the assembled NICs significantly exceeds that of common EDLCs while maintaining high power density and long cycle life.
Humans generate internal models of their environment to predict events in the world. As environments change, our brains adjust to these changes by updating their internal models. Here, we investigated whether and how 9-month-old infants differentially update their models to represent a dynamic environment. Infants observed a predictable sequence of stimuli, which was interrupted by two types of cues. Following the update cue, the pattern was altered; thus, infants were expected to update their predictions for the upcoming stimuli. Because the pattern remained the same after the no-update cue, no subsequent updating was required. Infants showed an amplified negative central (Nc) response when the predictable sequence was interrupted. Late components such as the positive slow wave (PSW) were also evoked in response to unexpected stimuli; however, we found no evidence for a differential response to the informational value of surprising cues at later stages of processing. Infants rather learned that surprising cues always signal a change in the environment that requires updating. Interestingly, infants responded with an amplified neural response to the absence of an expected change, suggesting a top-down modulation of early sensory processing in infants. Our findings corroborate emerging evidence showing that infants build predictive models early in life.
A new micro/mesoporous hybrid clay nanocomposite prepared from kaolinite clay, Carica papaya seeds, and ZnCl2 via calcination in an inert atmosphere is presented. Regardless of the synthesis temperature, the specific surface area of the nanocomposite material is between ≈150 and 300 m2/g. The material contains both micro- and mesopores in roughly equal amounts. X-ray diffraction, infrared spectroscopy, and solid-state nuclear magnetic resonance spectroscopy suggest the formation of several new bonds in the materials upon reaction of the precursors, thus confirming the formation of a new hybrid material. Thermogravimetric analysis/differential thermal analysis and elemental analysis confirm the presence of carbonaceous matter. The new composite is stable up to 900 °C and is an efficient adsorbent for the removal of a water micropollutant, 4-nitrophenol, and a pathogen, E. coli, from an aqueous medium, suggesting applications in water remediation are feasible.
The German start-up subsidy (SUS) program for the unemployed has recently undergone a major make-over, altering its institutional setup, adding an additional layer of selection and leading to ambiguous predictions of the program’s effectiveness. Using propensity score matching (PSM) as our main empirical approach, we provide estimates of long-term effects of the post-reform subsidy on individual employment prospects and labor market earnings up to 40 months after entering the program. Our results suggest large and persistent long-term effects of the subsidy on employment probabilities and net earned income. These effects are larger than what was estimated for the pre-reform program. Extensive sensitivity analyses within the standard PSM framework reveal that the results are robust to different choices regarding the implementation of the weighting procedure and also with respect to deviations from the conditional independence assumption. As a further assessment of the results’ sensitivity, we go beyond the standard selection-on-observables approach and employ an instrumental variable setup using regional variation in the likelihood of receiving treatment. Here, we exploit the fact that the reform increased the discretionary power of local employment agencies in allocating active labor market policy funds, allowing us to obtain a measure of local preferences for SUS as the program of choice. The results based on this approach give rise to similar estimates. Thus, our results indicating that SUS are still an effective active labor market program after the reform do not appear to be driven by “hidden bias”.
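As a rough illustration of the PSM step, the sketch below performs one-to-one nearest-neighbour matching (with replacement) on estimated propensity scores for hypothetical data; the paper's actual implementation, weighting choices, and sensitivity analyses are considerably more involved:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_psm(X, treated, y):
    """X: covariates, treated: 0/1 subsidy receipt, y: outcome
    (e.g. employment). Returns the average treatment effect on the
    treated from 1-NN propensity score matching with replacement."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    # for each treated unit, pick the control with the closest score
    matches = c_idx[np.argmin(np.abs(ps[t_idx, None] - ps[None, c_idx]), axis=1)]
    return float(np.mean(y[t_idx] - y[matches]))
```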
Unfolding the history of one of the oldest human values, the freedom of expression, while defining its limits, is a complicated task. Does freedom stop where hate starts? This very old dilemma is, now more than ever before, revealing new dimensions. Politicians and new laws aim at regulating free expression, while disagreements over such regulation gradually become a source of endless conflict in newly formed multicultural, interconnected, and digitized societies. The example of the Network Enforcement Act is used to understand the idea of restrictive legal practices in Germany, but also to highlight the fact that law is a human construction created in order to regulate communication among individuals. Alternatives to strictly legal practices are summarized to show other dimensions of regulating hate speech without involving top-down approaches. The article proposes the approach of restorative justice as a combination of legal and mediative practices in cases of hate speech. One advantage of the restorative justice approach elaborated in this article is its potential to remedy the inner hate and the pain of both the victim and the perpetrator. Finally, revealing parts of history and new aspects of the 'hate speech' puzzle leads to a questioning of contemporary social structures that possibly generate hate itself.
Due to their ability to capture attention, emotional stimuli tend to benefit from enhanced perceptual processing, which can be helpful when such stimuli are task-relevant but hindering when they are task-irrelevant. Altered emotion-attention interactions have been associated with symptoms of affective disturbances, and emerging research focuses on improving emotion-attention interactions to prevent or treat affective disorders. In line with the Human Affectome Project's emphasis on linguistic components, we also analyzed the language used to describe attention-related aspects of emotion, and highlighted terms related to domains such as conscious awareness, motivational effects of attention, social attention, and emotion regulation. These terms were discussed within a broader review of available evidence regarding the neural correlates of (1) Emotion-Attention Interactions in Perception, (2) Emotion-Attention Interactions in Learning and Memory, (3) Individual Differences in Emotion-Attention Interactions, and (4) Training and Interventions to Optimize Emotion-Attention Interactions. This comprehensive approach enabled an integrative overview of the current knowledge regarding the mechanisms of emotion-attention interactions at multiple levels of analysis, and identification of emerging directions for future investigations.
Sea surface temperature (SST) patterns can, as a surface climate forcing, affect weather and climate at large distances. One example is the El Niño-Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks are typically exhibited at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between SST time series at a range of scales, allowing deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of, at a certain timescale, similarly evolving SSTs and to distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, like the connection between ENSO and the Indian Ocean Dipole.
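Construction of such a network can be sketched with a simplified discrete-wavelet analogue of the wavelet multi-scale correlation used in the study: every pair of SST grid cells is correlated scale by scale, and an edge is drawn where the correlation at a chosen scale exceeds a threshold. All data and names below are hypothetical:

```python
import numpy as np
import pywt

def scale_correlations(series_a, series_b, wavelet="db4", level=4):
    """Correlate two equal-length time series scale by scale via their
    discrete wavelet detail coefficients (index 0 = coarsest detail)."""
    coeffs_a = pywt.wavedec(series_a, wavelet, level=level)
    coeffs_b = pywt.wavedec(series_b, wavelet, level=level)
    return [np.corrcoef(ca, cb)[0, 1]
            for ca, cb in zip(coeffs_a[1:], coeffs_b[1:])]  # skip approximation

def build_network(sst, threshold=0.8, scale=0):
    """sst: (cells, time) array. Returns a boolean adjacency matrix that
    links cells whose correlation at the chosen scale exceeds threshold."""
    n = sst.shape[0]
    adjacency = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            r = scale_correlations(sst[i], sst[j])[scale]
            adjacency[i, j] = adjacency[j, i] = abs(r) >= threshold
    return adjacency
```

In such a network, teleconnections appear as edges linking geographically distant cells, while regions of similarly evolving SSTs form dense local neighbourhoods.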
The development of new and better optimization and approximation methods for Job Shop Scheduling Problems (JSP) uses simulations to compare their performance. The test data required for this have an uncertain influence on the simulation results, because the feasible search space can be changed drastically by small variations of the initial problem model. Methods may benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires that the test data be systematically describable so that problem-adequate data sets can be compiled. This article reviews the test data used for comparing methods in the literature. It also shows how and why differences in test data have to be taken into account. From this, corresponding challenges are derived which the management of test data must face in the context of JSP research.
In an overt visual priming experiment, we investigate the role of orthography in native (L1) and non-native (L2) processing of German morphologically complex words. We compare priming effects for inflected and derived morphologically related prime-target pairs versus otherwise matched, purely orthographically related pairs. The results show morphological priming effects in both the L1 and L2 group, with no significant difference between inflection and derivation. However, L2 speakers, but not L1 speakers, also showed significant priming for orthographically related pairs. Our results support the claim that L2 speakers focus more on surface-level information such as orthography during visual word recognition. This can cause orthographic priming effects in morphologically related prime-target pairs, which may conceal L1-L2 differences in morphological processing.
Over the years, we developed highly selective fluorescent probes for K+ in water, which show K+-induced fluorescence intensity enhancements, lifetime changes, or a ratiometric behavior at two emission wavelengths (cf. Scheme 1, K1-K4). In this paper, we introduce selective fluorescent probes for Na+ in water, which likewise show Na+-induced signal changes that are analyzed by diverse fluorescence techniques. Initially, we synthesized the fluorescent probes 2, 4, 5, 6 and 10 for fluorescence analysis via intensity enhancements at one wavelength, varying the Na+-responsive ionophore unit and the fluorophore moiety to adjust different Kd values for intra- or extracellular Na+ analysis. Thus, we found that 2, 4 and 5 are Na+-selective fluorescent tools that are able to measure physiologically important Na+ levels at wavelengths above 500 nm. Secondly, we developed the fluorescent probes 7 and 8 to analyze precise Na+ levels by fluorescence lifetime changes. Here, only 8 (Kd = 106 mM) is a capable fluorescent tool to measure Na+ levels in blood samples by lifetime changes. Finally, the fluorescent probe 9 was designed to show a Na+-induced ratiometric fluorescence behavior at two emission wavelengths. As desired, 9 (Kd = 78 mM) showed a ratiometric fluorescence response towards Na+ ions and is a suitable tool to measure physiologically relevant Na+ levels via the intensity change at the two emission wavelengths of 404 nm and 492 nm.
Within the framework of precision agriculture, the determination of various soil properties is moving into focus, especially the demand for sensors suitable for in-situ measurements. Energy-dispersive X-ray fluorescence (EDXRF) can be a powerful tool for this purpose. In this study, a large and diverse soil set (n = 598) from 12 different study sites in Germany was analysed with EDXRF. First, a principal component analysis (PCA) was performed to identify possible similarities among the sample set. Clustering was observed within the four texture classes clay, loam, silt and sand, as clay samples contain high, and sandy soils low, iron mass fractions. Furthermore, the potential of uni- and multivariate data evaluation with partial least squares regression (PLSR) was assessed for the accurate determination of nutrients in German agricultural samples using two calibration sample sets. Potassium and iron were chosen for testing the performance of both models. Prediction of these nutrients in the 598 German soil samples with EDXRF was more accurate using PLSR, as confirmed by a better overall averaged deviation; PLSR should therefore be preferred.
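A minimal sketch of such a multivariate PLSR calibration with scikit-learn, on hypothetical spectra and reference values, might look like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: EDXRF spectra (samples x channels) and reference
# potassium mass fractions from standard laboratory analysis
rng = np.random.default_rng(0)
spectra = rng.random((60, 2048))
y = rng.random(60)

# Multivariate calibration: PLSR uses the full spectrum, in contrast to
# a univariate calibration based on a single emission line
pls = PLSRegression(n_components=10)
y_cv = cross_val_predict(pls, spectra, y, cv=10).ravel()
rmse_cv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
print(f"cross-validated RMSE: {rmse_cv:.3f}")
```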
Sphingolipids are a class of lipids that share a sphingoid base backbone. They exert various effects in eukaryotes, ranging from structural roles in plasma membranes to cellular signaling. De novo sphingolipid synthesis takes place in the endoplasmic reticulum (ER), where the condensation of the activated C₁₆ fatty acid palmitoyl-CoA and the amino acid L-serine is catalyzed by serine palmitoyltransferase (SPT). The product, 3-ketosphinganine, is then converted into more complex sphingolipids by additional ER-bound enzymes, resulting in the formation of ceramides. Since sphingolipid homeostasis is crucial to numerous cellular functions, improved assessment of sphingolipid metabolism will be key to better understanding several human diseases. To date, no assay exists that is capable of monitoring de novo sphingolipid synthesis in its entirety. Here, we have established a cell-free assay utilizing rat liver microsomes containing all the enzymes necessary for bottom-up synthesis of ceramides. Following lipid extraction, we were able to track the different intermediates of the sphingolipid metabolism pathway, namely 3-ketosphinganine, sphinganine, dihydroceramide, and ceramide. This was achieved by chromatographic separation of sphingolipid metabolites followed by detection of their accurate mass and characteristic fragmentations through high-resolution mass spectrometry and tandem mass spectrometry. We were able to distinguish, unequivocally, between de novo synthesized sphingolipids and intrinsic species, inevitably present in the microsome preparations, through the addition of stable isotope-labeled palmitate-d₃ and L-serine-d₃. To the best of our knowledge, this is the first demonstration of a method monitoring the entirety of ER-associated sphingolipid biosynthesis. Proof-of-concept data were provided by modulating the levels of supplied cofactors (e.g., NADPH) or adding specific enzyme inhibitors (e.g., fumonisin B₁). The presented microsomal assay may serve as a useful tool for monitoring alterations in sphingolipid de novo synthesis in cells or tissues. Additionally, our methodology may be used for metabolism studies of atypical substrates, naturally occurring or chemically tailored, as well as novel inhibitors of enzymes involved in sphingolipid de novo synthesis.
The growing energy demand of modern economies leads to increased consumption of fossil fuels in the form of coal, oil, and natural gas as the main sources. The combustion of these carbon-based fossil fuels inevitably produces greenhouse gases, especially CO2. Approaches to tackle the CO2 problem are to capture it from combustion sources or directly from air, as well as to avoid CO2 production in energy-consuming sectors (e.g., the refrigeration sector). In the former, relatively low CO2 concentrations and competitive adsorption of other gases often lead to low CO2 capacities and selectivities. In both approaches, the interaction of gas molecules with porous materials plays a key role. Porous carbon materials possess unique properties including electric conductivity, tunable porosity, as well as thermal and chemical stability. Nevertheless, pristine carbon materials offer weak polarity and thus low CO2 affinity. This can be overcome by nitrogen doping, which enhances the affinity of carbon materials towards acidic or polar guest molecules (e.g., CO2, H2O, or NH3). In contrast to heteroatom-free materials, such carbon materials are in most cases "noble", that is, they oxidize other matter rather than being oxidized, due to the very positive working potential of their electrons. The challenging task here is to achieve a homogeneous distribution of significant nitrogen content with similar bonding motifs throughout the carbon framework and a uniform pore size/distribution to maximize host-guest interactions. The aim of this thesis is the development of novel synthesis pathways towards nitrogen-doped nanoporous noble carbon materials with precise design at the molecular level and an understanding of their structure-related performance in energy and environmental applications, namely gas adsorption and electrochemical energy storage.
A template-free synthesis approach towards nitrogen-doped noble microporous carbon materials with high pyrazinic nitrogen content and C2N-type stoichiometry was established via thermal condensation of a hexaazatriphenylene derivative. The materials exhibited a high uptake of guest molecules, such as H2O and CO2 at low concentrations, as well as moderate CO2/N2 selectivities. In the following step, the CO2/N2 selectivity was enhanced towards molecular sieving of CO2 via kinetic size exclusion of N2. Precise control over the degree of condensation, and thus over the atomic construction and porosity of the resulting materials, led to remarkable CO2/N2 selectivities, CO2 capacities, and heats of CO2 adsorption. The ultrahydrophilic nature of the pore walls and the narrow microporosity of these carbon materials served as an ideal basis for the investigation of interface effects with guest molecules more polar than CO2, namely H2O and NH3.
H2O vapor physisorption measurements, as well as NH3 temperature-programmed desorption and thermal response measurements, showed exceptionally high affinities towards H2O vapor and NH3 gas. Another series of nitrogen-doped carbon materials was synthesized by direct condensation of a pyrazine-fused conjugated microporous polymer, and its structure-related performance in electrochemical energy storage, namely as anode material for sodium-ion batteries, was investigated.
All in all, the findings in this thesis exemplify the value of molecularly designed nitrogen-doped carbon materials with remarkable heteroatom content implemented as well-defined structure motives. The simultaneous adjustment of the porosity renders these materials suitable candidates for fundamental studies about the interactions between nitrogen-doped carbon materials and different guest species.
The North China Plain (NCP) is one of the most productive and intensively farmed agricultural regions in China. High doses of mineral nitrogen (N) fertiliser, often combined with flood irrigation, are applied, resulting in N surpluses, groundwater depletion and environmental pollution. The objectives of this thesis were to use the HERMES model to simulate the N cycle in winter wheat (Triticum aestivum L.)–summer maize (Zea mays L.) double crop rotations and to show the performance of the HERMES model, of the new ammonia volatilisation sub-module and of the new nitrification inhibition tool in the NCP. Further objectives were to assess the model's potential to save N and water at plot and county scale, as well as in the short and long term. Additionally, improved management strategies based on a model-based nitrogen fertiliser recommendation (NFR) and adapted irrigation were to be identified.
Results showed that the HERMES model performed well under the growing conditions of the NCP and was able to describe the relevant soil-plant processes concerning N and water during a 2.5-year field experiment. No differences in grain yield between the real-time model-based NFR and the other treatments of the plot-scale experiments in Quzhou County could be found. Simulations with increasing amounts of irrigation resulted in significantly higher N leaching, higher N requirements of the NFR and reduced yields. Thus, conventional flood irrigation as currently practised by the farmers bears great uncertainties, and exact irrigation amounts should be known for future simulation studies. In the best-practice scenario simulation at plot scale, N input and N leaching, but also irrigation water, could be strongly reduced within 2 years. Thus, the model-based NFR in combination with adapted irrigation had the highest potential to reduce nitrate leaching, compared to farmers' practice and mineral N (Nmin)-reduced treatments. The calibrated and validated ammonia volatilisation sub-module of the HERMES model also worked well under the climatic and soil conditions of northern China. Simple ammonia volatilisation approaches gave satisfactory results compared to process-oriented approaches. In the simulation with ammonium sulphate nitrate with nitrification inhibitor (ASNDMPP), ammonia volatilisation was higher than in the simulation without nitrification inhibitor, while the result for nitrate leaching was the opposite. Although nitrification worked well in the model, nitrification-borne nitrous oxide emissions should be considered in the future. Simulated annual long-term (31 years) N losses across Quzhou County in Hebei Province were 296.8 kg N ha−1 under the common farmers' practice treatment and 101.7 kg N ha−1 under an optimised treatment including NFR and automated irrigation (OPTai). Spatial differences in simulated N losses throughout Quzhou County could only be attributed to different N inputs. Compared to farmers' practice, simulations of an optimised treatment could save, on average, more than 260 kg N ha−1 a−1 of fertiliser input and 190 kg N ha−1 a−1 of N losses, as well as around 115.7 mm a−1 of water. These long-term simulation results showed a lower N and water saving potential than short-term simulations and underline the necessity of long-term simulations to overcome the effect of high initial N stocks in the soil.
Additionally, the OPTai treatment worked best on clay loam soil, except for a high simulated denitrification loss, while the simulations using farmers' practice irrigation could not match the actual water needs, resulting in yield declines, especially for winter wheat. Thus, a precise adaptation of management to actual weather conditions and plant growth needs is necessary for future simulations. However, the optimised treatments did not seem to be able to maintain the soil organic matter pools, even with full crop residue input. Extra organic inputs seem to be required to maintain soil quality in the optimised treatments.
HERMES is a relatively simple model, with regard to data input requirements, for simulating the N cycle. It can offer interpretations of management options at plot, county and regional scale for extension and research staff. In combination with other N- and water-saving methods, the model promises to be a useful tool.
Humus forms are a distinctive morphological indicator of soil organic matter decomposition. The spatial distribution of humus forms depends on environmental factors such as topography, climate and vegetation. In montane and subalpine forests, environmental influences show a high spatial heterogeneity, which is reflected by a high spatial variability of humus forms. This study aims at examining spatial patterns of humus forms and their dependence on the spatial scale in a high mountain forest environment (Val di Sole/Val di Rabbi, Trentino, Italian Alps). On the basis of the distributions of environmental covariates across the study area, we described humus forms at the local scale (six sampling sites), slope scale (60 sampling sites) and landscape scale (30 additional sampling sites). The local variability of humus forms was analyzed with regard to the ground cover type. At the slope and landscape scale, spatial patterns of humus forms were modeled applying random forests and ordinary kriging of the model residuals. The results indicate that the occurrence of the humus form classes Mull, Mullmoder, Moder, Amphi and Eroded Moder generally depends on the topographical position. Local-scale patterns are mostly related to micro-topography (local accumulation and erosion sites) and ground cover, whereas slope-scale patterns are mainly connected with slope exposure and elevation. Patterns at the landscape scale show a rather irregular distribution, as spatial models at this scale do not account for local to slope-scale variations of humus forms. Moreover, models at the slope scale perform distinctly better than at the landscape scale. In conclusion, the results of this study highlight that landscape-scale predictions of humus forms should be accompanied by local- and slope-scale studies in order to enhance the general understanding of humus form patterns.
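The combination of "random forests and ordinary kriging of the model residuals" amounts to regression kriging. A minimal sketch under simplifying assumptions (humus forms encoded as a numeric index; hypothetical covariates X, coordinates coords, and observations y), using scikit-learn and PyKrige:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from pykrige.ok import OrdinaryKriging

def regression_kriging(X, coords, y, X_new, coords_new):
    """Predict at new sites as the random-forest trend plus the ordinary
    kriging interpolation of the trend's residuals."""
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X, y)
    residuals = y - rf.predict(X)  # spatial structure the covariates miss
    ok = OrdinaryKriging(coords[:, 0], coords[:, 1], residuals,
                         variogram_model="spherical")
    kriged, _ = ok.execute("points", coords_new[:, 0], coords_new[:, 1])
    return rf.predict(X_new) + kriged
```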
Continuous insight into biological processes has led to the development of large-scale, mechanistic systems biology models of pharmacologically relevant networks. While these models are typically designed to study the impact of diverse stimuli or perturbations on multiple system variables, the focus in pharmacological research is often on a specific input, e.g., the dose of a drug, and a specific output related to the drug effect or response in terms of some surrogate marker.
To study a chosen input-output pair, the complexity of the interactions as well as the size of the models hinders easy access and understanding of the details of the input-output relationship.
The objective of this thesis is the development of a mathematical approach, specifically a model reduction technique, that allows (i) quantifying the importance of the different state variables for a given input-output relationship, and (ii) reducing the dynamics to its essential features, allowing for a physiological interpretation of state variables as well as parameter estimation in the statistical analysis of clinical data. We develop a model reduction technique in a control-theoretic setting by first defining a novel type of time-limited controllability and observability gramians for nonlinear systems. We then show the superiority of these time-limited generalised gramians for nonlinear systems in the context of balanced truncation for a benchmark system from control theory.
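For the linear time-invariant special case, the time-limited gramians reduce to finite-horizon integrals that are straightforward to approximate numerically. The sketch below shows only this LTI special case, not the nonlinear generalisation developed in the thesis, and all matrices are hypothetical examples:

```python
import numpy as np
from scipy.linalg import expm

def time_limited_gramians(A, B, C, T, n_steps=400):
    """Trapezoidal approximation of the time-limited gramians of
    x' = Ax + Bu, y = Cx:
        W_c(T) = int_0^T e^{At} B B^T e^{A^T t} dt
        W_o(T) = int_0^T e^{A^T t} C^T C e^{At} dt"""
    ts = np.linspace(0.0, T, n_steps)
    Wc = np.array([expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts])
    Wo = np.array([expm(A.T * t) @ C.T @ C @ expm(A * t) for t in ts])
    return np.trapz(Wc, ts, axis=0), np.trapz(Wo, ts, axis=0)

# Balanced truncation discards states with small Hankel singular values
A = np.array([[-1.0, 0.5], [0.0, -3.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
Wc, Wo = time_limited_gramians(A, B, C, T=5.0)
hsv = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])
print(hsv)  # states with small values contribute little on [0, T]
```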
The concept of time-limited controllability and observability gramians is subsequently used to introduce a state and time-dependent quantity called the input-response (ir) index that quantifies the importance of state variables for a given input-response relationship at a particular time.
We subsequently link our approach to sensitivity analysis, thus enabling for the first time the use of sensitivity coefficients for state space reduction. The sensitivity-based ir-indices are given as a product of two sensitivity coefficients. This allows not only for a computationally more efficient calculation but also for a clear distinction between the extent to which the input impacts a state variable and the extent to which a state variable impacts the output.
The ir-indices give insight into the coordinated action of specific state variables for a chosen input-response relationship.
Our model reduction technique results in reduced models that still allow for a mechanistic interpretation in terms of the quantities/state variables of the original system, which is a key requirement in the field of systems pharmacology and systems biology and distinguishes the reduced models from so-called empirical drug effect models. The ir-indices are explicitly defined with respect to a reference trajectory and are thereby dependent on the initial state (an important feature of the measure). This is demonstrated for an example from the field of systems pharmacology, showing that the reduced models are very informative in their ability to detect (genetic) deficiencies in certain physiological entities. Comparing our novel model reduction technique to existing techniques shows its superiority.
The novel input-response index as a measure of the importance of state variables provides a powerful tool for understanding the complex dynamics of large-scale systems in the context of a specific drug-response relationship. Furthermore, the indices provide a means for a very efficient model order reduction and, thus, an important step towards translating insight from biological processes incorporated in detailed systems pharmacology models into the population analysis of clinical data.
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases (which manage the storage and manipulation of graph data), updates may cause a given database to violate its integrity constraints, likewise requiring graph repair.
We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: state-based repair restores consistency independently of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction (ST) trees for encoding whether and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
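As a toy illustration of state-based repair (not the AutoGraph-based algorithms of the paper), the following Python sketch enforces a single hypothetical graph condition by a least-changing repair; the condition and labels are invented for the example.

```python
# Toy state-based graph repair: enforce the condition "every node
# labelled 'Task' has an outgoing edge to some node labelled 'Actor'"
# with minimal changes (add one edge, creating an Actor only if needed).
import networkx as nx

def repair(g: nx.DiGraph) -> nx.DiGraph:
    actors = [n for n, d in g.nodes(data=True) if d.get("label") == "Actor"]
    for n, d in list(g.nodes(data=True)):   # list(): we may add nodes
        if d.get("label") != "Task":
            continue
        if any(g.nodes[m].get("label") == "Actor" for m in g.successors(n)):
            continue                         # condition already satisfied
        if not actors:                       # create the required context
            g.add_node("a0", label="Actor")
            actors.append("a0")
        g.add_edge(n, actors[0])             # least change: add one edge
    return g

g = nx.DiGraph()
g.add_node("t1", label="Task")
print(repair(g).edges())  # [('t1', 'a0')]
```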
In the era of social networks, the internet of things and location-based services, many online services produce huge amounts of data carrying valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, pose the challenge of discovering geospatiotemporal knowledge. Meeting this challenge requires efficient methods for clustering and pattern mining in the spatial, temporal and textual domains.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. This initial step is the foundation for tackling the next challenge -- geospatial clustering. The first step here is to design a method for online clustering of georeferenced data; this algorithm can serve as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As a second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
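The following simplified Python sketch conveys the core idea of density-based clustering with a fixed distance and time radius (two points are neighbours only if they are close in both space and time); it is an illustration of the idea, not the thesis's algorithm, and all parameter values are hypothetical.

```python
# Simplified density-based geospatiotemporal clustering: a DBSCAN-like
# expansion where neighbourhoods are bounded by a spatial radius eps_km
# and a temporal radius eps_t simultaneously.
import numpy as np

def neighbours(pts, i, eps_km, eps_t):
    """pts: array of (x_km, y_km, t_hours); simple planar distances."""
    d_space = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
    d_time = np.abs(pts[:, 2] - pts[i, 2])
    return np.where((d_space <= eps_km) & (d_time <= eps_t))[0]

def st_cluster(pts, eps_km=0.5, eps_t=6.0, min_pts=5):
    labels = np.full(len(pts), -1)          # -1 = noise / unvisited
    cluster = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        seeds = list(neighbours(pts, i, eps_km, eps_t))
        if len(seeds) < min_pts:
            continue                        # not dense enough -> noise
        labels[seeds] = cluster             # expand from the core point
        stack = seeds
        while stack:
            j = stack.pop()
            nb = neighbours(pts, j, eps_km, eps_t)
            if len(nb) >= min_pts:          # j is itself a core point
                new = [k for k in nb if labels[k] == -1]
                labels[new] = cluster
                stack.extend(new)
        cluster += 1
    return labels
```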
Each version of the clustering algorithm has its own use case, which we present in the thesis.
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of sequential rule mining. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in geospatiotemporal space. As an applicable use case study, we propose spatiotemporal crime analytics -- the discovery of spatiotemporal patterns of crimes in publicly available crime data.
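A minimal sketch of this transformation, with a simple stand-in for sequential rule mining (counting "A then B" category pairs per grid cell), might look as follows; all field names, grid and bucket sizes are hypothetical.

```python
# Raw records -> textual geospatiotemporal events (grid cell, time
# bucket, keyword), then count consecutive category pairs per cell as a
# simple stand-in for sequential rule mining.
from collections import Counter

def to_event(rec, cell_deg=0.01, bucket_s=24 * 3600):
    cell = (round(rec["lat"] / cell_deg), round(rec["lon"] / cell_deg))
    bucket = rec["timestamp"] // bucket_s   # timestamp in seconds
    return cell, bucket, rec["category"]    # the textual parameter

def sequential_pairs(records):
    """Count 'A then B' category pairs within the same grid cell,
    ordered by time bucket."""
    by_cell = {}
    for rec in records:
        cell, bucket, cat = to_event(rec)
        by_cell.setdefault(cell, []).append((bucket, cat))
    pairs = Counter()
    for events in by_cell.values():
        events.sort()
        for (t1, a), (t2, b) in zip(events, events[1:]):
            if t2 > t1:                     # strictly later bucket
                pairs[(a, b)] += 1
    return pairs.most_common(10)
```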
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we present use case studies on the analysis of georeferenced data in terms of situational and public safety awareness.
The domestication syndrome has resulted in a large loss of genetic variation in crop plants. Because of this genetic loss, the capacity to produce various beneficial secondary (specialized) metabolites that protect against abiotic/biotic stresses has been narrowed in many domesticated crops. Many key regulators or structural genes of secondary metabolic pathways in domesticated as well as wild tomatoes are still largely unknown. In recent studies, metabolic quantitative trait loci (mQTL) analysis using a population of introgression lines (ILs), each containing a single introgression from Solanum pennellii (wild tomato) in the genetic background of domesticated tomato (M82, Solanum lycopersicum), has been used to investigate metabolic regulation and key genes involved in both primary and secondary metabolism. In this thesis, three research projects are presented and discussed: i) understanding the metabolic linkage between branched-chain amino acids (BCAAs) and secondary metabolism using antisense lines of BCAA metabolic genes, ii) investigating novel key genes involved in tomato secondary metabolism and fruit ripening, and iii) mapping drought stress responsive mQTLs in tomato. In the first part, the metabolic linkage between leucine and secondary metabolism is investigated by analyzing antisense lines of four key genes (ketol-acid reductoisomerase, KARI; dihydroxy-acid dehydratase, DHAD; isopropylmalate dehydratase, IPMD; and branched-chain aminotransferase 1, BCAT1) found previously in an mQTL of leucine content. The results indicate that KARI might be a rate-limiting enzyme for iC5 acyl-sucrose synthesis in young leaves but not in red ripe fruits. By integrating these results with previous reports, an inductive metabolic linkage between BCAAs and other secondary metabolic pathways at the level of DHAD transcription in fruit is proposed. In the second part, candidate genes involved in secondary metabolism and fruit ripening in tomato were identified by expression quantitative trait loci (eQTL) analysis. To predict the functions of these candidate genes, functional validation by virus-induced gene silencing and transient overexpression was performed. Results obtained by analyzing T0 overexpression and artificial miRNA lines for some of these candidates confirm their predicted functions, for example involvement in fruit ripening (WD40, Solyc04g005020) and iC5 acyl-sucrose synthesis (P450, Solyc03g111940). In the third part, mapping of drought stress responsive mQTLs was performed using the population of 57 S. pennellii ILs. Evaluation of the genetic architecture revealed by the mQTL analysis identified drought-responsive ILs (11-2, 8-3-1, 10-1-1 and 3-1). The location of well-characterized regulators in these ILs helped to filter potential new key genes involved in drought stress tolerance. Overall, the results suggest that our approaches are viable for narrowing down potential candidates involved in creating interspecific variation in secondary metabolite content and in fruit ripening.
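As a hedged illustration of what an mQTL-style scan can look like (not the thesis's exact pipeline), the following sketch tests each IL against the M82 control with Welch's t-test and a simple Bonferroni correction; the data structures are hypothetical.

```python
# Illustrative mQTL-style scan: for each introgression line, test whether
# a metabolite's level differs from the M82 control. Replicate arrays and
# line names are placeholders.
import numpy as np
from scipy.stats import ttest_ind

def mqtl_scan(il_levels: dict, control: np.ndarray, alpha=0.05):
    """il_levels: {'IL11-2': array of replicate levels, ...};
    control: replicate levels measured in M82."""
    hits = []
    for il, levels in il_levels.items():
        t, p = ttest_ind(levels, control, equal_var=False)  # Welch's test
        if p < alpha / len(il_levels):                      # Bonferroni
            fold = np.mean(levels) / np.mean(control)
            hits.append((il, fold, p))
    return sorted(hits, key=lambda h: h[2])                 # by p-value
```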
There is evidence both for mental number representations along a horizontal mental number line with larger numbers to the right of smaller numbers (for Western cultures) and a physically grounded, vertical representation where “more is up.” Few studies have compared effects in the horizontal and vertical dimension and none so far have combined both dimensions within a single paradigm where numerical magnitude was task-irrelevant and none of the dimensions was primed by a response dimension. We now investigated number representations over both dimensions, building on findings that mental representations of numbers and space co-activate each other. In a Go/No-go experiment, participants were auditorily primed with a relatively small or large number and then visually presented with quasi-randomly distributed distractor symbols and one Arabic target number (in Go trials only). Participants pressed a central button whenever they detected the target number and elsewise refrained from responding. Responses were not more efficient when small numbers were presented to the left and large numbers to the right. However, results indicated that large numbers were associated with upper space more strongly than small numbers. This suggests that in two-dimensional space when no response dimension is given, numbers are conceptually associated with vertical, but not horizontal space.
Membrane adhesion is a fundamental biological process in which membranes are attached to neighboring membranes or surfaces. Membrane adhesion emerges from a complex interplay between the binding of membrane-anchored receptors/ligands and the membrane properties. In this work, we study membrane adhesion mediated by lipid-anchored saccharides using microsecond-long full-atomistic molecular dynamics simulations. Motivated by neutron scattering experiments on membrane adhesion via lipid-anchored saccharides, we investigate the role of LeX, Lac1, and Lac2 saccharides and membrane fluctuations in membrane adhesion.
We study the binding of saccharides in three different systems: for saccharides in water, for saccharides anchored to essentially planar membranes at fixed separations, and for saccharides anchored to apposing fluctuating membranes. Our simulations of two saccharides in water indicate that the saccharides engage in weak interactions to form dimers. We find that the binding occurs in a continuum of bound states instead of a certain number of well-defined bound structures, which we term "diffuse binding".
The binding of saccharides anchored to essentially planar membranes strongly depends on the separation of the membranes, which is fixed in our simulation system. We show that the binding constants for trans-interactions of two lipid-anchored saccharides decrease monotonically with increasing separation. Saccharides anchored to the same membrane leaflet engage in cis-interactions with binding constants comparable to the trans-binding constants at the smallest membrane separations. The interplay of cis- and trans-binding can be investigated in simulation systems with many lipid-anchored saccharides. For Lac2, our simulation results indicate a positive cooperativity of trans- and cis-binding: the trans-binding constant is enhanced by the cis-interactions. For LeX, in contrast, we observe no cooperativity between trans- and cis-binding. In addition, we determine the forces generated by trans-binding of lipid-anchored saccharides in planar membranes from the binding-induced deviations of the lipid anchors. We find that the forces acting on trans-bound saccharides increase with increasing membrane separation, up to values of the order of 10 pN.
The binding of saccharides anchored to the fluctuating membranes results from an interplay between the binding properties of the lipid-anchored saccharides and membrane fluctuations. Our simulations, which have the same average separation of the membranes as obtained from the neutron scattering experiments, yield a binding constant larger than in planar membranes with the same separation. This result demonstrates that membrane fluctuations play an important role at average membrane separations which are seemingly too large for effective binding. We further show that the probability distribution of the local separation can be well approximated by a Gaussian distribution. We calculate the relative membrane roughness and show that our results are in good agreement with the roughness values reported from the neutron scattering experiments.
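The following sketch illustrates the analysis step described above: fitting a Gaussian to the sampled local membrane separations and reading off the roughness as its standard deviation. The input data here are placeholders, not actual simulation output.

```python
# Fit a Gaussian to local membrane separations sampled over the membrane
# patch and over simulation frames; the fitted standard deviation serves
# as the membrane roughness.
import numpy as np
from scipy.stats import norm

def roughness_from_separations(local_sep_nm: np.ndarray):
    """Returns (mean separation, roughness) from local separations."""
    mu, sigma = norm.fit(local_sep_nm)   # maximum-likelihood Gaussian fit
    return mu, sigma

# Example with placeholder data standing in for simulation output
seps = np.random.normal(5.0, 0.8, size=10_000)
mu, xi = roughness_from_separations(seps)
print(f"mean separation {mu:.2f} nm, roughness {xi:.2f} nm")
```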