In this paper, we investigate the continuous version of modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time $T$ is a solution of $\|F(x^{\delta}(T)) - y^{\delta}\| = \tau\,\delta_{+}$ for some $\delta_{+} > \delta$, and an appropriate source condition. We obtain the optimal rate of convergence.
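For orientation, the stopping rule can be written in display form next to the classical (Morozov-type) discrepancy principle it modifies; the constant τ > 1 in the classical rule is the usual convention and is not stated in the abstract above:

```latex
% Modified stopping rule as stated in the abstract vs. the classical discrepancy principle.
\[
\text{modified:}\quad \big\|F\big(x^{\delta}(T)\big) - y^{\delta}\big\| \;=\; \tau\,\delta_{+},
\qquad \delta_{+} > \delta,
\]
\[
\text{classical:}\quad T_{\mathrm{c}} \;=\; \inf\Big\{\, t > 0 \;:\; \big\|F\big(x^{\delta}(t)\big) - y^{\delta}\big\| \le \tau\,\delta \,\Big\},
\qquad \tau > 1 .
\]
```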
Decisions for the conservation of biodiversity and sustainable management of natural resources are typically related to large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires including both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include the necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used and degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.
Conventional energy sources are diminishing and non-renewable, take millions of years to form and cause environmental degradation. In the 21st century, we have to aim at achieving a sustainable, environmentally friendly and cheap energy supply by employing renewable energy technologies together with portable energy storage devices. Lithium-ion batteries can repeatedly generate clean energy from stored materials and reversibly convert electrical into chemical energy. The performance of lithium-ion batteries depends intimately on the properties of their materials. Presently used battery electrodes are expensive to produce, offer limited energy storage capacity and are unsafe in larger dimensions, restricting the diversity of applications, especially in hybrid electric vehicles (HEVs) and electric vehicles (EVs). This thesis presents major progress in the development of LiFePO4 as a cathode material for lithium-ion batteries. Using a simple procedure, a completely novel morphology (mesocrystals of LiFePO4) was synthesized and excellent electrochemical behavior was recorded (nanostructured LiFePO4). The newly developed reactions for the synthesis of LiFePO4 are single-step processes and take place in an autoclave at a significantly lower temperature (200 °C) than the conventional solid-state method (multi-step and up to 800 °C). The use of inexpensive, environmentally benign precursors offers a green manufacturing approach for large-scale production. These newly developed experimental procedures can also be extended to other phospho-olivine materials, such as LiCoPO4 and LiMnPO4. The material with the best electrochemical behavior (nanostructured LiFePO4 with carbon coating) was able to deliver a stable 94% of the theoretical capacity.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes and consequences associated with its passage, in relation to the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear. Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class of models is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class of models is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite-element geodynamic code ASPECT.
The first main finding of this work is to suggest that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate while it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate because flat subduction scrapes the mantle lithosphere, thus weakening the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening caused by the thick sediments covering the shield margin, and due to the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is to suggest that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. Therefore, the deformation is transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; thus the flat-slab acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat and the steeper slab segments in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults of past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from contraction of the crust in the Sierras Pampeanas some 10 and 6 Myr, respectively, before the Juan Fernandez Ridge collision at that latitude.
Background:
Children’s spontaneous focusing on numerosity (SFON) is related to numerical skills. This study aimed to examine (1) the developmental trajectory of SFON and (2) the interrelations between SFON and early numerical skills at pre-school as well as their influence on arithmetical skills at school.
Method:
Overall, 1868 German pre-school children were repeatedly assessed until second grade. Nonverbal intelligence, visual attention, visuospatial working memory, SFON and numerical skills were assessed at age five (M = 63 months, Time 1) and age six (M = 72 months, Time 2), and arithmetic was assessed at second grade (M = 95 months, Time 3).
Results:
SFON increased significantly during pre-school. Path analyses revealed interrelations between SFON and several numerical skills, except number knowledge. Magnitude estimation and basic calculation skills (Time 1 and Time 2), and to a small degree number knowledge (Time 2), contributed directly to arithmetic in second grade. The connection between SFON and arithmetic was fully mediated by magnitude estimation and calculation skills at pre-school.
Conclusion:
Our results indicate that SFON first and foremost influences a deeper understanding of numerical concepts at pre-school and, in contrast to previous findings, affects children's arithmetical development at school only indirectly.
We present a concept for better integration of practical teaching into student teacher education in Computer Science. As an introduction to the workshop, different possible scenarios are discussed on the basis of examples. Afterwards, workshop participants will have the opportunity to discuss the application of the concepts in other settings.
The emergence of information extraction (IE) oriented pattern engines has been observed during the last decade. Most of them rely heavily on finite-state devices. This paper introduces ExPRESS, a new extraction pattern engine whose rules are regular expressions over flat feature structures. The underlying pattern language is a blend of two previously introduced IE-oriented pattern formalisms, namely JAPE, used in the widely known GATE system, and the unification-based XTDL formalism used in SProUT. A brief technical overview of ExPRESS, its pattern language and the pool of its native linguistic components is given. Furthermore, the implementation of the grammar interpreter is also addressed.
Deans at Institutions of Higher Education are seldom recipients of effective or specific professional management training, institutional mentorship, and coaching despite an increasing demand on them to play a more dynamic leadership role in the face of ever-changing local and global challenges. To address this deficiency, the inaugural Malaysian Chapter of the International Deans’ Course (MyIDC) was held in three parts over 2019 and 2020. In this paper, findings related to feedback on the programme are presented and discussed. Responses from the participants from two sets of surveys, and written feedback provided by two IDC international trainers involved in MyIDC were analysed. These reveal potential areas of improvement for the forthcoming MyIDC programme, such as in terms of planning and organisation, duration, content, and delivery. The article explores the lessons learnt from the MyIDC 2019/2020 training programme and discusses the improvements that can be made arising from the feedback received.
Basic psychological needs theory postulates that a social environment that satisfies individuals’ three basic psychological needs of autonomy, competence, and relatedness leads to optimal growth and well-being. On the other hand, the frustration of these needs is associated with ill-being and depressive symptoms, which have foremost been investigated in non-clinical samples; yet, there is a paucity of research on need frustration in clinical samples. Survey data of adult individuals with major depressive disorder (MDD; n = 115; 48.69% female; 38.46 years, SD = 10.46) were compared with those of a non-depressed comparison sample (n = 201; 53.23% female; 30.16 years, SD = 12.81). Need profiles were examined with a linear mixed model (LMM). Individuals with depression reported higher levels of frustration and lower levels of satisfaction in relation to the three basic psychological needs than non-depressed adults. The difference between depressed and non-depressed groups was significantly larger for frustration than for satisfaction regarding the needs for relatedness and competence. LMM correlation parameters confirmed the expected positive correlation between the three needs. This is the first study showing substantial differences in need-based experiences between depressed and non-depressed adults. The results confirm basic assumptions of self-determination theory and have preliminary implications for tailoring therapy for depression.
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on data from a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages, as far as the development from lexical to grammatical element is concerned, follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We will discuss various instances of grammaticalized gestures and we will also briefly address the issue of the modality-specificity of this phenomenon.
4-Phenylphenoxazinones were isolated after biomimetic oxidation, using diphenoloxidases of insect cuticle or mushroom tyrosinase, or after autoxidation of N-acetyldopamine in the presence of β-alanine, β-alanine methyl ester or N-acetyl-L-lysine. They are presumably formed by addition of 2-aminoalkyl-5-alkylphenols to the o-quinone of a biphenyltetrol which, in turn, arises from oxidative coupling. The structures present the first examples of the assembly of reasonably stable intermediates in the rather complex process of chemical modification of aliphatic amino acid residues by o-quinones.
Comprior
(2021)
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied on gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e. uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated amongst each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking, especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
Cellulose and chitin are the most abundant polymeric organic carbon sources globally. Thus, microbes degrading these polymers significantly influence global carbon cycling and greenhouse gas production. Fungi are recognized as important for cellulose decomposition in terrestrial environments, but are far less studied in marine environments, where bacterial organic matter degradation pathways tend to receive more attention. In this study, we investigated the potential of fungi to degrade kelp detritus, which is a major source of cellulose in marine systems. Given that kelp detritus can be transported considerable distances in the marine environment, we were specifically interested in the capability of endophytic fungi, which are transported with detritus, to ultimately contribute to kelp detritus degradation. We isolated 10 species and two strains of endophytic fungi from the kelp Ecklonia radiata. We then used a dye decolorization assay to assess their ability to degrade organic polymers (lignin, cellulose, and hemicellulose) under both oxic and anoxic conditions and compared their degradation ability with common terrestrial fungi. Under oxic conditions, there was evidence that Ascomycota isolates produced cellulose-degrading extracellular enzymes (associated with manganese peroxidase and sulfur-containing lignin peroxidase), while Mucoromycota isolates appeared to produce both lignin- and cellulose-degrading extracellular enzymes, and all Basidiomycota isolates produced lignin-degrading enzymes (associated with laccase and lignin peroxidase). Under anoxic conditions, only three kelp endophytes degraded cellulose. We concluded that kelp fungal endophytes can contribute to cellulose degradation in both oxic and anoxic environments. Thus, endophytic kelp fungi may play a significant role in marine carbon cycling via polymeric organic matter degradation.
This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all pre-library-preparation miRNA quality controls were successful, and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples and that all samples were free of adapter dimers after BluePippin size selection and reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal range and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. Four differentially expressed miRNAs remained after filtering and were subjected to miRDB for target prediction. Three of these four miRNAs were downregulated (hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136), while one was upregulated (hsa-miR-550a-3p). The miRNA target prediction showed that chronic pain in polyneuropathy might be the result of a combination of miRNA-mediated dysregulations and imbalances of blood flow/pressure and neural activity. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 during PKA phosphorylation of ARMS, was characterized. To this end, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the interaction rate of TRPV1/ARMS. While phosphorylation of ARMS at T903 does not increase the interaction rate with TRPV1, ARMS S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the interaction rate of TRPV1/ARMS, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, the western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical therapeutic analgesic alternative to stop ARMS-mediated TRPV1 sensitization.
Environmental pollution by microplastics has become a severe problem in terrestrial and aquatic ecosystems and, according to current prognoses, these problems will further increase in the future. Therefore, assessing and quantifying the risk for the biota is crucial. Standardized short-term toxicological procedures as well as methods quantifying potential toxic effects over the whole life span of an animal are required. We studied the effect of the microplastic polystyrene on the survival and reproduction of a common freshwater invertebrate, the rotifer Brachionus calyciflorus, at different timescales. We used pristine polystyrene spheres of 1, 3, and 6 µm diameter and fed them to the animals together with food algae in different ratios ranging from 0 to 50% nonfood particles. As a particle control, we used silica to distinguish between a pure particle effect and a plastic effect. After 24 h, no toxic effect was found, neither with polystyrene nor with silica. After 96 h, a toxic effect was detectable for both particle types. The size of the particles played a negligible role. Studying the long-term effect by using life table experiments, we found reduced reproduction when the animals were fed 3 µm spheres together with similar-sized food algae. We conclude that the fitness reduction is mainly driven by the dilution of food by the nonfood particles rather than by a direct toxic effect.
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We will provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has left its infancy and that its specialization is advancing, with a clear focus on patient usability.
There are two common approaches to implement a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages, but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem; it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing object models and execution models of dynamic object-oriented languages to the Maxine VM, and the application of this approach to Squeak/Smalltalk. The representation of objects in, and the execution of, dynamic object-oriented languages pose certain challenges to the Maxine VM, which lacks the variation points necessary to enable an effortless and straightforward implementation of dynamic object-oriented languages' execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to unveil such missing variation points.
Here, we demonstrate the utility of native membrane derived vesicles (nMVs) as tools for expeditious electrophysiological analysis of membrane proteins. We used a cell-free (CF) and a cell-based (CB) approach for preparing protein-enriched nMVs. We utilized the Chinese Hamster Ovary (CHO) lysate-based cell-free protein synthesis (CFPS) system to enrich ER-derived microsomes in the lysate with the primary human cardiac voltage-gated sodium channel 1.5 (hNaV1.5; SCN5A) in 3 h. Subsequently, CB-nMVs were isolated from fractions of nitrogen-cavitated CHO cells overexpressing the hNaV1.5. In an integrative approach, nMVs were micro-transplanted into Xenopus laevis oocytes. CB-nMVs expressed native lidocaine-sensitive hNaV1.5 currents within 24 h; CF-nMVs did not elicit any response. Both the CB- and CF-nMV preparations evoked single-channel activity on the planar lipid bilayer while retaining sensitivity to lidocaine application. Our findings suggest a high usability of the quick-synthesis CF-nMVs and maintenance-free CB-nMVs as ready-to-use tools for in-vitro analysis of electrogenic membrane proteins and large, voltage-gated ion channels.
In the work presented here, we discuss a series of results that are all, in one way or another, connected to the phenomenon of trapping in black hole spacetimes.
First, we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we go into a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most well-known fundamental properties of null geodesics can be represented in one plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space.
We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild.
We discuss how this is relevant to the black hole stability problem.
In a further development of these observations we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, the observer's velocity relative to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case.
We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime.
In the last chapter we then prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation for the case of real frequencies. The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity and purely ingoing at the horizon, must vanish. This has the consequence that, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.
In this work, an extension of the CSSR algorithm using Maximum Entropy Models is introduced. Preliminary experiments to perform Named Entity Recognition with this new system are presented.
Dynamical simulation of the “velocity-porosity” reduction in observed strength of stellar wind lines
(2007)
I use dynamical simulations of the line-driven instability to examine the potential role of the resulting flow structure in reducing the observed strength of wind absorption lines. Instead of the porosity length formalism used to model effects on continuum absorption, I suggest reductions in line strength can be better characterized in terms of a velocity clumping factor that is insensitive to spatial scales. Examples of dynamic spectra computed directly from instability simulations do exhibit a net reduction in absorption, but only at a modest 10-20% level that is well short of the ca. factor 10 required by recent analyses of PV lines.
Higher education institutions in Guinea face many challenges, including reporting responsibilities, globalisation, and massification. Institutional evaluations of higher education and research institutions in 2013 could not initiate the implementation of change processes within the institutions. Recently, however, various initiatives have been started to change this situation, with the purpose of sensitising and raising awareness and capabilities for quality assurance structures in Guinean HEIs. So far, the emphasis has been put on quality enhancement in higher education, especially on teaching evaluation and curriculum development, as well as on establishing quality assurance structures. This article gives an overview of the state of play, takes stock of the activities that have been initiated to set up quality assurance mechanisms in higher education and research institutions, and presents perspectives for the further development of the quality approach in Guinea. The project ‘Quality Assurance Multiplication 2017-2018’ serves as an example to describe approaches and activities in setting up stable quality assurance structures and to strengthen and raise awareness for a ‘quality culture’.
I perform and analyse the first ever calculations of rotating stellar iron core collapse in {3+1} general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce and the early postbounce phase of core collapse supernovae. I supplement my {3+1} GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
We collect a network dataset of tenured economics faculty in Austria, Germany and Switzerland. We rank the 100 institutions included with a minimum violation ranking. This ranking is positively and significantly correlated with the Times Higher Education ranking of economics institutions. According to the network ranking, individuals on average go down about 23 ranks from their doctoral institution to their employing institution. While the share of females in our dataset is only 15%, we do not observe a significant gender hiring gap (a difference in rank changes between male and female faculty). We conduct a robustness check with the Handelsblatt and the Times Higher Education ranking. According to these rankings, individuals on average go down only about two ranks. We do not observe a significant gender hiring gap using these two rankings (although the dataset underlying this analysis is small and these estimates are likely to be noisy). Finally, we discuss the limitations of the network ranking in our context.
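As a toy illustration of the quantities described above (violations of the hierarchy implied by hires, and the average number of ranks moved from doctoral to employing institution), the following sketch uses made-up institutions and placements; it is not the paper's dataset or algorithm, and a minimum violation ranking would be obtained by searching over orderings for the one minimizing the violation count:

```python
# Toy example of counting ranking violations and mean rank change in a hiring network.
# Institutions, placements, and the ranking are made up for illustration only.
placements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("B", "A")]  # (PhD, employer)
ranking = {"A": 1, "B": 2, "C": 3}   # rank 1 = best

def count_violations(ranking, placements):
    """A violation: the employing institution is ranked above the doctoral institution."""
    return sum(ranking[emp] < ranking[phd] for phd, emp in placements)

def mean_rank_change(ranking, placements):
    """Average number of ranks moved down from doctoral to employing institution."""
    return sum(ranking[emp] - ranking[phd] for phd, emp in placements) / len(placements)

print(count_violations(ranking, placements), mean_rank_change(ranking, placements))
```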
Acquiring Syntactic Variability: The Production of Wh-Questions in Children and Adults Speaking Akan
(2020)
This paper investigates the predictions of the Derivational Complexity Hypothesis by studying the acquisition of wh-questions in 4- and 5-year-old Akan-speaking children in an experimental approach using an elicited production and an elicited imitation task. Akan has two types of wh-question structures (wh-in-situ and wh-ex-situ questions), which allows an investigation of children’s acquisition of these two question structures and their preferences for one or the other. Our results show that adults prefer to use wh-ex-situ questions over wh-in-situ questions. The results from the children show that both age groups have the two question structures in their linguistic repertoire. However, they differ in their preferences in usage in the elicited production task: while the 5-year-olds preferred the wh-in-situ structure over the wh-ex-situ structure, the 4-year-olds showed a selective preference for the wh-in-situ structure in who-questions. These findings suggest a developmental change in wh-question preferences in Akan-learning children between 4 and 5 years of age with a so far unobserved U-shaped developmental pattern. In the elicited imitation task, all groups showed a strong tendency to maintain the structure of in-situ and ex-situ questions when repeating grammatical questions. When repairing ungrammatical ex-situ questions, children rarely changed the structure to grammatical in-situ questions but instead inserted the missing morphemes while keeping the ex-situ structure. Together, our findings provide only partial support for the Derivational Complexity Hypothesis.
The development of phonetic codes in memory of 141 pairs of normal and disabled readers from 7.8 to 16.8 years of age was tested with a task adapted from L. S. Mark, D. Shankweiler, I. Y. Liberman, and C. A. Fowler (Memory & Cognition, 1977, 5, 623–629) that measured false-positive errors in recognition memory for foil words which rhymed with words in the memory list versus foil words that did not rhyme. Our younger subjects replicated Mark et al., showing a larger difference between rhyming and nonrhyming false-positive errors for the normal readers. The older disabled readers' phonetic effect was comparable to that of the younger normal readers, suggesting a developmental lag in their use of phonetic coding in memory. Surprisingly, the normal readers' phonetic effect declined with age in the recognition task, but they maintained a significant advantage across age in the auditory WISC-R digit span recall test and in a test of phonological nonword decoding. The normal readers' decline with age in rhyming confusion may be due to an increase in the precision of their phonetic codes.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung: workshop, 9-10 February 2006
Sulfur is an important element that is incorporated into many biomolecules in humans. The incorporation and transfer of sulfur into biomolecules is, however, facilitated by a series of different sulfurtransferases. Among these sulfurtransferases is the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been suggested to play a role in a wide range of physiological processes in the cell, including, but not limited to, involvement in molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation and the generation of H2S as a signaling molecule both in mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we investigate the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified human embryonic kidney cells. We show that TUM1 is involved in the sulfur transfer for molybdenum cofactor synthesis and tRNA thiomodification, by spectrophotometric measurement of the activity of sulfite oxidase and liquid chromatography quantification of the level of sulfur-modified tRNA. Further, we show that TUM1 has a role in hydrogen sulfide production and cellular bioenergetics.
Chloroplasts as bioreactors: high-yield production of active bacteriolytic protein antibiotics
(2008)
Plants, more precisely their chloroplasts with their bacteria-like expression machinery inherited from their cyanobacterial ancestors, can potentially offer a cheap expression system for proteinaceous pharmaceuticals. This system would be easily scalable and provides appropriate safety due to the maternal inheritance of chloroplasts. In this work, it was shown that three phage lytic enzymes (Pal, Cpl-1 and PlyGBS) could be successfully expressed at very high levels and with high stability in tobacco chloroplasts. PlyGBS expression reached an amount of foreign protein accumulation (> 70% TSP) that had never been obtained before. Although the high expression level of PlyGBS caused a pale green phenotype with retarded growth, presumably due to exhaustion of the plastid protein synthesis capacity, development and seed production were not impaired under greenhouse conditions. Since Pal and Cpl-1 showed toxic effects when expressed in E. coli, a special plastid transformation vector (pTox) was constructed to allow DNA amplification in bacteria. The pTox vector allows a recombinase-mediated deletion of an E. coli transcription block in the chloroplast, which increased foreign protein accumulation to up to 40% of TSP for Pal and 20% of TSP for Cpl-1. High dose-dependent bactericidal efficiency was shown for all three plant-derived lytic enzymes against their pathogenic target bacteria S. pyogenes and S. pneumoniae. Confirmation of specificity was obtained for the endotoxic proteins Pal and Cpl-1 by application to E. coli cultures. These results establish tobacco chloroplasts as a new cost-efficient and convenient production platform for phage lytic enzymes and address the greatest obstacle to clinical application. The present study is the first report of lysin production in a non-bacterial system. The properties of the chloroplast-produced lysins described in this work, their stability, high accumulation rate and biological activity, make them highly attractive candidates for future antibiotics.
We present the tool Kato which is, to the best of our knowledge, the first tool for plagiarism detection that is directly tailored for answer-set programming (ASP). Kato aims at finding similarities between (segments of) logic programs to help detect cases of plagiarism. Currently, the tool is realised for DLV programs, but it is designed to handle various logic-programming syntax versions. We review basic features and the underlying methodology of the tool.
Background: Network models are useful tools for researchers to simplify and understand investigated systems. Yet, the assessment of methods for network construction is often uncertain. Random resampling simulations can aid to assess methods, provided synthetic data exists for reliable network construction.
Objectives: We implemented a new Monte Carlo algorithm to create simulated data for network reconstruction, tested the influence of adjusted parameters and used simulations to select a method for network model estimation based on real-world data. We hypothesized that reconstructions based on Monte Carlo data would be scored at least as well as those based on a benchmark.
Methods: Simulated data was generated in R using the Monte Carlo algorithm of the mcgraph package. Benchmark data was created with the huge package. Networks were reconstructed using six estimator functions and scored by four classification metrics. For testing differences in mean scores, Welch's t-test was used. Network model estimation based on real-world data was done by stepwise selection.
Samples: Simulated data was generated based on 640 input graphs of various types and sizes. The real-world dataset consisted of 67 medieval skeletons of females and males from the region of Refshale (Lolland) and Nordby (Jutland) in Denmark.
Results: The t-tests and confidence intervals (CI95%) show that evaluation scores for network reconstructions based on the mcgraph package were at least as good as those for the benchmark huge; the results even indicate slightly better scores on average for the mcgraph package.
Conclusion: The results confirmed our hypothesis and suggest that Monte Carlo data can keep up with the benchmark in the applied test framework. The algorithm offers the feature to use (weighted) undirected and directed graphs and might be useful for assessing methods for network construction.
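The simulate-reconstruct-score-compare workflow described above can be illustrated with a minimal Python analogue; the study itself used the R packages mcgraph and huge, so the estimators chosen here (graphical lasso vs. a Ledoit-Wolf baseline), the F1 edge-recovery score, and all parameter values are illustrative assumptions rather than the paper's setup:

```python
# Minimal Python analogue of the workflow: simulate data from known graphs, reconstruct
# networks with two estimators, score edge recovery, and compare mean scores with
# Welch's t-test. All parameters are illustrative assumptions, not the study's values.
import numpy as np
from sklearn.covariance import GraphicalLassoCV, LedoitWolf
from sklearn.metrics import f1_score
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def random_precision(d=15, p_edge=0.15):
    """Sparse, positive-definite precision matrix encoding a random undirected graph."""
    A = np.triu(rng.random((d, d)) < p_edge, 1)
    W = (A | A.T) * 0.3
    Theta = W + np.eye(d) * (np.abs(W).sum(axis=1).max() + 0.5)  # diagonal dominance
    return Theta, (W != 0).astype(int)

def score_edges(prec_est, truth, thresh=1e-3):
    """F1 score of recovered edges (off-diagonal precision entries above a threshold)."""
    iu = np.triu_indices_from(truth, 1)
    return f1_score(truth[iu], (np.abs(prec_est[iu]) > thresh).astype(int))

scores_glasso, scores_lw = [], []
for _ in range(20):                               # 20 simulated graphs (toy, not 640)
    Theta, truth = random_precision()
    X = rng.multivariate_normal(np.zeros(len(Theta)), np.linalg.inv(Theta), size=200)
    scores_glasso.append(score_edges(GraphicalLassoCV().fit(X).precision_, truth))
    scores_lw.append(score_edges(LedoitWolf().fit(X).precision_, truth))

t, p = ttest_ind(scores_glasso, scores_lw, equal_var=False)   # Welch's t-test
print(f"glasso F1={np.mean(scores_glasso):.2f}, LW F1={np.mean(scores_lw):.2f}, p={p:.3g}")
```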
Injuries in professional soccer are a significant concern for teams, and they are caused, amongst other factors, by high training load. This cohort study describes the relationship between workload parameters and the occurrence of non-contact injuries during weeks with high and low workload in professional soccer players throughout the season. Twenty-one professional soccer players aged 28.3 ± 3.9 years who competed in the Iranian Persian Gulf Pro League participated in this 48-week study. The external load was monitored using a global positioning system (GPS, GPSPORTS Systems Pty Ltd), and the type of injury was documented daily by the team's medical staff. Odds ratios (OR) and relative risks (RR) were calculated for non-contact injuries for high- and low-load weeks according to acute workload (AW), chronic workload (CW), acute to chronic workload ratio (ACWR), and AW variation (Δ-AW) values. Using a Poisson distribution, the intervals between previous and new injuries were estimated. Overall, 12 non-contact injuries occurred during high-load and 9 during low-load weeks. For the variables ACWR and Δ-AW, there was a significantly increased risk of sustaining non-contact injuries (p < 0.05) during high-load weeks (ACWR, OR: 4.67; Δ-AW, OR: 4.07). Finally, the expected time between injuries was significantly shorter in high-load weeks for ACWR [1.25 vs. 3.33, rate ratio time (RRT)] and Δ-AW (1.33 vs. 3.45, RRT), respectively, compared to low-load weeks. The risk of sustaining injuries was significantly larger during high-workload weeks for ACWR and Δ-AW compared with low-workload weeks. The observed high ORs in high-load weeks indicate that there is a significant relationship between workload and the occurrence of non-contact injuries. The predicted time to new injuries is shorter in high-load weeks compared to low-load weeks; therefore, the frequency of injuries is higher during high-load weeks for ACWR and Δ-AW. ACWR and Δ-AW appear to be good indicators for estimating injury risk and the time interval between injuries.
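For readers unfamiliar with the workload variables, the sketch below shows how ACWR and Δ-AW are commonly derived from daily GPS-based load, and how an odds ratio is computed from a 2x2 exposure-injury table; the 7-day/28-day windows and all numbers are assumptions for illustration, not values from this study:

```python
# Hypothetical illustration of ACWR, Δ-AW, and the odds ratio for injury in high- vs.
# low-load weeks. The rolling windows and example counts are assumptions, not study data.
import pandas as pd

def workload_metrics(daily_load: pd.Series) -> pd.DataFrame:
    """Acute load (7-day sum), chronic load (mean weekly load over 28 days), ACWR, Δ-AW."""
    acute = daily_load.rolling(7).sum()
    chronic = daily_load.rolling(28).sum() / 4
    return pd.DataFrame({
        "AW": acute,
        "CW": chronic,
        "ACWR": acute / chronic,      # acute to chronic workload ratio
        "delta_AW": acute.diff(7),    # week-to-week change in acute load
    })

def odds_ratio(inj_high, noinj_high, inj_low, noinj_low):
    """OR for injury in high-load vs. low-load weeks from a 2x2 table."""
    return (inj_high / noinj_high) / (inj_low / noinj_low)

# Example with made-up counts (injured/uninjured player-weeks by load exposure):
print(odds_ratio(8, 40, 3, 60))   # -> 4.0
```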
Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for a broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and the hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock at a depth of 410 m at the Äspö Hard Rock Laboratory (Sweden).
I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1).
An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded at the horizontal components of two broad-band seismometers a few tens of meters apart from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding.
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the Äspö mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as depicted by smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).
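The b-values referred to above come from the Gutenberg-Richter relation log10 N(≥M) = a − b·M; a common way to estimate b from an event catalog is Aki's maximum-likelihood estimator, sketched below with synthetic magnitudes (the catalog, completeness magnitude, and bin width are placeholders, not the thesis data):

```python
# Aki (1965) maximum-likelihood b-value estimate for an AE/earthquake catalog.
# Magnitudes, completeness magnitude Mc, and bin width dm are illustrative placeholders.
import numpy as np

def b_value(magnitudes, mc, dm=0.1):
    """b = log10(e) / (mean(M) - (Mc - dm/2)) for events with M >= Mc."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(1)
mags = rng.exponential(scale=0.4, size=5000) - 4.5   # synthetic AE magnitudes
print(f"b ≈ {b_value(mags, mc=-4.0):.2f}")
```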
The optical spectrum of Eta Carinae (η Car) is prominent in H I, He I and Fe II wind lines, all of which vary both in absorption and emission with phase. The phase dependence is a consequence of the interaction between the two objects in the η Car binary (η Car A & B). The binary system is enshrouded by ejecta from previous mass ejection events and, consequently, η Car B is not directly observable. We have traced the He I lines over η Car’s spectroscopic period, using HST/STIS data obtained with medium spectral, but high angular, resolving power, and created a radial velocity curve for the system. The He I lines are formed in the core of the system and appear to be a composite of multiple features formed in spatially separated regions. The sources of their irregular line profiles are still not fully understood, but can be attributed to emission/absorption near the wind-wind interface and/or a direct consequence of η Car A’s massive, clumpy wind. This paper discusses the spectral variability, the narrow emission structure of the He I lines and how the clumpiness of the winds may impede the construction of a reliable radial velocity curve, which is necessary for characterization of η Car B in particular.
Improvement of a fluorescence immunoassay with a compact diode-pumped solid state laser at 315 nm
(2006)
We demonstrate the improvement of fluorescence immunoassay (FIA) diagnostics by deploying a newly developed compact diode-pumped solid state (DPSS) laser with emission at 315 nm. The laser is based on the quasi-three-level transition in Nd:YAG at 946 nm. The pulsed operation is either realized by an active Q-switch using an electro-optical device or by introduction of a Cr4+:YAG saturable absorber as passive Q-switch element. By extra-cavity second harmonic generation in different nonlinear crystal media we obtained blue light at 473 nm. Subsequent mixing of the fundamental and the second harmonic in a β-barium-borate crystal provided pulsed emission at 315 nm with up to 20 μJ maximum pulse energy and 17 ns pulse duration. Substitution of a nitrogen laser in an FIA diagnostics system by the DPSS laser resulted in a considerable improvement of the detection limit. Despite significantly lower pulse energies (7 μJ for the DPSS laser versus 150 μJ for the nitrogen laser), in preliminary investigations the limit of detection was reduced by a factor of three for a typical FIA.
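The 315 nm output follows from sum-frequency mixing of the fundamental and its second harmonic; the photon-energy bookkeeping (a standard nonlinear-optics relation, not spelled out in the abstract) checks out:

```latex
% Sum-frequency generation: the output wavenumber is the sum of the input wavenumbers.
\[
\frac{1}{\lambda_{\mathrm{SFG}}} \;=\; \frac{1}{\lambda_{1}} + \frac{1}{\lambda_{2}}
\;=\; \frac{1}{946\,\mathrm{nm}} + \frac{1}{473\,\mathrm{nm}}
\;=\; \frac{3}{946\,\mathrm{nm}}
\quad\Longrightarrow\quad
\lambda_{\mathrm{SFG}} \approx 315.3\,\mathrm{nm}.
\]
```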
Background
Wearables, as small portable computer systems worn on the body, can track user fitness and health data, which can be used to customize health insurance contributions individually. In particular, insured individuals with a healthy lifestyle can receive a reduction of the contributions they pay. However, this potential is hardly used in practice.
Objective
This study aims to identify which barrier factors impede the usage of wearables for assessing individual risk scores for health insurances, despite its technological feasibility, and to rank these barriers according to their relevance.
Methods
To reach these goals, we conducted a ranking-type Delphi study with the following three stages. First, we collected possible barrier factors from a panel of 16 experts and consolidated them into a list of 11 barrier categories. Second, the panel was asked to rank them regarding their relevance. Third, to enhance the panel consensus, the ranking was revealed to the experts, who were then asked to re-rank the barriers.
Results
The results suggest that regulation is the most important barrier. Other relevant barriers are false or inaccurate measurements and application errors caused by the users. Additionally, insurers could lack the required technological competence to use the wearable data appropriately.
Conclusion
A wider use of wearables and health apps could be achieved through regulatory modifications, especially regarding privacy issues. Even after assuring stricter regulations, users’ privacy concerns could partly remain, if the data exchange between wearables manufacturers, health app providers, and health insurers does not become more transparent.
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight on two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system’s erosional efficiency, which represents many factors like lithology or climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective. In this chapter I study linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
Japan launched the new Course of Study in April 2012, which has been implemented in elementary schools and junior high schools; it will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors point out the role that experts in informatics and information studies education should play in a general education centered around information studies, which is meant to help the nation's people lead active, capable, and flexible lives to a satisfying end.
Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which record information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, as any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. The presented results lead to a quantitative explanation of the local-scale (1–500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistance in stables. Transmission through direct contact with animals and through contamination of food has already been proven. The animals' excrement, combined with a binding material, opens a further potential path of spread into the environment if it is used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, the focus of this work is the atmospheric dispersal via the dust fraction.
Field measurements on arable land in Brandenburg, Germany, and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric, dust-associated spread of antibiotic-resistant bacteria from agricultural soils fertilized with poultry manure. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of spreading by the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha⁻¹ and 8.37 kg ha⁻¹. For comparison, the subsequent land preparation contributed 0.35–1.15 kg ha⁻¹ of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions owing to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although PM10 emissions from incorporation were larger than those from application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were observed just above the threshold wind speed of 7 m s⁻¹. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha⁻¹ were identified when 6 t ha⁻¹ of poultry manure was applied. Microbial investigations showed that manure bacteria were detached more easily from the soil surface during wind erosion because of their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation, or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36–72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e., MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Based on the results of this work, the risk of infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
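The jamming behaviour described above can be illustrated with a minimal exclusion-process simulation in a closed tube. The sketch below is an illustrative toy model with assumed rates (injection probability alpha, detachment probability omega_d), not the Monte Carlo model or the tug-of-war model developed in the thesis.

```python
import numpy as np

def tube_motor_traffic(L=200, alpha=0.3, omega_d=0.002, steps=400000, seed=1):
    """Minimal exclusion-process sketch of motor traffic jams in a closed tube.

    Assumption-laden illustration: plus-end-directed particles enter at the
    cell-body end (site 0) with probability alpha, hop one site towards the
    tip if it is empty, detach anywhere with small probability omega_d, and
    cannot step past the closed tip (site L-1). The time-averaged density
    shows a traffic jam growing from the tip.
    """
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=bool)
    density = np.zeros(L)
    for _ in range(steps):
        # random-sequential update: one attempted move per step
        i = rng.integers(0, L + 1)
        if i == L:                                   # injection attempt at the base
            if not lattice[0] and rng.random() < alpha:
                lattice[0] = True
        elif lattice[i]:
            if rng.random() < omega_d:               # detachment into the cytoplasm
                lattice[i] = False
            elif i < L - 1 and not lattice[i + 1]:   # forward hop towards the tip
                lattice[i] = False
                lattice[i + 1] = True
        density += lattice
    return density / steps

profile = tube_motor_traffic()
print(profile[:3], profile[-3:])   # low density near the base, high near the tip
```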
Expanding public or publicly subsidized childcare has been a top social policy priority in many industrialized countries. It is supposed to increase fertility, promote children’s development and enhance mothers’ labor market attachment. In this paper, we analyze the causal effect of one of the largest expansions of subsidized childcare for children up to three years among industrialized countries on the employment of mothers in Germany. Identification is based on spatial and temporal variation in the expansion of publicly subsidized childcare triggered by two comprehensive childcare policy reforms. The empirical analysis is based on the German Microcensus, which is matched to county-level data on childcare availability. Based on our preferred specification, which includes time and county fixed effects, we find that an increase in childcare slots by one percentage point increases mothers’ labor market participation rate by 0.2 percentage points. The overall increase in employment is explained by the rise in part-time employment with relatively long hours (20-35 hours per week). We do not find a change in full-time employment or lower part-time employment that is causally related to the childcare expansion. The effect is almost entirely driven by mothers with medium-level qualifications. Mothers with low education levels do not profit from this reform, calling for a stronger policy focus on particularly disadvantaged groups in coming years.
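For readers unfamiliar with the identification strategy, the following sketch shows a generic two-way fixed-effects regression of the kind described (time and county fixed effects, standard errors clustered at the county level). The synthetic panel and all variable names are illustrative assumptions, not the German Microcensus data or the paper's exact specification.

```python
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf

# Hypothetical county-year panel (names and values are illustrative only).
rng = np.random.default_rng(42)
counties, years = 50, 8
df = pd.DataFrame(
    [(c, y) for c in range(counties) for y in range(years)],
    columns=["county", "year"],
)
df["childcare_rate"] = rng.uniform(5, 40, len(df))   # slots per 100 children
df["mother_lfp"] = (
    30 + 0.2 * df["childcare_rate"]                  # assumed true effect of 0.2
    + rng.normal(0, 2, len(df))
)

# Two-way fixed effects: county and year dummies absorb county-specific levels
# and common shocks; standard errors are clustered by county.
model = smf.ols("mother_lfp ~ childcare_rate + C(county) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(result.params["childcare_rate"])               # estimated childcare coefficient
```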
The widespread usage of products containing volatile organic compounds (VOCs) has led to a general human exposure to these chemicals in workplaces and homes, which is suspected to contribute to the growing incidence of environmental diseases. Since the causal molecular mechanisms for the development of these disorders are not completely understood, the overall objective of this thesis was to investigate VOC-mediated molecular effects on human lung cells in vitro at VOC concentrations comparable to exposure scenarios below current occupational limits. Although differential expression of single proteins in response to VOCs has been reported, effects on complex protein networks (the proteome) have not been investigated. However, this information is indispensable when trying to ascertain a mechanism of VOC action on the cellular level and to establish preventive strategies. For this study, the alveolar epithelial cell line A549 has been used. This cell line, cultured in a two-phase (air/liquid) model, allows the most direct exposure and has been successfully applied for the analysis of inflammatory effects in response to VOCs. Mass spectrometric identification of 266 protein spots provided the first proteomic map of the A549 cell line at this scale, which may foster future work with this frequently used cellular model. The distribution of three typical air contaminants, monochlorobenzene (CB), styrene and 1,2-dichlorobenzene (1,2-DCB), between the gas and liquid phases of the exposure model has been analyzed by gas chromatography. The obtained VOC partitioning was in agreement with available literature data. Subsequently, the adapted in vitro system has been successfully employed to characterize the effects of the aromatic compound styrene on the proteome of A549 cells (Chapter 4). Initially, the cell toxicity has been assessed in order to ensure that most of the concentrations used in the following proteomic approach were not cytotoxic. Significant changes in abundance and phosphorylation in the total soluble protein fraction of A549 cells have been detected following styrene exposure. All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. Validation experiments on the protein and transcript level confirmed the results of the 2-DE experiments. From the results, two main cellular pathways induced by styrene have been identified: the cellular oxidative stress response combined with moderate pro-apoptotic signaling. Measurement of cellular reactive oxygen species (ROS) as well as the styrene-mediated induction of oxidative stress marker proteins confirmed the hypothesis of oxidative stress as the main molecular response mechanism. Finally, adducts of cellular proteins with the reactive styrene metabolite styrene 7,8-oxide (SO) have been identified. Especially the SO adducts observed at both reactive centers of thioredoxin reductase 1, which is a key element in the control of the cellular redox state, may be involved in styrene-induced ROS formation and apoptosis. A similar proteomic approach has been carried out with the halobenzenes CB and 1,2-DCB (Chapter 5). In accordance with previous findings, cell toxicity assessment showed enhanced toxicity compared to that caused by styrene. Significant changes in abundance and phosphorylation of total soluble proteins of A549 cells have been detected following exposure to subtoxic concentrations of CB and 1,2-DCB.
All proteins have been identified using mass spectrometry and the main cellular functions have been assigned. As for the styrene experiment, the results indicated two main pathways to be affected in the presence of chlorinated benzenes: cell death signaling and the oxidative stress response. The strong induction of pro-apoptotic signaling has been confirmed for both treatments by detection of the cleavage of caspase 3. Likewise, the induction of redox-sensitive protein species could be correlated with an increased cellular level of ROS observed following CB treatment. Finally, common mechanisms in the cellular response to aromatic VOCs have been investigated (Chapter 6). A similar proportion (4.6–6.9%) of all quantified protein spots showed differential expression (p<0.05) following cell exposure to styrene, CB or 1,2-DCB. However, no more than three protein spots showed significant regulation in the same direction for all three volatile compounds: voltage-dependent anion-selective channel protein 2, peroxiredoxin 1 and elongation factor 2. Notably, all of these proteins are important molecular targets in stress- and cell death-related signaling pathways.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung: workshop held 9–10 February 2006
Whilst providing a framework for learning and scientific emancipation, proposal writing training is confronted with various organisational and didactic challenges that influence the achievement of the set training objectives. Based on observations made during the proposal writing workshops organised in Kinshasa, Democratic Republic of Congo, as part of the NMT Programme, the article raises two main questions: (a) How can these challenges be overcome and successfully addressed in the training? (b) What is the level of learning outcomes of the participants at the end of the training? The article shows that the success of the training lies in the relevance of the employed training approaches. The use of a participatory approach encouraged constructive exchanges between participants, trainers, and experts, and enabled all participants to finalise coherent projects for application to national and international funding.
Background
The association between bivariate variables may not necessarily be homogeneous throughout the whole range of the variables. We present a new technique to describe inhomogeneity in the association of bivariate variables.
Methods
We consider the correlation of two normally distributed random variables. The 45° diagonal through the origin of coordinates represents the line on which all points would lie if the two variables completely agreed. If the two variables do not completely agree, the points will scatter on both sides of the diagonal and form a cloud. In case of a high association between the variables, the band width of this cloud will be narrow; in case of a low association, the band width will be wide. The band width directly relates to the magnitude of the correlation coefficient. We then determine the Euclidean distances between the diagonal and each point of the bivariate correlation, and rotate the coordinate system clockwise by 45°. The standard deviation of all Euclidean distances, named the “global standard deviation”, reflects the band width of all points along the former diagonal. Calculating moving averages of the standard deviation along the former diagonal results in “locally structured standard deviations”, which reflect patterns of “locally structured correlations” (LSC). LSC highlight inhomogeneity of bivariate correlations. We exemplify this technique by analyzing the association between body mass index (BMI) and hip circumference (HC) in 6313 healthy East German adults aged 18 to 70 years.
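A minimal sketch of the LSC computation described above, assuming both variables have already been z-standardized so that the 45° diagonal is the line of perfect agreement; the window length and variable names are illustrative choices, not those of the original analysis.

```python
import numpy as np

def locally_structured_sd(x, y, window=301):
    """Sketch of the locally structured correlation (LSC) idea.

    x, y: z-standardized variables. Returns positions along the former
    diagonal, the moving standard deviation of the perpendicular distances
    to the diagonal, and the global standard deviation.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    # Rotate the coordinate system clockwise by 45 degrees:
    # u = position along the former diagonal, v = signed distance from it.
    u = (x + y) / np.sqrt(2.0)
    v = (y - x) / np.sqrt(2.0)
    order = np.argsort(u)
    u, v = u[order], v[order]
    global_sd = v.std(ddof=1)                  # "global standard deviation"
    half = window // 2
    local_sd = np.array([
        v[max(0, i - half):i + half + 1].std(ddof=1)   # moving window SD
        for i in range(len(v))
    ])
    return u, local_sd, global_sd

# Example with synthetic correlated data
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = 0.8 * x + 0.6 * rng.standard_normal(5000)
u, local_sd, global_sd = locally_structured_sd(x, y)
print(global_sd, local_sd.min(), local_sd.max())
```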
Results
The correlation between BMI and HC in healthy adults is not homogeneous. LSC is able to identify regions where the predictive power of the bivariate correlation between BMI and HC increases or decreases, and, in our example, highlights that slim people show a stronger association between BMI and HC than obese people.
Conclusion
Locally structured correlations (LSC) identify regions of higher or lower than average correlation between two normally distributed variables.
Background: Clinicians often refer anthropometric measures of a child to so-called “growth standards” and “growth references”. Over 140 countries have meanwhile adopted the WHO growth standards.
Objectives: The present study was conducted to thoroughly examine the idea of growth standards as a common yardstick for all populations. Weight depends on height. We became interested in whether weight-for-height also depends on height. First, we studied the age-group effect on weight-for-height. Thereafter, we tested the applicability of weight-for-height references in short and in historic populations.
Sample and Methods: We analyzed body height, body weight, and weight-for-height of 3795 healthy boys and 3726 healthy girls aged 2 to 5 years measured in East Germany between 1986 and 1990.
We chose contemporary height and weight charts from Germany, the UK, and the WHO growth chart and compared these with three geographically commensurable growth charts from the end of the 19th century.
Conclusion: Weight-for-height depends on age and sex and, apart from the nutritional state, reflects body proportion and body build, particularly during infancy and early childhood. Populations with a relatively short average height are prone to high values of weight-for-height for arithmetic reasons, independent of the nutritional state.
The highly conserved protein complex containing the Target of Rapamycin (TOR) kinase is known to integrate intra- and extracellular stimuli controlling nutrient allocation and cellular growth. This thesis describes three studies aimed at understanding how the TOR signaling pathway influences carbon and nitrogen metabolism in Chlamydomonas reinhardtii. The first study presents a time-resolved analysis of the molecular and physiological features across the diurnal cycle. The inhibition of TOR leads to a 50% reduction in growth followed by nonlinear delays in cell cycle progression. The metabolomics analysis showed that the growth repression is mainly driven by differential carbon partitioning between anabolic and catabolic processes. Furthermore, the high accumulation of nitrogen-containing compounds indicated that the TOR kinase controls the carbon-to-nitrogen balance of the cell, which is responsible for biomass accumulation, growth and cell cycle progression. In the second study, the cause of the high accumulation of amino acids is explained. For this purpose, the effect of TOR inhibition on Chlamydomonas was examined under different growth regimes using stable 13C- and 15N-isotope labeling. The data clearly showed that an increased nitrogen uptake is induced within minutes after the inhibition of TOR. Interestingly, this increased N-influx is accompanied by increased activities of nitrogen-assimilating enzymes. Accordingly, it was concluded that TOR inhibition induces de novo amino acid synthesis in Chlamydomonas. The recognition of this novel process opened an array of questions regarding potential links between central metabolism and TOR signaling. Therefore, a detailed phosphoproteomics study was conducted to identify the potential substrates of the TOR pathway regulating central metabolism. Interestingly, some of the key enzymes involved in carbon metabolism as well as amino acid synthesis exhibited significant changes in phosphosite intensities immediately after TOR inhibition. Altogether, these studies a) provide detailed insights into the metabolic response of Chlamydomonas to TOR inhibition, b) identify a novel process causing rapid upshifts in amino acid levels upon TOR inhibition, and c) highlight potential targets of TOR signaling regulating changes in central metabolism. Further biochemical and molecular investigations could confirm these observations and advance the understanding of growth signaling in microalgae.
The study examined the potential future changes of drought characteristics in the Greater Lake Malawi Basin in Southeast Africa. This region strongly depends on water resources to generate electricity and food. Future projections (considering both moderate and high emission scenarios) of temperature and precipitation from an ensemble of 16 bias-corrected climate model combinations were blended with a scenario-neutral response surface approach to analyze changes in: (i) the meteorological conditions, (ii) the meteorological water balance, and (iii) selected drought characteristics such as drought intensity, drought months, and drought events, which were derived from the Standardized Precipitation Evapotranspiration Index (SPEI). Changes were analyzed for a near-term (2021–2050) and a far-term period (2071–2100) with reference to 1976–2005. The effect of bias correction (i.e., empirical quantile mapping) on the ability of the climate model ensemble to reproduce observed drought characteristics, as compared to raw climate projections, was also investigated. Results suggest that the bias correction improves the climate models in terms of reproducing temperature and precipitation statistics but not drought characteristics. Still, despite the differences in the internal structures and uncertainties that exist among the climate models, they all agree on an increase of meteorological droughts in the future in terms of higher drought intensity and longer events. Drought intensity is projected to increase between +25 and +50% during 2021–2050 and between +131 and +388% during 2071–2100. This translates into +3 to +5, and +7 to +8 more drought months per year during the two periods, respectively. With longer-lasting drought events, the number of drought events decreases. Projected droughts based on the high emission scenario are 1.7 times more severe than droughts based on the moderate scenario. This means that droughts in this region will likely become more severe in the coming decades. Despite the inherent high uncertainties of climate projections, the results provide a basis for planning and water management activities for climate change adaptation in Malawi. This is of particular relevance for water management issues relating to hydropower generation and food production, both for rain-fed and irrigated agriculture.
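As background, the SPEI standardizes the climatic water balance (precipitation minus potential evapotranspiration). The sketch below is a simplified, single-time-scale illustration using a log-logistic fit, assuming monthly inputs; it omits the multi-month aggregation and calibration details of the study, and the synthetic data are purely illustrative.

```python
import numpy as np
from scipy import stats

def spei_like_index(precip, pet):
    """Simplified SPEI-style index from monthly precipitation and PET (mm/month)."""
    d = np.asarray(precip) - np.asarray(pet)           # climatic water balance
    # Fit a log-logistic (Fisk) distribution to the water balance, as in the
    # original SPEI formulation (without multi-month pooling or calibration).
    c, loc, scale = stats.fisk.fit(d)
    prob = stats.fisk.cdf(d, c, loc=loc, scale=scale)  # non-exceedance probability
    prob = np.clip(prob, 1e-6, 1 - 1e-6)               # avoid infinities at the tails
    return stats.norm.ppf(prob)                        # map to a standard normal score

# Example with 30 years of synthetic monthly data
rng = np.random.default_rng(0)
p = rng.gamma(2.0, 40.0, size=360)     # precipitation, mm/month
e = rng.normal(90.0, 10.0, size=360)   # potential evapotranspiration, mm/month
index = spei_like_index(p, e)
print(index.mean(), index.std())       # roughly 0 and 1 by construction
```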
Hα observations of Rigel obtained on 184 nights during the past ten years with the 1-m telescope and échelle spectrograph of Ritter Observatory are surveyed. The line profiles were classified in terms of morphology. About 1/4 of them are of P Cygni type, about 15% inverse P Cygni, about 25% double-peaked, about 1/3 pure absorption, and a few are single emission lines. Transformation of the profile from one type to another typically takes a few days. Although the line stays in absorption for extended intervals, only one high-velocity absorption event of the intensity reported by Kaufer et al. (1996a) was observed, in late 2006. Late in this event, Hα absorption occurred farther to the red than the red wing of a plausible photospheric absorption component, an indication of infalling material. In general, as the absorption events come to an end, the emission typically returns with an inverse P Cygni profile. The Hα profile class shows no obvious correlation with the radial velocity of C II λ6578, a photospheric absorption line.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. 
Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, with a total study area of 5,800 km², found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a given region.
General Discussion
(2007)
In the old days (pre ∼1990) hot stellar winds were assumed to be smooth, which made life fairly easy and bothered no one. Then, after suspicious behaviour had been revealed, e.g. stochastic temporal variability in broadband polarimetry of single hot stars, it took the emerging CCD technology developed in the preceding decades (∼1970s–80s) to reveal that these winds were far from smooth. It was mainly high-S/N, time-dependent spectroscopy of strong optical recombination emission lines in WR, and also a few OB and other stars with strong hot winds, that indicated that all hot stellar winds are likely pervaded by thousands of multiscale (compressible supersonic turbulent?) structures, whose driver is probably some kind of radiative instability. Quantitative estimates of clumping-independent mass-loss rates came from various fronts, mainly dependent directly on density (e.g. electron-scattering wings of emission lines, UV spectroscopy of weak resonance lines, and binary-star properties including orbital-period changes, electron scattering, and X-ray fluxes from colliding winds) rather than the more common, easier-to-obtain but clumping-dependent density-squared diagnostics (e.g. free-free emission in the IR/radio and recombination lines, of which the favourite has always been Hα). Many big questions still remain, such as: What do the clumps really look like? Do clumping properties change as one recedes from the mother star? Is clumping universal? Does the relative clumping correction depend on $\dot{M}$ itself?
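As a reminder of why the two diagnostic families disagree (a standard relation in the field, not a result of this contribution): for a wind with clumping factor

\[
f_{\mathrm{cl}}\equiv\frac{\langle\rho^{2}\rangle}{\langle\rho\rangle^{2}}\ge 1,
\qquad
\dot{M}_{\rho^{2}}\simeq\sqrt{f_{\mathrm{cl}}}\;\dot{M}_{\mathrm{true}},
\]

density-squared diagnostics such as Hα or free-free emission overestimate the mass-loss rate of an unresolved clumped wind by roughly $\sqrt{f_{\mathrm{cl}}}$, whereas diagnostics depending linearly on density do not.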
Our dynamic Sun manifests its activity in different phenomena: from the 11-year cyclic sunspot pattern to the unpredictable and violent explosions in the case of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated up to high energies within the short time scales observed in the radio emission. Because the acceleration site is extremely small in spatial extent as well (compared to the solar radius), the electron acceleration is regarded as a local process. The aim of the dissertation is the search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the interaction of solar jets with the background coronal plasma (the jet–plasma interaction). A jet is formed when newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to explain these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with the growth of electrostatic fluctuations in time for a certain range of initial jet velocities. During this process, any test electron that happens to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than that required for the instability, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron–whistler–shock interaction). Coronal shocks are also able to accelerate electrons, as observed in the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferably at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, in which the electrons are accelerated by their interaction with such whistlers. The protons flowing in toward the shock are reflected there while nearly conserving their magnetic moment, so that they gain substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., when the angle between the shock normal and the upstream magnetic field is in the range 50–80 degrees.
The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are able to interact resonantly with them, and only a part of these electrons fulfill the electron–whistler wave resonance condition. Due to this resonant interaction with the whistlers, the electrons are accelerated in the electric and magnetic wave field within just a few whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them is reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers, the reflected electrons are now out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona, the kinetic energy of protons is transferred to electrons by the action of localized wave structures.
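For reference, the resonant electron–whistler interaction invoked here is governed by the standard Doppler-shifted cyclotron resonance condition (general form; the specific harmonic and parameter choices of the dissertation are not reproduced here):

\[
\omega-k_{\parallel}v_{\parallel}=\frac{n\,\Omega_{e}}{\gamma},\qquad n\in\mathbb{Z},
\]

where $\omega$ and $k_{\parallel}$ are the whistler frequency and field-aligned wavenumber, $v_{\parallel}$ the electron velocity along the magnetic field, $\Omega_{e}$ the electron gyrofrequency, and $\gamma$ the Lorentz factor.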
The end of culture?
(2000)
Parafoveal Load of Word N+1 Modulates Preprocessing Effectiveness of Word N+2 in Chinese Reading
(2010)
Preview benefits (PBs) from two words to the right of the fixated one (i.e., word N+2) and associated parafoveal-on-foveal effects are critical for proposals of distributed lexical processing during reading. This experiment examined parafoveal processing during reading of Chinese sentences, using a boundary manipulation of N+2-word preview with low- and high-frequency words N+1. The main findings were (a) an identity PB for word N+2 that was (b) primarily observed when word N+1 was of high frequency (i.e., an interaction between frequency of word N+1 and PB for word N+2), and (c) a parafoveal-on-foveal frequency effect of word N+1 for fixation durations on word N. We discuss implications for theories of serial attention shifts and parallel distributed processing of words during reading.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
In numerical processing, the functional role of Spatial-Numerical Associations (SNAs, such as the association of smaller numbers with left space and larger numbers with right space, the Mental Number Line hypothesis) is debated. Most studies demonstrate SNAs with lateralized responses, and there is little evidence that SNAs appear when no response is required. We recorded passive holding grip forces in no-go trials during number processing. In Experiment 1, participants performed a surface numerical decision task (“Is it a number or a letter?”). In Experiment 2, we used a deeper semantic task (“Is this number larger or smaller than five?”). Despite instruction to keep their grip force constant, participants' spontaneous grip force changed in both experiments: Smaller numbers led to larger force increase in the left than in the right hand in the numerical decision task (500–700 ms after stimulus onset). In the semantic task, smaller numbers again led to larger force increase in the left hand, and larger numbers increased the right-hand holding force. This effect appeared earlier (180 ms) and lasted longer (until 580 ms after stimulus onset). This is the first demonstration of SNAs with passive holding force. Our result suggests that (1) explicit motor response is not a prerequisite for SNAs to appear, and (2) the timing and strength of SNAs are task-dependent.
Science education researchers have developed a refined understanding of the structure of science teachers’ pedagogical content knowledge (PCK), but how to develop applicable and situation-adequate PCK remains largely unclear. A potential problem lies in the diverse conceptualisations of the PCK used in PCK research. This study sought to systematize existing science education research on PCK through the lens of the recently proposed refined consensus model (RCM) of PCK. In this review, the studies’ approaches to investigating PCK and selected findings were characterised and synthesised as an overview comparing research before and after the publication of the RCM. We found that the studies largely employed a qualitative case-study methodology that included specific PCK models and tools. However, in recent years, the studies focused increasingly on quantitative aspects. Furthermore, results of the reviewed studies can mostly be integrated into the RCM. We argue that the RCM can function as a meaningful theoretical lens for conceptualizing links between teaching practice and PCK development by proposing pedagogical reasoning as a mechanism and/or explanation for PCK development in the context of teaching practice.
One aspect of achieving a more sustainable chemical industry is the minimization of the usage of solvents and chemicals. Thus, optimization and development of chemical processes for large-scale production are favourably performed in small batches. The critical step in this approach is upscaling the batches from the small reaction systems to the large reactors mandatory for cost-efficient production in an industrial environment. Scaling up the bulk volume always goes along with increasing the surface where the reaction medium is in contact with the confining vessel. Since the volume scales with the cube of the linear dimension while the surface scales with the square, their ratio is size-dependent. The influence of reaction vessel walls can change the reaction performance. A number of phenomena occurring at the surface-liquid interface can affect reaction rates and yields, resulting in possible difficulties in predicting and extrapolating from small production scale to large industrial processes. The application of levitated droplets as containerless reaction vessels provides a promising possibility to avoid the above-mentioned issues.
In the presented work, an efficient coupling of acoustically levitated droplets to an ion mobility (IM) spectrometer operating at ambient conditions was designed for real-time monitoring of chemical reactions. The design of the system comprises noncontact sampling and ionization of the droplet, realised by laser desorption/ionization at 2.94 µm. The scope of the work includes fundamental studies aimed at understanding the laser irradiation of droplets enclosed in an acoustic field. Understanding this phenomenon is crucial for comprehending the temporal and spatial confinement of the generated ion plume, which influences the resolution of the system.
The set-up includes an acoustic trap, laser irradiation, and ion-manipulating electrostatic lenses operating at high voltage at ambient pressure. The complexity of the design needs to be fully considered for an effective ion transfer in the interface region between the levitated droplet and the IM spectrometer. For sampling and ionization, two distinct laser pulse lengths were evaluated, ns and µs. Irradiation via µs laser pulses provides several advantages: i) the droplet volume is not extensively impinged, as is the case for ns laser pulses, allowing sampling of only a small volume of the droplet; ii) the lower fluence results in less pronounced oscillations of the droplet confined in the acoustic field, so the droplet is not expelled from the acoustic field, which would lead to loss of the sample; iii) the mild laser irradiation results in better spatial and temporal ion plume confinement, leading to better resolution of the detected ion packets. Finally, this knowledge allows the application of ion optics necessary to induce ion flow between the droplet suspended in the acoustic field and the IM spectrometer. The ion optics, composed of two electrostatic lenses placed in the near vicinity of the droplet, allow effective focusing of the ion plume and its redirection directly to the IM spectrometer entrance. This novel coupling has proved successful for the detection of some simple molecules ionizable at the 2.94 µm wavelength. To further demonstrate the applicability of the system, a proof-of-principle reaction that fulfils the requirements of the system was selected and subjected to a comprehensive investigation of its performance. Herein, the reaction between N-Boc cysteine methyl ester and allyl alcohol was performed in a batch reactor and monitored on-line via 1H NMR to establish the reaction progress. With this additional assessment, it was confirmed that the thiol-ene coupling proceeds within the first 20 minutes of irradiation with a reaction yield above 50%, showing that the reaction can serve as a study case to assess the capabilities of the developed system.
Fronting of a non-finite VP across a finite main verb, akin to German "VP-topicalization", can also be found in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
Earthquake modeling is the key to a profound understanding of a rupture. Its kinematics or dynamics are derived from advanced rupture models that allow, for example, reconstruction of the direction and velocity of the rupture front or the evolving slip distribution behind it. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time history of the slip and rake on each sub-fault is inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints, such as smoothing.
As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry as a physics-based solution of an earthquake rupture. The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface guaranteed at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained and develops by itself without additional parametrization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D eikonal equation and assuming a linear relationship between rupture and shear wave velocity. This allowed us to propose a deeper and faster propagating rupture front, and the resulting upward refraction, as a new possible explanation for the apparent supershear rupture observed at the Earth's surface.
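In this framing, the rupture-front modelling amounts to solving the 2D eikonal equation for the rupture onset time $T_r$ with the rupture velocity assumed proportional to the local shear-wave speed:

\[
\left|\nabla T_{r}(\mathbf{x})\right|=\frac{1}{v_{r}(\mathbf{x})},\qquad v_{r}(\mathbf{x})=c\,v_{s}(\mathbf{x}),
\]

where $c$ denotes the assumed constant of proportionality between rupture and shear-wave velocity.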
The thesis investigates three aspects of earthquake inversion using PDR: (1) to test whether implementing a simplified rupture model with few parameters into a probabilistic Bayesian scheme without constraining geometric parameters is feasible, and whether this leads to fast and robust results that can be used for subsequent fast information systems (e.g., ground motion predictions). (2) To investigate whether combining broadband and strong-motion seismic records together with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion. (3) To investigate whether a complex rupture can be represented by the inversion of multiple PDR sources and for what type of earthquakes this is recommended.
I developed the PDR inversion approach and applied the joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake with a large, curved and slow rupturing shallow earthquake bounded by two faster and deeper smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks and a simple tsunami forward model.
The joint analysis of ground deformation and seismic data within a multiple source inversion also shed light on an earthquake triplet, which occurred in July 2022 in SE Iran. From the inversion and aftershock relocalization, I found indications for a vertical separation between the shallower mainshocks within the sedimentary cover and deeper aftershocks at the sediment-basement interface. The vertical offset could be caused by the ductile response of the evident salt layer to stress perturbations from the mainshocks.
The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion, capturing features of rather different, complex earthquakes. Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences from other finite-fault modeling methods.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, only little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce flood hazards of historic events that lead to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and that it depends on the choice of global hydrological model for only a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from the third. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or in the flood frequency distribution within the modeling chain.
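The flood-extent comparison relies on standard contingency-table scores for binary inundation maps. The sketch below shows three commonly used scores (hit rate, false-alarm ratio, critical success index); the exact score set and masking used in the thesis may differ.

```python
import numpy as np

def flood_extent_scores(simulated, observed):
    """Contingency-table scores for two binary flood-extent rasters.

    simulated, observed: boolean arrays of equal shape (True = inundated).
    Returns hit rate, false-alarm ratio, and critical success index.
    """
    sim = np.asarray(simulated, bool)
    obs = np.asarray(observed, bool)
    hits = np.sum(sim & obs)             # flooded in both model and observation
    misses = np.sum(~sim & obs)          # observed flooding missed by the model
    false_alarms = np.sum(sim & ~obs)    # modelled flooding not observed
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, false_alarm_ratio, csi

# Toy example with a 4-pixel "map"
sim = np.array([[True, True], [False, False]])
obs = np.array([[True, False], [True, False]])
print(flood_extent_scores(sim, obs))     # (0.5, 0.5, 0.333...)
```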
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies on the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigation of the isolated and combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the effect of increasing wind speed being the dominant factor.
In conclusion, this thesis highlights the potentials and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformational styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The superficial expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle, crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Amongst the proposed causes for the observed variation are variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand the mechanisms from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. The integration of independent data into a consistent model of the lithosphere provides additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling in order to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which was in turn used to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative, 3D density modelling verified by Bouguer gravity. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km, and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³, and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
In an excursion to another study, we demonstrate that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to the smearing of crustal velocities. Applying the method to the uppermost lithospheric mantle of the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion, thereby providing an important tool for the delineation of sub-crustal density trends.
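As a rough, hedged sketch of how such a gravity inversion can be set up (the actual implementation of the thesis is not reproduced here), the residual between observed and modelled Bouguer anomaly can be translated into a damped density correction for each crustal column using the infinite-slab approximation delta_g = 2*pi*G*delta_rho*h; column thickness and damping factor are placeholder assumptions:

    import numpy as np

    G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

    def density_update(g_obs, g_model, thickness, damping=0.5):
        """One damped inversion step: convert the Bouguer gravity residual [m/s^2]
        into a density correction [kg/m^3] for a crustal column of the given
        thickness [m], using the infinite-slab relation."""
        residual = g_obs - g_model
        delta_rho = residual / (2.0 * np.pi * G * thickness)
        return damping * delta_rho  # added to the starting density model and iterated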
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overriding continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption does not hold, I estimated the transient thermal field based on the results of these analyses.
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Shortening in northern Argentina can therefore only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones residing in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, do not reduce the strength of the lithosphere sufficiently for reactivation. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Thermal weakening, and potentially lubrication of the inherited discontinuities, locally weakens the lithosphere such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
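To make the rheological reasoning tangible, a minimal yield-strength-envelope sketch is given below: the effective strength at each depth is the weaker of a frictional (Byerlee-type) brittle strength and a dislocation-creep flow strength. All material parameters are generic placeholders, not the values used in this dissertation:

    import numpy as np

    R = 8.314  # gas constant [J mol^-1 K^-1]

    def brittle_strength(z, rho=2800.0, g=9.81, f=0.6, lam=0.36):
        """Frictional differential stress [Pa] at depth z [m], assuming a
        hydrostatic pore-fluid factor lam."""
        return f * rho * g * z * (1.0 - lam)

    def ductile_strength(T, strain_rate=1e-15, A=1e-25, n=3.0, Q=2.76e5):
        """Dislocation-creep differential stress [Pa] at temperature T [K] for a
        power-law rheology with placeholder creep parameters."""
        return (strain_rate / A) ** (1.0 / n) * np.exp(Q / (n * R * T))

    def yield_strength(z, T):
        """Effective strength: the weaker of the brittle and ductile mechanisms."""
        return np.minimum(brittle_strength(z), ductile_strength(T))

Integrating such an envelope over depth distinguishes a strong, fully coupled column from one with a weak ductile lower crust or mantle, which is the basis of the coupling argument made above.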
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes
(2017)
Nanoporous carbon-based materials are of particular interest for both science and industry due to their exceptional properties, such as a large surface area, high pore volume, high electrical conductivity as well as high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons have proved useful in various energy- and environment-related applications including energy storage and conversion, catalysis, gas sorption and separation technologies. The synthesis of nanoporous carbons classically involves thermal carbonization of carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol), etc.) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly and make use of hazardous chemicals, hindering their application for large-scale production. Furthermore, control over the carbon material properties is challenging owing to the relatively unpredictable processes at the high carbonization temperatures.
In the present thesis, nanoporous carbon-based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy does not require any additional carbon sources or classical hard or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions, including Zn2+, Cu2+, Ni2+, and Co2+. The structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of squarate ions to Zn2+ yields porous 3D cubic crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids, which evolve at the centers of the low-index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can easily be transformed into the respective carbon-based materials by heat treatment at elevated temperatures in a nitrogen atmosphere, followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1, respectively, are achieved, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure.
Owing to these advantageous properties, the resulting carbon-based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material, showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s-1.
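For orientation, a specific capacitance quoted from cyclic voltammetry is commonly evaluated (assuming a symmetric potential window ΔV, electrode mass m and scan rate ν) as

\[ C_{\mathrm{sp}} = \frac{\int I \, \mathrm{d}V}{2\, m\, \nu\, \Delta V}, \]

and the capacitance retained at higher scan rates, here 67% when going from 5 to 200 mV s-1, typically reflects how well the hierarchical pore system keeps ion transport paths short within the shorter cycle time.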
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as a high-surface-area support material and decorated with nickel nanoparticles via incipient wetness impregnation. The resulting composite material combines a high surface area and a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, the particles allow for good packing of a fixed-bed flow reactor along with high column efficiency and a minimized pressure drop across the packed bed. The composite is therefore employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the conventional problem of column blocking.
Regarding the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the deliberate introduction of heteroatoms (e.g. N, B, S, P) into the carbon structures in order to alter properties such as wettability, surface polarity and the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal complexes can open up a platform of highly functional materials for all applications that involve surface processes.
The key to reducing the energy required for specific transformations in a selective manner is the use of a catalyst, a very small molecular platform that decides which type of energy to use. The field of photocatalysis exploits light energy to shape one type of molecule into other, more valuable and useful ones.
However, many challenges arise in this field: for example, the catalysts employed are usually based on metal derivatives, whose abundance is limited, which cannot be recycled and which are expensive. Therefore, carbon nitride materials are used in this work to expand the horizons of photocatalysis.
Carbon nitrides are organic materials which can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods and shaped to enable new types of processes.
Indeed, they enabled a new photocatalytic synthetic strategy, the dichloromethylation of enones by dichloromethyl radicals generated in situ from chloroform, providing a novel route to building blocks for the production of active pharmaceutical compounds.
The ductility of these materials then allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, showing the great potential of such a flexible technology in photocatalysis.
Afterwards, their ability to store charges was exploited in the reduction of organic substrates under dark conditions, yielding new insights into multisite proton-coupled electron transfer processes.
Furthermore, the combination of carbon nitrides with flavins allowed the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials that may help to unveil further scientific discoveries and to develop a more sustainable future.
The paper investigates the question of the sustainability of capacity-building initiatives by reporting on the multiplication training within the DIES NMT Programme on quality assurance in Uganda and how it could make use of the social capital within the existing quality assurance network to sustain itself and address challenges during its implementation. The purpose of the article is to explore the nature of the networking (social and institutional) established by the Ugandan Universities Quality Assurance Forum (UUQAF) and to share the strategies used in this training experience for future sustainable capacity-building training initiatives in emerging economies. The paper employs a qualitative research method to describe and analyse the training framework based on primary and secondary documents.
On Track to Success?
(2022)
Many countries consider expanding vocational curricula in secondary education to boost skills and labour market outcomes among non-university-bound students. However, critics fear this could divert other students from more profitable academic education. We study labour market returns to vocational education in England, where until recently students chose between a vocational track, an academic track and quitting education at age 16. Identification is challenging because self-selection is strong and because students’ next-best alternatives are unknown. Against this backdrop, we leverage multiple instrumental variables to estimate margin-specific treatment effects, i.e., causal returns to vocational education for students at the margin with academic education and, separately, for students at the margin with quitting education. Identification comes from variation in distance to the nearest vocational provider conditional on distance to the nearest academic provider (and vice versa), while controlling for granular student, school and neighbourhood characteristics. The analysis is based on population-wide administrative education data linked to tax records. We find that the vast majority of marginal vocational students are indifferent between vocational and academic education. For them, vocational enrolment substantially decreases earnings at age 30. This earnings penalty grows with age and is due to wages, not employment. However, consistent with comparative advantage, the penalty is smaller for students with higher revealed preferences for the vocational track. For the few students at the margin with no further education, we find merely tentative evidence of increased employment and earnings from vocational enrolment.
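A stylized sketch of the two-stage least squares logic behind one such margin-specific estimate is given below (the paper's actual specification, with multiple instruments, margin-specific identification and rich controls, is considerably more involved; all variable names are placeholders):

    import numpy as np

    def two_stage_least_squares(y, d, z, x):
        """2SLS point estimate for a single endogenous treatment d (e.g. vocational
        enrolment) instrumented by z (e.g. distance to the nearest vocational
        provider), with exogenous controls x (e.g. distance to the nearest academic
        provider and student, school and neighbourhood characteristics).
        y, d, z are length-n arrays; x is an (n, k) array."""
        n = len(y)
        ones = np.ones((n, 1))
        X_first = np.column_stack([ones, z, x])        # first stage: d on instrument and controls
        d_hat = X_first @ np.linalg.lstsq(X_first, d, rcond=None)[0]
        X_second = np.column_stack([ones, d_hat, x])   # second stage: y on fitted d and controls
        beta = np.linalg.lstsq(X_second, y, rcond=None)[0]
        return beta[1]                                 # return to vocational education at this margin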
We review the effects of clumping on the profiles of resonance doublets. By allowing the ratio of the doublet oscillator strengths to be a free parameter, we demonstrate that doublet profiles contain more information than is normally utilized. In clumped (or porous) winds, this ratio can lie between unity and the ratio of the f-values, and can change as a function of velocity and time, depending on the fraction of the stellar disk that is covered by material moving at a particular velocity at a given moment. Using these insights, we present the results of SEI modeling of a sample of B supergiants, ζ Pup, and a time series for a star whose terminal velocity is low enough to make the components of its Si IV λλ1400 doublet independent. These results are interpreted within the framework of the Oskinova et al. (2007) model, and demonstrate how the doublet profiles can be used to extract information about wind structure.
During the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. Because non-human proteins introduced into the human body cause a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All clinically approved protein-polymer conjugates contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method that addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a rather uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, some of which affect the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner, using enzymatic catalysis. Sortase A is used as the enzyme: It is a well-studied transpeptidase which is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side-chain deprotection, these constructs were first used to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When equimolar amounts of reactants were used, combining Ni2+ ions with a histidine placed after the recognition sequence, which removes the cleaved peptide from the equilibrium, maximized product formation with conversions of up to 70%.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates, most likely because the protein termini were not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, for the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
In the context of ecological risk assessment of chemicals, individual-based population models hold great potential to increase the ecological realism of current regulatory risk assessment procedures. However, developing and parameterizing such models is time-consuming and often ad hoc. Using standardized, tested submodels of individual organisms would make individual-based modelling more efficient and coherent. In this thesis, I explored whether Dynamic Energy Budget (DEB) theory is suitable for being used as a standard submodel in individual-based models, both for ecological risk assessment and theoretical population ecology. First, I developed a generic implementation of DEB theory in an individual-based modeling (IBM) context: DEB-IBM. Using the DEB-IBM framework I tested the ability of the DEB theory to predict population-level dynamics from the properties of individuals. We used Daphnia magna as a model species, where data at the individual level was available to parameterize the model, and population-level predictions were compared against independent data from controlled population experiments. We found that DEB theory successfully predicted population growth rates and peak densities of experimental Daphnia populations in multiple experimental settings, but failed to capture the decline phase, when the available food per Daphnia was low. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology. In addition to theoretical explorations, we tested the potential of DEB theory combined with IBMs to extrapolate effects of chemical stress from the individual to the population level. For this we used information at the individual level on the effect of 3,4-dichloroaniline on Daphnia. The individual data suggested direct effects on reproduction but no significant effects on growth. Assuming such direct effects on reproduction, the model was able to accurately predict the population response to increasing concentrations of 3,4-dichloroaniline. We conclude that DEB theory combined with IBMs holds great potential for standardized ecological risk assessment based on ecological models.
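To illustrate the kind of per-individual bookkeeping that DEB-IBM iterates at each time step, a heavily simplified, uncalibrated sketch is given below; the actual standard DEB equations, kappa-rule allocation and Daphnia parameterization are more elaborate, and all parameter values here are placeholders:

    def deb_step(E, V, f, dt, p_Am=1.0, v=0.1, kappa=0.8, p_M=0.2, E_G=0.5):
        """One simplified DEB-like update for an individual.
        E: reserve energy, V: structural volume, f: scaled food availability (0..1).
        Assimilation scales with surface area (V**(2/3)); a fraction kappa of the
        mobilized reserve pays somatic maintenance and growth, the remainder goes
        to maturity/reproduction. Placeholder parameters, not a calibrated model."""
        L = V ** (1.0 / 3.0)
        p_A = p_Am * f * V ** (2.0 / 3.0)      # assimilation from food
        p_C = E * v / L                        # simplified reserve mobilization
        growth = max(kappa * p_C - p_M * V, 0.0) / E_G
        repro = (1.0 - kappa) * p_C            # flux routed to maturity/reproduction
        return E + (p_A - p_C) * dt, V + growth * dt, repro * dt

In an IBM, one such update runs for every individual per time step, with starvation, reproduction and mortality rules layered on top.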
Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs, e.g., is singing). It has been claimed that infants learn NADs implicitly and associatively through passive listening and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate if older children are able to learn NADs, Lammertink et al. (2019) recently developed a word-monitoring serial reaction time (SRT) task and could show that 6–11-year-old children learned the NADs, as their reaction times (RTs) increased when they were presented with violated NADs. In the current study we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children aged 4–8 years in a remote, web-based, game-like setting (whack-a-mole). Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children did a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element of the NAD to complete the stimuli. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs; they show the expected differences in RTs in the SRT task and could transfer the NAD rule in the Stem Completion task. We discuss these results with respect to the development of NAD learning in childhood and the practical impact and limitations of collecting these data in a web-based setting.
In late summer, migratory bats of the temperate zone face the challenge of accomplishing two energy-demanding tasks almost at the same time: migration and mating. Both require information and involve search efforts, such as localizing prey or finding potential mates. In non-migrating bat species, playback studies showed that listening to vocalizations of other bats, both con- and heterospecifics, may help a recipient bat to find foraging patches and mating sites. However, we are still unaware of the degree to which migrating bats depend on con- or heterospecific vocalizations for identifying potential feeding or mating opportunities during nightly transit flights. Here, we investigated the vocal responses of Nathusius’ pipistrelle bats, Pipistrellus nathusii, to simulated feeding and courtship aggregations at a coastal migration corridor. We presented migrating bats with either feeding buzzes or courtship calls of their own or of a heterospecific migratory species, the common noctule, Nyctalus noctula. We expected that during migratory transit flights, simulated feeding opportunities would be particularly attractive to bats, as would simulated mating opportunities, which may indicate suitable roosts for a stopover. However, we found that compared to the natural silence of both pre- and post-playback phases, bat call activity did not change during the playback of conspecific feeding sounds, whereas P. nathusii echolocation call activity increased during simulated feeding of N. noctula. In contrast, the call activity of P. nathusii decreased during the playback of conspecific courtship calls, while no response could be detected when heterospecific call types were broadcast. Our results suggest that while on migratory transits, P. nathusii circumnavigate conspecific mating aggregations, possibly to save time or to reduce the risks associated with social interactions where aggression due to territoriality might be expected. This avoidance behavior could be a result of optimization strategies by P. nathusii when performing long-distance migratory flights, and it could also explain the lack of a response to simulated conspecific feeding. However, the observed increase in activity in response to simulated feeding of N. noctula suggests that P. nathusii individuals may be eavesdropping on other aerial hawking insectivorous species during migration, especially if these occupy a slightly different foraging niche.
Starting in 2009, the German state of Saxony distributed sports club membership vouchers among all 33,000 third graders in the state. The policy’s objective was to encourage them to develop a long-term habit of exercising. In 2018, we carried out a large register-based survey among several cohorts in Saxony and two neighboring states. Our difference-in-differences estimations show that, even after a decade, awareness of the voucher program was significantly higher in the treatment group. We also find that youth received and redeemed the vouchers. However, we do not find significant short- or long-term effects on sports club membership, physical activity, overweightness, or motor skills.
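A minimal sketch of the difference-in-differences comparison described here, using the statsmodels formula interface; the data frame, column names and clustering variable are hypothetical, and the actual survey design (several cohorts, two control states, covariates) is richer:

    import statsmodels.formula.api as smf

    def did_estimate(df):
        """Difference-in-differences estimate of the voucher effect.
        df is a hypothetical respondent-level frame with columns:
          outcome        - e.g. current sports club membership (0/1)
          saxony         - 1 if the respondent lives in the treatment state
          voucher_cohort - 1 if the respondent's school cohort was covered
          school_id      - cluster identifier for the standard errors
        The interaction coefficient carries the treatment effect."""
        fit = smf.ols("outcome ~ saxony * voucher_cohort", data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["school_id"]}
        )
        return fit.params["saxony:voucher_cohort"]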
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
While the concept of transitional justice and its range of measures have gained importance at the international level for coming to terms with major crimes of the past, colonial crimes and mass violence committed by Western actors have so far not been addressed by transitional justice. In this chapter, the struggle of the Herero and Nama for justice for the genocide committed against their ancestors by Germany from 1904 to 1908, and the challenges arising from it, are set in relation to conceptual debates in the field of transitional justice. Building on current debates in the field that suggest more structural and transformative conceptualizations of transitional justice and an approach ‘from below’, it is argued that decolonial activism of formerly colonized communities and transitional justice debates can inform each other in a fruitful dialogue to formulate suggestions for a process towards post-colonial justice.
Linguistic and psycholinguistic accounts based on the study of English may prove unreliable as guides to sentence processing in even closely related languages. The present study illustrates this claim in a test of sentence interpretation by German-, Italian-, and English-speaking adults. Subjects were presented with simple transitive sentences in which contrasts of (1) word order, (2) agreement, (3) animacy, and (4) stress were systematically varied. For each sentence, subjects were asked to state which of the two nouns was the actor. The results indicated that Americans relied overwhelmingly on word order, using a first-noun strategy in NVN and a second-noun strategy in VNN and NNV sentences. Germans relied on both agreement and animacy. Italians showed extreme reliance on agreement cues. In both German and Italian, stress played a role in terms of complex interactions with word order and agreement. The findings were interpreted in terms of the “competition model” of Bates and MacWhinney (in H. Winitz (Ed.), Annals of the New York Academy of Sciences Conference on Native and Foreign Language Acquisition. New York: New York Academy of Sciences, 1982), in which cue validity is considered to be the primary determinant of cue strength. According to this model, cues are said to be high in validity when they are also high in applicability and reliability.
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to gain some insight into these questions, we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal clear differences in the lexicon dynamics of populations that acquire words solely by introspection compared with populations that learn using MCI or a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate in an introspective population, eventually collapsing to a single form that is associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained and every meaning is associated with a unique word. We also investigated the effect of increasing the meaning space and showed that it speeds up lexicon divergence for all populations, irrespective of their acquisition method.
Objective: There is a lack of brief rating scales for the reliable assessment of psychotherapeutic skills that do not require intensive rater training and/or a high level of expertise. The objective is therefore to validate a 14-item version of the Clinical Communication Skills Scale (CCSS-S).
Methods: Using a sample of N = 690 video-based ratings of role-plays with simulated patients, we calculated a confirmatory factor analysis and an exploratory structural equation model (ESEM), assessed convergent validities, and determined inter-rater reliabilities, which were compared across raters who were either psychology students, advanced psychotherapy trainees, or experts.
Results: Correlations with other competence rating scales were high (rs = 0.86–0.89). The intraclass correlations ranged between moderate and good [ICC(2,2) = 0.65–0.80], with student raters yielding the lowest values. The one-factor model replicated the data only marginally, but the internal consistencies were excellent (α = 0.91–0.95). The ESEM yielded a two-factor solution (Collaboration and Structuring, and Exploration Skills).
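For reference, the ICC(2,2) values reported above correspond to a two-way random-effects, absolute-agreement coefficient for the average of k = 2 raters; a generic textbook computation from the ANOVA mean squares (not the authors' analysis code) is sketched below:

    import numpy as np

    def icc_2k(ratings):
        """ICC(2,k) after Shrout & Fleiss for an (n_targets x k_raters) matrix of
        complete ratings: two-way random effects, absolute agreement, average of
        k raters."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # targets
        ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # raters
        ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)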
Conclusion: The CCSS-S is a brief and valid rating scale that reliably assesses basic communication skills, which is particularly useful for psychotherapy training using standardized role-plays. To ensure good inter-rater reliabilities, it is still advisable to employ raters with at least some clinical experience. Future studies should further investigate the one- or two-factor structure of the instrument.
At different times and places, civic engagement in nonviolent resistance (NVR) has repeatedly been shown to be an effective tool in times of conflict for initiating societal change from below. History offers both successes (Mahatma Gandhi in India) and failures (the Tiananmen Square protests in China).
Along with the recognition of this duality between transformative potential and stark consequences, the historical development of NVR was accompanied by an emerging scholarly debate, fractured along disputes about the purpose, character and effectiveness of nonviolent actions taken by civil society stakeholders engaged in making their voices heard. One of the field's current points of interest is the examination of the long-term effects of NVR movements that result in societal transformation on the stability and adequacy of a subsequently altered or emerging democracy, suggesting that NVR contributes positively to the sustainable and representative design of an egalitarian governing system.
The conclusion of the Nepalese civil war in 2006 might appear to be an unambiguous example of this phenomenon, but it simultaneously raises the question of why no transitional process focusing on the needs of the victims was successfully implemented.
This paper describes the standardization problems that come up in a diachronic corpus: it has to cope with differing standards with regard to diplomaticity, annotation, and header information. Such highly heterogeneous texts must be standardized to allow for comparative research without (too much) loss of information.
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal content and investigate different approaches dealing with this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue such as adjacency pairs, corrections and updating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors Discourse Referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to arrive at fully developed dialogue moves. This task accomplished, the standard SDRT machinery (tagged SDRSs, rhetorical relations, the update mechanism, and the Maximize Discourse Coherence constraint) generates coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
Two examples of our biophotonic research utilizing nanoparticles are presented, namely laser-based fluoroimmuno analysis and in-vivo optical oxygen monitoring. Results of the work include significantly enhanced sensitivity of a homogeneous fluorescence immunoassay and markedly improved spatial resolution of oxygen gradients in root nodules of a legume species.
Collisions of black holes and neutron stars, called mixed binaries in the following, are interesting for at least two reasons. Firstly, they are expected to emit a large amount of energy as gravitational waves, which could be measured by new detectors; the form of those waves is expected to carry information about the internal structure of such systems. Secondly, collisions of such objects are the prime suspects for short gamma-ray bursts, although the exact mechanism of the energy emission is still unknown. In the past, the Newtonian theory of gravitation and modifications to it were often used for numerical simulations of collisions of mixed binary systems. However, close to such objects the gravitational forces are so strong that the use of General Relativity is necessary for accurate predictions. General relativistic simulations pose many problems; however, systems of two neutron stars and systems of two black holes have been studied extensively in the past and many of those problems have been solved. One of the remaining problems has been the treatment of hydrodynamics at excision boundaries. Inside excision regions, no evolution is carried out; such regions are often used inside black holes to circumvent instabilities of the numerical methods near the singularity. Methods to handle hydrodynamics at such boundaries are described and tested in this work. One important test and the first application of those methods has been the simulation of a neutron star collapsing to a black hole. The success of these simulations, and in particular the performance of the excision methods, was an important step towards simulations of mixed binaries. Initial data are necessary for every numerical simulation; however, the creation of such initial data in general relativity is in general very complicated. In this work it is shown how to obtain initial data for mixed binary systems using an existing method for initial data of two black holes. These initial data have been used for evolutions of such systems, and the problems encountered are discussed in this work. One of the problems is instabilities due to the different methods, which could be solved by dissipation of appropriate strength. Another problem is the expected drift of the black hole towards the neutron star. It is shown that this can be solved by using special gauge conditions, which prevent the black hole from moving on the computational grid. The methods and simulations shown in this work are only the starting point for a much more detailed study of mixed binary systems. Better methods, models and simulations with higher resolution and even better gauge conditions will be the focus of future work. It is expected that such detailed studies can give information about the emitted gravitational waves, which is important in view of the newly built gravitational wave detectors. In addition, these simulations could give insight into the processes responsible for short gamma-ray bursts.
The natural abundance of Coiled Coil (CC) motifs in cytoskeleton and extracellular matrix proteins suggests that CCs play an important role as passive (structural) and active (regulatory) mechanical building blocks. CCs are self-assembled superhelical structures consisting of 2-7 α-helices. Self-assembly is driven by hydrophobic and ionic interactions, while the helix propensity of the individual helices contributes additional stability to the structure. As a direct result of this simple sequence-structure relationship, CCs serve as templates for protein design and sequences with a pre-defined thermodynamic stability have been synthesized de novo. Despite this quickly increasing knowledge and the vast number of possible CC applications, the mechanical function of CCs has been largely overlooked and little is known about how different CC design parameters determine the mechanical stability of CCs. Once available, this knowledge will open up new applications for CCs as nanomechanical building blocks, e.g. in biomaterials and nanobiotechnology.
With the goal of shedding light on the sequence-structure-mechanics relationship of CCs, a well-characterized heterodimeric CC was utilized as a model system. The sequence of this model system was systematically modified to investigate how different design parameters affect the CC response when the force is applied to opposing termini in a shear geometry or separated in a zipper-like fashion from the same termini (unzip geometry). The force was applied using an atomic force microscope set-up and dynamic single-molecule force spectroscopy was performed to determine the rupture forces and energy landscape properties of the CC heterodimers under study. Using force as a denaturant, CC chain separation is initiated by helix uncoiling from the force application points. In the shear geometry, this allows uncoiling-assisted sliding parallel to the force vector or dissociation perpendicular to the force vector. Both competing processes involve the opening of stabilizing hydrophobic (and ionic) interactions. Also in the unzip geometry, helix uncoiling precedes the rupture of hydrophobic contacts.
In a first series of experiments, the focus was placed on canonical modifications in the hydrophobic core and the helix propensity. Using the shear geometry, it was shown that both a reduced core packing and helix propensity lower the thermodynamic and mechanical stability of the CC; however, with different effects on the energy landscape of the system. A less tightly packed hydrophobic core increases the distance to the transition state, with only a small effect on the barrier height. This originates from a more dynamic and less tightly packed core, which provides more degrees of freedom to respond to the applied force in the direction of the force vector. In contrast, a reduced helix propensity decreases both the distance to the transition state and the barrier height. The helices are ‘easier’ to unfold and the remaining structure is less thermodynamically stable so that dissociation perpendicular to the force axis can occur at smaller deformations.
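In dynamic single-molecule force spectroscopy, the energy-landscape parameters referred to here are commonly extracted from the dependence of the most probable rupture force on the loading rate r via the standard Bell-Evans relation (quoted only for orientation, not as the specific analysis of this thesis):

\[ F^{*} = \frac{k_{B}T}{x_{\beta}} \, \ln\!\left( \frac{r\, x_{\beta}}{k_{0}\, k_{B}T} \right), \]

where x_beta is the distance to the transition state and k_0 the force-free dissociation rate, through which the barrier height enters.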
Having elucidated how canonical sequence modifications influence CC mechanics, the pulling geometry was investigated in the next step. Using one and the same sequence, the force application points were exchanged and two different shear and one unzipping geometry were compared. It was shown that the pulling geometry determines the mechanical stability of the CC. Different rupture forces were observed in the different shear as well as in the unzipping geometries, suggesting that chain separation follows different pathways on the energy landscape. Whereas the difference between CC shearing and unzipping was anticipated and has also been observed for other biological structures, the observed difference for the two shear geometries was less expected. It can be explained with the structural asymmetry of the CC heterodimer. It is proposed that the direction of the α-helices, the different local helix propensities and the position of a polar asparagine in the hydrophobic core are responsible for the observed difference in the chain separation pathways. In combination, these factors are considered to influence the interplay between processes parallel and perpendicular to the force axis.
To obtain more detailed insights into the role of helix stability, helical turns were reinforced locally using artificial constraints in the form of covalent and dynamic ‘staples’. A covalent staple bridges two adjacent helical turns, thus protecting them against uncoiling. The staple was inserted directly at the point of force application in one helix or in the same terminus of the other helix, which did not experience the force directly. It was shown that preventing helix uncoiling at the point of force application reduces the distance to the transition state while slightly increasing the barrier height. This confirms that helix uncoiling is critically important for CC chain separation. When inserted into the second helix, this stabilizing effect is transferred across the hydrophobic core and protects the force-loaded turns against uncoiling. If both helices were stapled, no additional increase in mechanical stability was observed. When the covalent staple was replaced with a dynamic metal-coordination bond, a smaller decrease in the distance to the transition state was observed, suggesting that the staple opens up while the CC is under load.
Using fluorinated amino acids as another type of non-natural modification, it was investigated how the enhanced hydrophobicity and the altered packing at the interface influences CC mechanics. The fluorinated amino acid was inserted into one central heptad of one or both α-helices. It was shown that this substitution destabilized the CC thermodynamically and mechanically. Specifically, the barrier height was decreased and the distance to the transition state increased. This suggests that a possible stabilizing effect of the increased hydrophobicity is overruled by a disturbed packing, which originates from a bad fit of the fluorinated amino acid into the local environment. This in turn increases the flexibility at the interface, as also observed for the hydrophobic core substitution described above. In combination, this confirms that the arrangement of the hydrophobic side chains is an additional crucial factor determining the mechanical stability of CCs.
In conclusion, this work shows that knowledge of the thermodynamic stability alone is not sufficient to predict the mechanical stability of CCs. It is the interplay between helix propensity and hydrophobic core packing that defines the sequence-structure-mechanics relationship. In combination, both parameters determine the relative contribution of processes parallel and perpendicular to the force axis, i.e. helix uncoiling and uncoiling-assisted sliding as well as dissociation. This new mechanistic knowledge provides insight into the mechanical function of CCs in tissues and opens up the road for designing CCs with pre-defined mechanical properties. The library of mechanically characterized CCs developed in this work is a powerful starting point for a wide spectrum of applications, ranging from molecular force sensors to mechanosensitive crosslinks in protein nanostructures and synthetic extracellular matrix mimics.
In this paper, we study the effect of exogenous global crop price changes on migration from agricultural and non-agricultural households in Sub-Saharan Africa. We show that, similar to the effect of positive local weather shocks, the effect of a locally-relevant global crop price increase on household out-migration depends on the initial household wealth. Higher international producer prices relax the budget constraint of poor agricultural households and facilitate migration. The magnitude of a standardized price effect is approximately one third of the standardized effect of a local weather shock. Unlike positive weather shocks, which mostly facilitate internal rural-urban migration, positive income shocks through rising producer prices only increase migration to neighboring African countries, likely due to the simultaneous decrease in real income in nearby urban areas. Finally, we show that while higher producer prices induce conflict, conflict does not play a role in the household decision to send a member as a labor migrant.
We apply the 3-dimensional radiative transport code Wind3D to 3D hydrodynamic models of Corotating Interaction Regions to fit the detailed variability of Discrete Absorption Components observed in Si iv UV resonance lines of HD 64760 (B0.5 Ib). We discuss important effects of the hydrodynamic input parameters on these large-scale equatorial wind structures that determine the detailed morphology of the DACs computed with 3D transfer. The best-fit model reveals that the CIR in HD 64760 is produced by a source at the base of the wind that lags behind the stellar surface rotation. The non-corotating coherent wind structure is an extended density wave produced by a local increase of only 0.6% in the smooth symmetric wind mass-loss rate.
Transitory starch plays a central role in the life cycle of plants. Many aspects of this important metabolism remain unknown; however, starch granules provide insight into this persistent metabolic process. Therefore, monitoring alterations in starch granules with high temporal resolution provides one significant avenue to improve understanding. Here, a previously established method that combines LCSM and safranin-O staining for in vivo imaging of transitory starch granules in leaves of Arabidopsis thaliana was employed to demonstrate, for the first time, the alterations in starch granule size and morphology that occur both throughout the day and during leaf aging. Several starch-related mutants were included, which revealed differences among the generated granules. In ptst2 and sex1-8, the starch granules in old leaves were much larger than those in young leaves; however, the typical flattened discoid morphology was maintained. In ss4 and dpe2/phs1/ss4, the morphology of starch granules in young leaves was altered, with a more rounded shape observed. With leaf development, the starch granules became spherical exclusively in dpe2/phs1/ss4. Thus, the presented data provide new insights to contribute to the understanding of starch granule morphogenesis.
The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited area mode (ICON-LAM) with a horizontal grid scale of 3 km.
The aim of this thesis was to investigate the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. With its default settings, ICON-LEM does not represent the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The sea-ice scheme implemented in ICON does not include a snow layer on sea ice, which causes a too slow response of the sea-ice surface temperature to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the sea-ice parameterization implemented in ICON was extended with an adapted heat capacity term.
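Schematically, the surface energy balance that such a prognostic sea-ice skin-temperature scheme solves can be written in a generic textbook form (not the exact ICON formulation) as

\[ C_{\mathrm{eff}} \, \frac{\partial T_{s}}{\partial t} = SW_{\mathrm{net}} + LW_{\downarrow} - \varepsilon \sigma T_{s}^{4} - H - LE + \frac{k_{i}}{h_{i}}\,\bigl(T_{\mathrm{bot}} - T_{s}\bigr), \]

where the last term is the conductive flux through ice of thickness h_i; reducing the effective heat capacity C_eff, which is roughly what an insulating snow layer does for the skin temperature, lets T_s respond faster to changes in the atmospheric fluxes.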
The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed, due to biases in the downwelling long-wave radiation and the lack of complex surface structures such as leads. The large-eddy-resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux under different weather conditions.
The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent interactions on small temporal and spatial scales and help to further develop parameterizations, also for application in regional and global models.
Numerical magnitude information is assumed to be spatially represented in the form of a mental number line defined with respect to a body-centred, egocentric frame of reference. In this context, spatial language skills such as mastery of verbal descriptions of spatial position (e.g., in front of, behind, to the right/left) have been proposed to be relevant for grasping spatial relations between numerical magnitudes on the mental number line. We examined 4- to 5-year-olds’ spatial language skills in tasks that allow responses in egocentric and allocentric frames of reference, as well as their relative understanding of numerical magnitude (assessed by a number word comparison task). In addition, we evaluated influences of children’s absolute understanding of numerical magnitude assessed by their number word comprehension (montring different numbers using their fingers) and of their knowledge of numerical sequences (determining predecessors and successors as well as identifying missing dice patterns of a series). Results indicated that when considering responses that corresponded to the egocentric perspective, children’s spatial language was associated significantly with their relative numerical magnitude understanding, even after controlling for covariates such as children’s SES, mental rotation skills, and also absolute magnitude understanding or knowledge of numerical sequences. This suggests that the use of egocentric reference frames in spatial language may facilitate the spatial representation of numbers along a mental number line and thus seems important for preschoolers’ relative understanding of numerical magnitude.
A fine-grained slope that exhibits slow movement rates was investigated to understand how geohydrological processes contribute to the consecutive development of mass movements in the Vorarlberg Alps, Austria. For that purpose, intensive hydrometeorological, hydrogeological and geotechnical observations as well as surveying of surface movement rates were conducted during 1998–2001. Subsurface water dynamics at the creeping slope turned out to be dominated by a three-dimensional pressure system. The pressure reaction is triggered by fast infiltration of surface water and subsequent lateral water flow in the south-western part of the hillslope. The related pressure signal was shown to propagate further downhill, causing fast reactions of the piezometric head at 5.5 m depth on a daily time scale. The observed pressure reactions might belong to a temporary hillslope water body that extends further downhill. The related buoyancy forces could be one of the driving forces of the mass movement. A physically based hydrological model was adopted to simultaneously model surface and subsurface water dynamics, including evapotranspiration and runoff production. It was possible to reproduce surface runoff and the observed pressure reactions in principle. However, as the soil hydraulic functions were only estimated from pedotransfer functions, a quantitative comparison between observed and simulated subsurface dynamics is not feasible. Nevertheless, the results suggest that it is possible to reconstruct important spatial structures based on sparse field observations, which allow reasonable simulations with a physically based hydrological model.
Development of competence-oriented curricula is still an important theme in informatics education. Unfortunately, informatics curricula that include the domain of logic programming are still input-oriented or lack detailed competence descriptions. Therefore, the development of a competence model and of descriptions of learning outcomes is essential for the learning process in this domain. Prior research developed both. The next research step is to formulate test items to measure the described learning outcomes. This article describes this procedure and exemplifies test items. It also relates a school test to the items and shows which misconceptions and typical errors are important to discuss in class. The test results can also confirm or disprove the competence model. Therefore, this school test is important for theoretical research as well as for the concrete planning of lessons. Quantitative analysis in schools is important for the evaluation and improvement of informatics education.
Clumping in Galactic WN stars : a comparison of mass loss rates from UV/optical & radio diagnostics
(2007)
The mass loss rates and other parameters of a large sample of Galactic WN stars have been revised by Hamann et al. (2006), using the most up-to-date Potsdam Wolf-Rayet (PoWR) model atmospheres. For a sub-sample of these stars, measurements of their radio free-free emission exist. After harmonizing the adopted distances and terminal wind velocities, we compare the mass loss rates obtained from the two diagnostics. The differences are discussed as a possible consequence of a different clumping contrast in the line-forming and radio-emitting regions.
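As a rough guide to how such a clumping contrast would show up (a textbook scaling, not a result quoted from the article): both the UV/optical emission lines and the radio free-free flux are density-squared diagnostics, so a mass-loss rate derived under the assumption of a smooth wind overestimates the true rate by roughly the square root of the clumping factor $D$ in the respective emitting region,

$$\dot{M}_{\rm derived} \simeq \dot{M}_{\rm true}\,\sqrt{D}, \qquad \frac{\dot{M}_{\rm UV/opt}}{\dot{M}_{\rm radio}} \simeq \sqrt{\frac{D_{\rm line}}{D_{\rm radio}}}.$$

A ratio of the two empirical rates that differs from unity would therefore indicate that the line-forming region close to the star and the radio photosphere far out in the wind are clumped to a different degree.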
Dementia, as one of the most prevalent diseases, calls for a better understanding of the central mechanisms responsible for clinical symptoms and necessitates improvement of current diagnostic capabilities. The brainstem nucleus locus coeruleus (LC) is a promising target for early diagnosis because of its early structural alterations and its relationship to the functional disturbances in patients. In this study, we applied our improved method of localisation-based LC resting-state fMRI to investigate differences in central sensory signal processing by comparing the functional connectivity (fc) of a patient group with mild cognitive impairment (MCI, n = 28) and an age-matched healthy control group (n = 29). MCI and control participants could be differentiated by their Mini-Mental State Examination (MMSE) scores (p < .001) and LC intensity ratio (p = .010). In the fMRI, LC fc to the anterior cingulate cortex (FDR p < .001) and left anterior insula (FDR p = .012) was elevated, and LC fc to the right temporoparietal junction (rTPJ, FDR p = .012) and posterior cingulate cortex (PCC, FDR p = .021) was decreased in the patient group. Importantly, LC-to-rTPJ connectivity was also positively correlated with MMSE scores in MCI patients (p = .017). Furthermore, we found hyperactivation of the left-insula salience network in the MCI patients. Our results and our proposed disease model shed new light on the functional pathogenesis of MCI by pointing to attentional network disturbances, which could aid new therapeutic strategies and provide a marker for diagnosis and prediction of disease progression.
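For readers unfamiliar with how seed-based functional connectivity of this kind is usually computed, the following Python sketch shows the generic procedure (Pearson correlation of an LC seed time series with target regions, Fisher z-transform, group comparison); the function and variable names are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from scipy import stats

def seed_fc(seed_ts, roi_ts):
    # Pearson correlation of one seed time series with each target region's
    # time series (roi_ts has shape n_rois x n_timepoints), Fisher z-transformed
    # so the values can be averaged and compared across participants.
    r = np.array([np.corrcoef(seed_ts, ts)[0, 1] for ts in roi_ts])
    return np.arctanh(r)

def compare_groups(z_patients, z_controls):
    # Two-sample t-test per target region on the z-values
    # (each array has shape n_participants x n_rois).
    return stats.ttest_ind(z_patients, z_controls, axis=0)

Elevated versus decreased connectivity as reported above would correspond to positive versus negative group differences in such z-values, with the resulting p-values corrected for multiple comparisons (the study reports FDR-corrected values).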
We present XMM-Newton Reflection Grating Spectrometer observations of pairs of X-ray emission line profiles from the O star ζ Pup that originate from the same He-like ion. The two profiles in each pair have different shapes and cannot both be consistently fit by models assuming the same wind parameters. We show that the differences in profile shape can be accounted for in a model including the effects of resonance scattering, which affects the resonance line in the pair but not the intercombination line. This implies that resonance scattering is also important in single resonance lines, where its effect is difficult to distinguish from a low effective continuum optical depth in the wind. Thus, resonance scattering may help reconcile X-ray line profile shapes with literature mass-loss rates.
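A brief background note on why scattering affects only one component of a He-like pair (a standard argument, not a quotation from the paper): in the Sobolev approximation the optical depth of a wind line scales with the oscillator strength,

$$\tau_{\rm Sob} \;\propto\; \frac{\pi e^{2}}{m_e c}\, f\, \lambda\, \frac{n_{\rm ion}}{|dv/dr|},$$

so the resonance line, with its large oscillator strength $f$, can become optically thick and be reshaped by resonance scattering, while the intercombination line of the same ion, whose oscillator strength is smaller by orders of magnitude, remains effectively optically thin. This is why a profile-shape difference within one He-like pair can serve as a scattering diagnostic.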
The traditional purpose of algorithms in education is to prepare students for programming. In our effort to introduce the largely missing computing science into Czech general secondary education, we have revisited this purpose. We propose an approach that is in better accordance with the goals of general secondary education in Czechia. The importance of programming is diminishing, while recognition of algorithmic procedures and precise (yet concise) communication of algorithms is gaining importance. This includes expressing algorithms in natural language, which is more useful for most students than programming. We propose criteria to evaluate such descriptions. Finally, an idea of the limitations of algorithms is required (inefficient algorithms, unsolvable problems, the Turing test). We describe these adjusted educational goals and an outline of the resulting course. Our experience with carrying out the proposed intentions is satisfactory, although we did not accomplish all the defined goals.
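To illustrate the contrast the course draws between communicating an algorithm in natural language and programming it, here is an invented classroom-style example in Python (it is not taken from the article, and the evaluation criteria proposed there are not reproduced here):

# A natural-language formulation of the kind such criteria would assess:
#   "Go through the numbers one after another, remembering the largest value
#    seen so far; once every number has been examined, the remembered value
#    is the largest number in the list."
#
# The same procedure written as a program, for comparison:
def largest(numbers):
    best = numbers[0]
    for n in numbers[1:]:
        if n > best:
            best = n
    return best

print(largest([3, 7, 2, 9, 4]))  # prints 9

The natural-language version communicates the procedure precisely without requiring any programming-language syntax, which is the kind of skill the revised course emphasises for students who will not become programmers.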