Background:
Plant phenotypic data shrouds a wealth of information which, when accurately analysed and linked to other data types, brings to light the knowledge about the mechanisms of life. As phenotyping is a field of research comprising manifold, diverse and time-consuming experiments, the findings can be fostered by reusing and combining existing datasets. Their correct interpretation, and thus replicability, comparability and interoperability, is possible provided that the collected observations are equipped with an adequate set of metadata. So far there have been no common standards governing phenotypic data description, which has hampered data exchange and reuse.
Results:
In this paper we propose guidelines for the proper handling of information about plant phenotyping experiments, in terms of both the recommended content of the description and its formatting. We provide a document called “Minimum Information About a Plant Phenotyping Experiment”, which specifies what information about each experiment should be given, and a Phenotyping Configuration for the ISA-Tab format, which makes it possible to organise this information practically within a dataset. We provide examples of ISA-Tab-formatted phenotypic data, and a general description of a few systems where the recommendations have been implemented.
Conclusions:
Acceptance of the rules described in this paper by the plant phenotyping community will help to achieve findable, accessible, interoperable and reusable data.
Unlike for other retroviruses, only a few host cell factors that aid the replication of foamy viruses (FVs) via interaction with viral structural components are known. Using a yeast two-hybrid (Y2H) screen with the prototype FV (PFV) Gag protein as bait, we identified human polo-like kinase 2 (hPLK2), a member of the cell cycle regulatory kinases, as a new interactor of PFV capsids. Further Y2H studies confirmed interaction of PFV Gag with several PLKs of both human and rat origin. A consensus Ser-Thr/Ser-Pro (S-T/S-P) motif in Gag, which is conserved among primate FVs and phosphorylated in PFV virions, was essential for recognition by PLKs. In the case of rat PLK2, functional kinase and polo-box domains were required for interaction with PFV Gag. Fluorescently tagged PFV Gag, through its chromatin tethering function, selectively relocalized ectopically expressed eGFP-tagged PLK proteins to mitotic chromosomes in a Gag STP motif-dependent manner, confirming the specific and dominant nature of the Gag-PLK interaction in mammalian cells. The functional relevance of the Gag-PLK interaction was examined in the context of replication-competent FVs and single-round PFV vectors. Although STP motif-mutated viruses displayed wild-type (wt) particle release, RNA packaging and intra-particle reverse transcription, their replication capacity was decreased 3-fold in single-cycle infections, and up to 20-fold in spreading infections over an extended time period. Strikingly similar defects were observed when cells infected with single-round wt Gag PFV vectors were treated with a pan-PLK inhibitor. Analysis of the entry kinetics of the mutant viruses indicated a post-fusion defect resulting in delayed and reduced integration, which was accompanied by an enhanced preference to integrate into heterochromatin. We conclude that the interaction between PFV Gag and cellular PLK proteins is important for early replication steps of PFV within host cells.
The knowledge of the contemporary in situ stress state is a key issue for safe and sustainable subsurface engineering. However, information on the orientation and magnitudes of the stress state is limited and often not available for the areas of interest. Therefore, 3-D geomechanical-numerical modelling is used to estimate the in situ stress state and the distance of faults from failure for application in subsurface engineering. The main challenge in this approach is to bridge the gap in scale between the widely scattered data used for calibration of the model and the high resolution in the target area required for the application. We present a multi-stage 3-D geomechanical-numerical approach which provides a state-of-the-art model of the stress field for a reservoir-scale area from widely scattered data records. First, we use a large-scale regional model which is calibrated by available stress data and provides the full 3-D stress tensor at discrete points in the entire model volume. The modelled stress state is used subsequently for the calibration of a smaller-scale model located within the large-scale model in an area without any observed stress data records. We exemplify this approach with two stages for the area around Munich in the German Molasse Basin. As an example of application, we estimate the scalar values for slip tendency and fracture potential from the model results as measures for the criticality of fault reactivation in the reservoir-scale model. The modelling results show that variations due to uncertainties in the input data are mainly introduced by the uncertain material properties and missing SHmax magnitude estimates needed for a more reliable model calibration. This leads to the conclusion that at this stage the model's reliability depends only on the amount and quality of available stress information rather than on the modelling technique itself or on local details of the model geometry. Any improvements in modelling and increases in model reliability can only be achieved using more high-quality data for calibration.
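As a rough illustration of the slip tendency measure mentioned above, the sketch below computes ST = τ/σn for a fault plane in a given stress tensor (following Morris et al., 1996); the stress values, fault orientation and compression-positive convention are illustrative assumptions, not outputs of the model described here.

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency ST = tau / sigma_n on a plane with unit normal `normal`,
    for a 3x3 Cauchy stress tensor `stress` (compression positive)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    traction = stress @ n                                  # traction vector acting on the plane
    sigma_n = float(n @ traction)                          # normal stress on the plane
    tau = float(np.linalg.norm(traction - sigma_n * n))    # resolved shear stress
    return tau / sigma_n

# Illustrative Andersonian stress state (MPa) and a 60-degree dipping fault
S = np.diag([40.0, 30.0, 60.0])            # SHmax, Shmin, Sv along x, y, z
normal = [np.sin(np.radians(60)), 0.0, np.cos(np.radians(60))]
print(round(slip_tendency(S, normal), 3))  # ~0.19 for these placeholder values
```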
The all-female Amazon molly (Poecilia formosa) originated from a single hybridization of two bisexual ancestors, the Atlantic molly (Poecilia mexicana) and the sailfin molly (Poecilia latipinna). As a gynogenetic species, the Amazon molly needs to copulate with a heterospecific male, but the genetic information of the sperm donor does not contribute to the next generation, as the sperm only acts as the trigger for the diploid eggs’ embryogenesis. Here, we study the sequence evolution and gene expression of the duplicated genes coding for androgen receptors (ars) and other pathway-related genes, i.e., the estrogen receptors (ers) and the cytochrome P450, family 19, subfamily A, aromatase genes (cyp19as), in the Amazon molly in comparison to its bisexual ancestors. Mollies possess, like most other teleost fish, two copies of the ar, er, and cyp19a genes, i.e., arα/arβ, erα/erβ1, and cyp19a1 (also referred to as cyp19a1a)/cyp19a2 (also referred to as cyp19a1b), respectively. Non-synonymous single nucleotide polymorphisms (SNPs) among the ancestral bisexual species were generally predicted not to alter protein function. Some derived substitutions in P. mexicana and one in P. formosa are predicted to impact protein function. We also describe the gene expression pattern of the ars and pathway-related genes in various tissues (i.e., brain, gill, and ovary) and provide SNP markers for allele-specific expression research. As a general tendency, the levels of gene expression were lowest in gill and highest in ovarian tissues, while expression levels in the brain were intermediate in most cases. Expression levels in P. formosa were conserved where expression did not differ between the two bisexual ancestors. In those cases where gene expression levels significantly differed between the bisexual species, P. formosa expression was always comparable to the higher expression level among the two ancestors. Interestingly, erβ1 was expressed neither in brain nor in gill in the three analyzed molly species, which implies a more important role of erα in the estradiol synthesis pathway in these tissues. Furthermore, our data suggest that interactions of steroid-signaling pathway genes differ across tissues, in particular the interactions of ars and cyp19as.
Venomous snakes often display extensive variation in venom composition both between and within species. However, the mechanisms underlying the distribution of different toxins and venom types among populations and taxa remain insufficiently known. Rattlesnakes (Crotalus, Sistrurus) display extreme inter- and intraspecific variation in venom composition, centered particularly on the presence or absence of presynaptically neurotoxic phospholipases A2 such as Mojave toxin (MTX). Interspecific hybridization has been invoked as a mechanism to explain the distribution of these toxins across rattlesnakes, with the implicit assumption that they are adaptively advantageous. Here, we test the potential of adaptive hybridization as a mechanism for venom evolution by assessing the distribution of genes encoding the acidic and basic subunits of Mojave toxin across a hybrid zone between MTX-positive Crotalus scutulatus and MTX-negative C. viridis in southwestern New Mexico, USA. Analyses of morphology and of mitochondrial and single-copy nuclear genes document extensive admixture within a narrow hybrid zone. The genes encoding the two MTX subunits are strictly linked, and found in most hybrids and backcrossed individuals, but not in C. viridis away from the hybrid zone. Presence of the genes is invariably associated with presence of the corresponding toxin in the venom. We conclude that introgression of highly lethal neurotoxins through hybridization is not necessarily favored by natural selection in rattlesnakes, and that even extensive hybridization may not lead to introgression of these genes into another species.
Metal-containing ionic liquids (ILs) are of interest for a variety of technical applications, e.g., particle synthesis and materials with magnetic or thermochromic properties. In this paper we report the synthesis of, and two structures for, some new tetrabromidocuprates(II) with several “onium” cations, in comparison to the results of electron paramagnetic resonance (EPR) spectroscopic analyses. The sterically demanding cations were used to separate the paramagnetic Cu(II) ions for EPR measurements. The EPR hyperfine structure in the spectra of these new compounds is not resolved, due to the line broadening resulting from magnetic exchange between the still incompletely separated paramagnetic Cu(II) centres. For the majority of compounds, the principal g values (g∥ and g⊥) of the tensors could be determined, and information on the structural changes in the [CuBr4]2- anions could be obtained. The complexes have high potential, e.g., as precursors for the synthesis of copper bromide particles and as catalytically active or paramagnetic ionic liquids.
We present results on ultrafast gas electron diffraction (UGED) experiments with femtosecond resolution using the MeV electron gun at SLAC National Accelerator Laboratory. UGED is a promising method to investigate molecular dynamics in the gas phase because electron pulses can probe the structure with a high spatial resolution. Until recently, however, it was not possible for UGED to reach the relevant timescale for the motion of the nuclei during a molecular reaction. Using MeV electron pulses has allowed us to overcome the main challenges in reaching femtosecond resolution, namely delivering short electron pulses on a gas target, overcoming the effect of velocity mismatch between pump laser pulses and the probe electron pulses, and maintaining a low timing jitter. At electron kinetic energies above 3 MeV, the velocity mismatch between laser and electron pulses becomes negligible. The relativistic electrons are also less susceptible to temporal broadening due to the Coulomb force. One of the challenges of diffraction with relativistic electrons is that the small de Broglie wavelength results in very small diffraction angles. In this paper we describe the new setup and its characterization, including capturing static diffraction patterns of molecules in the gas phase, finding time-zero with sub-picosecond accuracy and first time-resolved diffraction experiments. The new device can achieve a temporal resolution of 100 fs root-mean-square, and sub-angstrom spatial resolution. The collimation of the beam is sufficient to measure the diffraction pattern, and the transverse coherence is on the order of 2 nm. Currently, the temporal resolution is limited both by the pulse duration of the electron pulse on target and by the timing jitter, while the spatial resolution is limited by the average electron beam current and the signal-to-noise ratio of the detection system. We also discuss plans for improving both the temporal resolution and the spatial resolution.
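To make concrete why relativistic electrons give very small diffraction angles, here is a short sketch of the relativistic de Broglie wavelength λ = hc/(pc), with pc = sqrt(Ek² + 2·Ek·me·c²); the 3.7 MeV kinetic energy is an assumed example value, not a figure quoted above.

```python
import math

H_C = 1239.84193e-9      # h*c in eV*m
M_E_C2 = 0.51099895e6    # electron rest energy in eV

def de_broglie_wavelength(kinetic_energy_ev):
    """lambda = h*c / (p*c), with p*c = sqrt(Ek^2 + 2*Ek*me*c^2) for a relativistic electron."""
    pc = math.sqrt(kinetic_energy_ev**2 + 2.0 * kinetic_energy_ev * M_E_C2)
    return H_C / pc

# ~0.3 pm at a few MeV, hence milliradian-scale diffraction angles
print(f"{de_broglie_wavelength(3.7e6) * 1e12:.2f} pm")
```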
Background: The efficiency of multiplex editing in plants by the RNA-guided Cas9 system is limited by how efficiently its components are introduced into the genome and by their activity. The possibility of introducing large fragment deletions with the RNA-guided Cas9 tool provides the potential to study the function of any DNA region of interest in its ‘endogenous’ environment.
Results: Here, an RNA-guided Cas9 system was optimized to enable efficient multiplex editing in Arabidopsis thaliana. We demonstrate the flexibility of our system for the knockout of multiple genes and for generating heritable large-fragment deletions in the genome. As a proof of concept, the function of part of the second intron of the flower development gene AGAMOUS in Arabidopsis was studied by generating a Cas9-free mutant plant line in which part of this intron was removed from the genome. Further analysis revealed that deletion of this intron fragment results in a 40 % decrease of AGAMOUS gene expression without changing the splicing of the gene, which indicates that this regulatory region functions as an activator of AGAMOUS gene expression.
Conclusions: Our modified RNA-guided Cas9 system offers a versatile tool for the functional dissection of coding and non-coding DNA sequences in plants.
The eukaryotic-specific Isd11 is a complex-orphan protein with ability to bind the prokaryotic IscS
(2016)
The eukaryotic protein Isd11 is a chaperone that binds and stabilizes the central component of the essential metabolic pathway responsible for the formation of iron-sulfur clusters in mitochondria, the desulfurase Nfs1. Little is known about the exact role of Isd11. Here, we show that human Isd11 (ISD11) is a helical protein which exists in solution as an equilibrium between monomeric, dimeric and tetrameric species in the absence of human Nfs1 (NFS1). We also show that, surprisingly, recombinant ISD11 expressed in E. coli co-purifies with the bacterial orthologue of NFS1, IscS. Binding is weak but specific, suggesting that, despite the absence of Isd11 sequences in bacteria, there is enough conservation between the two desulfurases to retain a similar mode of interaction. This knowledge may inform us on the conservation of the mode of binding of Isd11 to the desulfurase. We used evolutionary evidence to suggest Isd11 residues involved in the interaction.
In this study, we investigated the scale sizes of equatorial plasma irregularities (EPIs) using measurements from the Swarm satellites during their early mission and final constellation phases. We found that for longitudinal separations between Swarm satellites larger than 0.4° (about 44 km at the equator), no significant correlation was found anymore. This result suggests that EPI structures include plasma density scale sizes of less than 44 km in the zonal direction. During the earlier Swarm mission phase, clearly better EPI correlations are obtained in the northern hemisphere, implying more fragmented irregularities in the southern hemisphere, where the ambient magnetic field is low. The previously reported inverted-C shell structure of EPIs is generally confirmed by the Swarm observations in the northern hemisphere, but with various tilt angles. From the Swarm spacecraft with zonal separations of about 150 km, we conclude that larger zonal scale sizes of irregularities exist during the early evening hours (around 1900 LT).
Recently, due to increasing demands on functionality and flexibility, previously isolated systems have become interconnected to form powerful adaptive Systems of Systems (SoS) solutions with an overall robust, flexible and emergent behavior. An adaptive SoS comprises a variety of different system types, ranging from small embedded to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy and optimizes its behavior to reach its goals. On the other hand, systems must cooperate with each other to enrich the overall functionality and jointly perform on the SoS level, reaching global goals that cannot be satisfied by one system alone. Due to the difficulties of local and global behavior optimization, conflicts may arise between systems that have to be resolved by the adaptive SoS.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by considering the adaptation capabilities in form of feedback loops as first class entities. Moreover, this thesis adopts the Models@runtime approach to integrate the available knowledge in the systems as runtime models into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS to reason about individual system functionality and how it emerges via collaborations to an overall joint SoS behavior. Therefore, the modeling language approach enables the specification of local adaptive system behavior, the integration of knowledge in form of runtime models and the joint interactions via collaboration to place the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules to investigate the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and pinpoint possible system threats. Moreover, a simulation framework is presented, which allows the direct execution of the modeled SoS architecture. The analysis rules and the simulation framework can therefore be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling up-to-date research scenarios from different domains, which demonstrates that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
Gut bacteria exert beneficial and harmful effects in metabolic diseases as deduced from the comparison of germfree and conventional mice and from fecal transplantation studies. Compositional microbial changes in diseased subjects have been linked to adiposity, type 2 diabetes and dyslipidemia. Promotion of an increased expression of intestinal nutrient transporters or a modified lipid and bile acid metabolism by the intestinal microbiota could result in an increased nutrient absorption by the host. The degradation of dietary fiber and the subsequent fermentation of monosaccharides to short-chain fatty acids (SCFA) is one of the most controversially discussed mechanisms of how gut bacteria impact host physiology. Fibers reduce the energy density of the diet, and the resulting SCFA promote intestinal gluconeogenesis, incretin formation and subsequently satiety. However, SCFA also deliver energy to the host and support liponeogenesis. Thus far, there is little knowledge on bacterial species that promote or prevent metabolic disease. Clostridium ramosum and Enterobacter cloacae were demonstrated to promote obesity in gnotobiotic mouse models, whereas bifidobacteria and Akkermansia muciniphila were associated with favorable phenotypes in conventional mice, especially when oligofructose was fed. How diet modulates the gut microbiota towards a beneficial or harmful composition needs further research. Gnotobiotic animals are a valuable tool to elucidate mechanisms underlying diet-host-microbe interactions.
Proteins are amphiphilic and adsorb at liquid interfaces. Therefore, they can be efficient stabilizers of foams and emulsions. β-lactoglobulin (BLG) is one of the most widely studied proteins due to its major industrial applications, in particular in food technology.
In the present work, the influence of different bulk concentrations, solution pH and ionic strength on the dynamic and equilibrium pressures of BLG adsorbed layers at the solution/tetradecane (W/TD) interface has been investigated. The dynamic interfacial pressure (Π) and interfacial dilational elastic modulus (E’) of BLG solutions were measured with a Profile Analysis Tensiometer PAT-1 (SINTERFACE Technologies, Germany) for various concentrations at three different pH values of 3, 5 and 7 at a fixed ionic strength of 10 mM, and for a selected fixed concentration at three different ionic strengths of 1 mM, 10 mM and 100 mM. A quantitative data analysis requires additional consideration of depletion due to BLG adsorption at the interface at low protein bulk concentrations. For this reason, experiments are more efficient when oil drops are studied in the aqueous protein solutions rather than solution drops formed in oil. On the basis of the obtained experimental data, the concentration dependencies and the effect of solution pH on the protein surface activity were qualitatively analysed. In the presence of 10 mM buffer, we observed that the adsorbed amount generally increases with increasing BLG bulk concentration for all three pH values. The adsorption kinetics at pH 5 result in the highest Π values at any time of adsorption, while BLG is less surface active at pH 3.
The experimental data are not in good agreement with the classical diffusion-controlled model, owing to the conformational changes which occur when the protein molecules come into contact with the hydrophobic oil phase in order to adapt to the interfacial environment; therefore, a new theoretical model is proposed here. The adsorption kinetics data were analysed with this newly proposed model, which is the classical diffusion model modified by assuming an additional change in the surface activity of BLG molecules when adsorbing at the interface. This effect can be expressed through the adsorption activity constant in the corresponding equation of state. The dilational visco-elasticity of the BLG adsorbed interfacial layers is determined from dynamic interfacial tensions measured during sinusoidal drop area variations. The interfacial tension responses to these harmonic drop oscillations are interpreted with the same thermodynamic model that is used for the corresponding adsorption isotherm.
At a selected BLG concentration of 2×10⁻⁶ mol/l, the influence of the ionic strength on the interfacial pressure was studied using buffer concentrations of 1, 10 and 100 mM. The interfacial pressure is only weakly affected at pH 5, whereas increasing the buffer concentration has a strong impact at pH 3 and 7. In conclusion, the structure formation of the BLG adsorbed layer in the early stage of adsorption at the W/TD interface is similar to that at the solution/air (W/A) surface. However, the equation of state at the W/TD interface provides an adsorption activity constant which is almost two orders of magnitude higher than that for the solution/air surface.
At the end of this work, a new experimental tool called the Drop and Bubble Micro Manipulator DBMM (SINTERFACE Technologies, Germany) is introduced to study the stability of protein-covered bubbles against coalescence. Among the available protocols, the lifetime between the moment of contact and the coalescence of two contacting bubbles is determined for different BLG concentrations. The adsorbed amount of BLG is determined as a function of time and concentration and correlates with the observed coalescence behaviour of the contacting bubbles.
Background:
Deception can distort psychological tests on socially sensitive topics. Understanding the cerebral processes that are involved in such faking can be useful in the detection and prevention of deception. Previous research shows that faking a brief implicit association test (BIAT) evokes a characteristic ERP response. It is not yet known whether temporarily available self-control resources moderate this response. We randomly assigned 22 participants (15 females, 24.23 ± 2.91 years old) to a counterbalanced repeated-measurements design. Participants first completed a Brief-IAT (BIAT) on doping attitudes as a baseline measure and were then instructed to fake a negative doping attitude both when self-control resources were depleted and when they were not. Cerebral activity during BIAT performance was assessed using high-density EEG.
Results:
Compared to the baseline BIAT, event-related potentials showed a first interaction at the parietal P1, while significant post hoc differences were found only at the later occurring late positive potential. Here, significantly decreased amplitudes were recorded for ‘normal’ faking, but not in the depletion condition. In source space, enhanced activity was found for ‘normal’ faking in the bilateral temporoparietal junction. Behaviorally, participants faked the BIAT successfully in both conditions.
Conclusions:
Results indicate that temporarily available self-control resources do not affect overt faking success on a BIAT. However, differences were found on an electrophysiological level. This indicates that while self-control resources play a negligible role in deliberate test faking at a phenotypical level, the underlying cerebral processes are markedly different.
The UN sustainable development goals contain environmental, economic, and social objectives. They may only be reached, or at least will be easier to reach, if there are synergies to be reaped between these objectives rather than a trade-off that requires balancing them. This paper discusses how the structures of economic models typically used in policy analysis influence whether win-win strategies for the environment and the economy can be conceptualised and analysed. With a focus on climate policy modelling, the paper points out how, by construction, commonly used model structures find mitigation costs rather than benefits. The paper then describes mechanisms that, when added to these model structures, can bring win-win options into a model's solution horizon, and which provide a spectrum of alternative modelling approaches that allow for the identification of such options.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task. Data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs is not different from that of other structures, their semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.
The interaction of water with α-alumina (i.e., α-Al2O3) surfaces is important in a variety of applications and a useful model for the interaction of water with environmentally abundant aluminosilicate phases. Despite its significance, studies of water interaction with α-Al2O3 surfaces other than the (0001) are extremely limited. Here we characterize the interaction of water (D2O) with a well-defined α-Al2O3(11̄02) surface in UHV both experimentally, using temperature programmed desorption and surface-specific vibrational spectroscopy, and theoretically, using periodic-slab density functional theory calculations. This combined approach makes it possible to demonstrate that water adsorption occurs only at a single well-defined surface site (the so-called 1–4 configuration) and that at this site the barrier between the molecularly and dissociatively adsorbed forms is very low: 0.06 eV. A subset of OD stretch vibrations are parallel to this dissociation coordinate, and thus would be expected to be shifted to low frequencies relative to an uncoupled harmonic oscillator. To quantify this effect we solve the vibrational Schrödinger equation along the dissociation coordinate and find fundamental frequencies red-shifted by more than 1500 cm−1. Within the context of this model, at moderate temperatures, we further find that some fraction of surface deuterons are likely delocalized: dissociatively and molecularly adsorbed states are no longer distinguishable.
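A minimal sketch of the kind of calculation described above, solving the one-dimensional vibrational Schrödinger equation along a stretch coordinate by finite-difference diagonalization; the Morse potential and its parameters are generic OD-stretch-like placeholders, not the DFT potential from this study.

```python
import numpy as np

HARTREE_TO_CM = 219474.63
mu = 1.789 * 1822.888            # reduced mass of O-D in electron masses (atomic units)
De, a, re = 0.18, 1.17, 1.83     # Morse depth (Ha), width (1/bohr), equilibrium distance (bohr)

r = np.linspace(1.0, 4.5, 1200)
dx = r[1] - r[0]
V = De * (1.0 - np.exp(-a * (r - re)))**2

# H = -(1/2 mu) d^2/dx^2 + V(x), discretized with central finite differences
kin = -1.0 / (2.0 * mu * dx**2)
H = (np.diag(V - 2.0 * kin)
     + np.diag(np.full(r.size - 1, kin), 1)
     + np.diag(np.full(r.size - 1, kin), -1))
E = np.linalg.eigvalsh(H)

# ~2600 cm^-1 for these placeholder parameters; the much flatter potential along the
# real dissociation coordinate red-shifts the 0->1 transition strongly, as described above.
print(f"fundamental 0->1: {(E[1] - E[0]) * HARTREE_TO_CM:.0f} cm^-1")
```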
Background
Doping presents a potential health risk for young athletes. Prevention programs are intended to prevent doping by educating athletes about banned substances. However, such programs have their limitations in practice. This led Germany to introduce the National Doping Prevention Plan (NDPP), in hopes of ameliorating the situation among young elite athletes. Two studies examined 1) the degree to which the NDPP led to improved prevention efforts in elite sport schools, and 2) the extent to which newly developed prevention activities of the national anti-doping agency (NADA) based on the NDPP have improved knowledge among young athletes within elite sports schools.
Methods
The first objective was investigated in a longitudinal study (Study I: t0 = baseline, t1 = follow-up 4 years after NDPP introduction) with N = 22 teachers engaged in doping prevention in elite sports schools. The second objective was evaluated in a cross-sectional comparison study (Study II) in N = 213 elite sports school students (54.5 % male, 45.5 % female, age M = 16.7 ± 1.3 years). All students had received the improved NDPP measure in school; one student group had additionally received NADA anti-doping activities and a control group had not. Descriptive statistics were calculated, followed by McNemar tests, Wilcoxon tests and Analysis of Covariance (ANCOVA).
Results
Results indicate that 4 years after the introduction of the NDPP there have been limited structural changes with regard to the frequency, type, and scope of doping prevention in elite sport schools. On the other hand, in study II, elite sport school students who received further NADA anti-doping activities performed better on an anti-doping knowledge test than students who did not take part (F(1, 207) = 33.99, p <0.001), although this difference was small.
Conclusion
The integration of doping prevention in elite sport schools as part of the NDPP was only partially successful. The results of the evaluation indicate that the introduction of the NDPP has contributed more to a change in the content of doping prevention activities than to a structural transformation of anti-doping education in elite sport schools. Moreover, while students who received additional education in the form of the NDPP “booster sessions” had significantly more knowledge about doping than students who did not receive such education, this difference was small and may not translate into actual behavior.
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of the magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate rather than near hand proximity. In spatial gap discrimination, a direction effect without a hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component within the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require ≪1 s to ~1 min on a personal laptop computer. These characteristics - multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute time - make gFlex an effective tool for flexural isostatic modeling across the geosciences.
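For orientation, a minimal sketch of the 1-D analytical end-member that tools such as gFlex generalize: deflection of a uniform, infinite elastic plate under a line load (the standard Turcotte and Schubert result). This does not use gFlex's own API, and all parameter values are illustrative assumptions.

```python
import numpy as np

E, nu, Te = 65e9, 0.25, 35e3                    # Young's modulus (Pa), Poisson's ratio, elastic thickness (m)
rho_m, rho_fill, g = 3300.0, 0.0, 9.81          # mantle and infill densities (kg/m^3), gravity (m/s^2)
D = E * Te**3 / (12.0 * (1.0 - nu**2))          # flexural rigidity
alpha = (4.0 * D / ((rho_m - rho_fill) * g))**0.25   # flexural parameter (m)

def deflection_line_load(x, V0):
    """Deflection w(x) of an infinite plate for a vertical line load V0 (N/m) at x = 0."""
    xa = np.abs(x) / alpha
    return V0 * alpha**3 / (8.0 * D) * np.exp(-xa) * (np.cos(xa) + np.sin(xa))

x = np.linspace(-300e3, 300e3, 601)
w = deflection_line_load(x, 1e12)               # illustrative load of 1e12 N per metre
print(f"alpha = {alpha / 1e3:.1f} km, maximum deflection = {w.max():.1f} m")
```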
In many statistical applications, the aim is to model the relationship between covariates and some outcome. The choice of the appropriate model depends on the outcome and the research objectives; examples are linear models for continuous outcomes, logistic models for binary outcomes and the Cox model for time-to-event data. In epidemiological, medical, biological, societal and economic studies, logistic regression is widely used to describe the relationship between a response variable as binary outcome and explanatory variables as a set of covariates. However, epidemiologic cohort studies are quite expensive regarding data management, since following up a large number of individuals takes a long time. Therefore, the case-cohort design is applied to reduce the cost and time of data collection. Case-cohort sampling collects a small random sample from the entire cohort, which is called the subcohort. The advantage of this design is that the covariate and follow-up data are recorded only on the subcohort and all cases (all members of the cohort who develop the event of interest during the follow-up process).
In this thesis, we investigate the estimation in the logistic model for case-cohort design. First, a model with a binary response and a binary covariate is considered. The maximum likelihood estimator (MLE) is described and its asymptotic properties are established. An estimator for the asymptotic variance of the estimator based on the maximum likelihood approach is proposed; this estimator differs slightly from the estimator introduced by Prentice (1986). Simulation results for several proportions of the subcohort show that the proposed estimator gives lower empirical bias and empirical variance than Prentice's estimator.
Then the MLE in the logistic regression with a discrete covariate under the case-cohort design is studied. Here the approach of the binary covariate model is extended. By proving asymptotic normality of the estimators, standard errors for the estimators can be derived. The simulation study demonstrates the estimation procedure of the logistic regression model with a one-dimensional discrete covariate. Simulation results for several proportions of the subcohort and different choices of the underlying parameters indicate that the estimator developed here performs reasonably well. Moreover, a comparison between theoretical values and simulation results for the asymptotic variance of the estimator is presented.
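A small illustration of the case-cohort sampling scheme described above, in which covariates are collected only for a random subcohort plus all cases; this sketches the design only, not the estimator developed in the thesis, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.binomial(1, 0.3, size=n)                 # binary covariate
p = 1.0 / (1.0 + np.exp(-(-3.0 + 1.0 * x)))      # logistic model with beta0 = -3, beta1 = 1
case = rng.binomial(1, p).astype(bool)           # event indicator

subcohort = rng.random(n) < 0.10                 # 10% random subcohort
observed = subcohort | case                      # covariate data collected only here
print(f"cohort {n}, cases {case.sum()}, subcohort {subcohort.sum()}, records collected {observed.sum()}")
```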
Clearly, logistic regression is sufficient when the binary outcome is available for all subjects and refers to a fixed time interval. Nevertheless, in practice, the observations in clinical trials are frequently collected over different time periods, and subjects may drop out or relapse from other causes during follow-up. Hence, logistic regression is not appropriate for incomplete follow-up data; for example, when an individual drops out of the study before the end of data collection or when the event of interest has not occurred for an individual by the end of the study. Such observations are called censored observations. Survival analysis is necessary to handle these problems; moreover, the time to the occurrence of the event of interest is taken into account. The Cox model has been widely used in survival analysis and can effectively handle censored data. Cox (1972) proposed the model, which focuses on the hazard function. The Cox model is assumed to be
λ(t|x) = λ0(t) exp(β^Tx)
where λ0(t) is an unspecified baseline hazard at time t, x is the vector of covariates, and β is a p-dimensional vector of coefficients.
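For orientation, a sketch of the negative log partial likelihood that the maximum partial likelihood estimator (MPLE) discussed below maximizes, in its standard textbook form without ties; the toy data are invented for illustration.

```python
import numpy as np

def neg_log_partial_likelihood(beta, times, events, X):
    """times: (n,) event/censoring times, events: (n,) 1 = event, 0 = censored,
    X: (n, p) covariates, beta: (p,) coefficients."""
    eta = X @ np.asarray(beta, dtype=float)
    ll = 0.0
    for i in np.flatnonzero(events):
        at_risk = times >= times[i]                       # risk set R(t_i)
        ll += eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return -ll

times  = np.array([2.0, 3.0, 4.0, 5.0, 7.0])
events = np.array([1, 0, 1, 1, 0])
X      = np.array([[0.0], [1.0], [1.0], [0.0], [1.0]])
print(neg_log_partial_likelihood([0.5], times, events, X))
```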
In this thesis, the Cox model is considered from the viewpoint of experimental design. The estimability of the parameter β0 in the Cox model, where β0 denotes the true value of β, and the choice of optimal covariates are investigated. We give new representations of the observed information matrix In(β) and extend results for the Cox model of Andersen and Gill (1982). In this way, conditions for the estimability of β0 are formulated. Under some regularity conditions, Σ is the inverse of the asymptotic variance matrix of the MPLE of β0 in the Cox model, and some properties of this asymptotic variance matrix are highlighted. Based on the results on asymptotic estimability, the calculation of locally optimal covariates is considered and illustrated in examples. In a sensitivity analysis, the efficiency of given covariates is calculated. The efficiencies are then determined for neighborhoods of the exponential models. It turns out that for fixed parameters β0, the efficiencies do not change very much for different baseline hazard functions. Some proposals for applicable optimal covariates and a calculation procedure for finding optimal covariates are discussed.
Furthermore, the extension of the Cox model in which time-dependent coefficients are allowed is investigated. In this situation, the maximum local partial likelihood estimator for estimating the coefficient function β(·) is described. Based on this estimator, we formulate a new test procedure for testing whether a one-dimensional coefficient function β(·) has a prespecified parametric form, say β(·; ϑ). The score function derived from the local constant partial likelihood function at d distinct grid points is considered. It is shown that the distribution of the properly standardized quadratic form of this d-dimensional vector under the null hypothesis tends to a chi-squared distribution. Moreover, the limit statement remains true when the unknown ϑ0 is replaced by the MPLE in the hypothetical model, and an asymptotic α-test is given by the quantiles or p-values of the limiting chi-squared distribution. Finally, we propose a bootstrap version of this test. The bootstrap test is only defined for the special case of testing whether the coefficient function is constant. A simulation study illustrates the behavior of the bootstrap test under the null hypothesis and a special alternative. It gives quite good results for the chosen underlying model.
References
P. K. Andersen and R. D. Gill. Cox's regression model for counting processes: a large sample study. Ann. Statist., 10(4):1100–1120, 1982.
D. R. Cox. Regression models and life-tables. J. Roy. Statist. Soc. Ser. B, 34:187–220, 1972.
R. L. Prentice. A case-cohort design for epidemiologic cohort studies and disease prevention trials. Biometrika, 73(1):1–11, 1986.
The synthesis and photophysical properties of two new FRET pairs based on a coumarin as donor and a DBD dye as acceptor are described. The introduction of a bromine atom dramatically increases the two-photon excitation (2PE) cross section, providing a 2PE-FRET system which is also suitable for 2PE-FLIM.
Herein we present an efficient synthesis of a biomimetic probe with modular construction that can be specifically bound by the mannose binding FimH protein – a surface adhesion protein of E. coli bacteria. The synthesis combines the new and interesting DBD dye with the carbohydrate ligand mannose via a Click reaction. We demonstrate the binding to E. coli bacteria over a large concentration range and also present some special characteristics of those molecules that are of particular interest for the application as a biosensor. In particular, the mix-and-measure ability and the very good photo-stability should be highlighted here.
The coil-to-globule transition of poly(N-isopropylacrylamide) (PNIPAM) microgel particles suspended in water has been investigated in situ as a function of heating and cooling rate with four optical process analytical technologies (PAT), sensitive to structural changes of the polymer. Photon Density Wave (PDW) spectroscopy, Focused Beam Reflectance Measurements (FBRM), turbidity measurements, and Particle Vision Microscope (PVM) measurements are found to be powerful tools for the monitoring of the temperature-dependent transition of such thermo-responsive polymers. These in-line technologies allow for monitoring of either the reduced scattering coefficient and the absorption coefficient, the chord length distribution, the reflected intensities, or the relative backscatter index via in-process imaging, respectively. Varying heating and cooling rates result in rate-dependent lower critical solution temperatures (LCST), with different impact of cooling and heating. Particularly, the data obtained by PDW spectroscopy can be used to estimate the thermodynamic transition temperature of PNIPAM for infinitesimal heating or cooling rates. In addition, an inverse hysteresis and a reversible building of micrometer-sized agglomerates are observed for the PNIPAM transition process.
The advantages of remote sensing using Unmanned Aerial Vehicles (UAVs) are a high spatial resolution of images, temporal flexibility and narrow-band spectral data from different wavelength domains. This enables the detection of spatio-temporal dynamics of environmental variables, like plant-related carbon dynamics in agricultural landscapes. In this paper, we quantify spatial patterns of fresh phytomass and related carbon (C) export using imagery captured by a 12-band multispectral camera mounted on the fixed-wing UAV Carolo P360. The study was performed in 2014 at the experimental area CarboZALF-D in NE Germany. From radiometrically corrected and calibrated images of lucerne (Medicago sativa), the performance of four commonly used vegetation indices (VIs) was tested using band combinations of six near-infrared bands. The highest correlation between ground-based measurements of fresh phytomass of lucerne and the VIs was obtained for the Enhanced Vegetation Index (EVI) using near-infrared band b(899). The resulting map was transformed into dry phytomass and finally upscaled to total C export by harvest. The observed spatial variability at field and plot scale could be attributed in part to small-scale soil heterogeneity.
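As a pointer to how such an index is computed, a sketch of the widely used EVI formula applied to reflectance values; the coefficients are the common defaults and the band values are invented, not the study's calibration for band b(899).

```python
def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """EVI = G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L), reflectances in [0, 1]."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Dense green vegetation typically yields EVI around 0.6-0.9
print(round(evi(nir=0.45, red=0.06, blue=0.03), 2))
```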
When realizing a programming language as a VM, implementing behavior as part of the VM, as primitives, usually results in reduced execution times. But supporting and developing primitive functions requires more effort than maintaining and using code in the hosted language, since debugging is harder and the turn-around times for VM parts are higher. Furthermore, source artifacts of primitive functions are seldom reused in new implementations of the same language. And if they are reused, the existing API usually is emulated, reducing the performance gains. Because of recent results in tracing dynamic compilation, the trade-off between performance and ease of implementation, reuse, and changeability might now be decided differently.
In this work, we investigate the trade-offs when creating primitives, and in particular how large a difference remains between primitive and hosted function run times in VMs with a tracing just-in-time compiler. To that end, we implemented the algorithmic primitive BitBlt three times for RSqueak/VM. RSqueak/VM is a Smalltalk VM utilizing the PyPy RPython toolchain. We compare primitive implementations in C, RPython, and Smalltalk, showing that due to the tracing just-in-time compiler, the performance gap has narrowed to about one order of magnitude.
Loss to follow-up in a randomized controlled trial study for pediatric weight management (EPOC)
(2016)
Background
Attrition is a serious problem in intervention studies. The current study analyzed the attrition rate during follow-up in a randomized controlled pediatric weight management program (EPOC study) within a tertiary care setting.
Methods
Five hundred twenty-three parents and their 7–13-year-old children with obesity participated in the randomized controlled intervention trial. Follow-up data were assessed 6 and 12 months after the end of treatment. Attrition was defined as providing no objective weight data. Demographic and psychological baseline characteristics were used to predict attrition at 6- and 12-month follow-up using multivariate logistic regression analyses.
Results
Objective weight data were available for 49.6 (67.0) % of the children 6 (12) months after the end of treatment. Completers and non-completers at the 6- and 12-month follow-up differed in the amount of weight loss during their inpatient stay, their initial BMI-SDS, educational level of the parents, and child’s quality of life and well-being. Additionally, completers supported their child more than non-completers, and at the 12-month follow-up, families with a more structured eating environment were less likely to drop out. On a multivariate level, only educational background and structure of the eating environment remained significant.
Conclusions
The minor differences between the completers and the non-completers suggest that our retention strategies were successful. Further research should focus on prevention of attrition in families with a lower educational background.
Molecular paleoclimate reconstructions over the last 9 ka from a peat sequence in South China
(2016)
To achieve a better understanding of Holocene climate change in the monsoon regions of China, we investigated the molecular distributions and the carbon and hydrogen isotope compositions (δ13C and δD values) of long-chain n-alkanes in a peat core from the Shiwangutian (SWGT) peatland, south China, over the last 9 ka. By comparison with other climate records, we found that the δ13C values of the long-chain n-alkanes can be a proxy for humidity, while the δD values of the long-chain n-alkanes primarily recorded the moisture source δD signal during 9-1.8 ka BP and responded to the dry climate during 1.8-0.3 ka BP. Together with the average chain length (ACL) and carbon preference index (CPI) data, the climate evolution over the last 9 ka in the SWGT peatland can be divided into three stages. During the first stage (9-5 ka BP), the δ13C values were depleted and CPI and Paq values were low, while ACL values were high. They reveal a period of warm and wet climate, which is regarded as the Holocene optimum. The second stage (5-1.8 ka BP) witnessed a shift to a relatively cool and dry climate, as indicated by the more positive δ13C values and lower ACL values. During the third stage (1.8-0.3 ka BP), the δ13C, δD, CPI and Paq values showed a marked increase and ACL values varied greatly, implying an abrupt change to cold and dry conditions. This climate pattern corresponds to the broad decline in Asian monsoon intensity through the latter part of the Holocene. Our results do not support a later Holocene optimum in south China as suggested by previous studies.
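For readers unfamiliar with the n-alkane proxies used above, here is a sketch of two of them, ACL and Paq, in commonly used forms; exact chain-length ranges vary between studies, and the abundances below are invented for illustration.

```python
import numpy as np

def acl(conc):
    """Average chain length over odd C25-C33 homologues: sum(n * Cn) / sum(Cn)."""
    n = np.arange(25, 34, 2)
    c = np.array([conc[i] for i in n], dtype=float)
    return float((n * c).sum() / c.sum())

def paq(conc):
    """Paq = (C23 + C25) / (C23 + C25 + C29 + C31), aquatic vs. terrestrial plant input."""
    return (conc[23] + conc[25]) / (conc[23] + conc[25] + conc[29] + conc[31])

conc = {23: 1.0, 25: 2.0, 27: 4.0, 29: 6.0, 31: 5.0, 33: 2.0}  # made-up abundances
print(round(acl(conc), 2), round(paq(conc), 2))
```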
Since 1998, elite athletes' sport injuries have been monitored in single sport events, which led to the development of the first comprehensive injury surveillance system for the multi-sport Olympic Games in 2008. However, injuries and illnesses occurring in training phases have not been systematically studied because of their multi-faceted, potentially interacting risk-related factors. The present thesis aims to address the feasibility of establishing a validated measure of injury/illness, training environment and psychosocial risk factors by creating an evaluation tool, the Risk of Injury Questionnaire (Risk-IQ), for elite athletes, based on the preparticipation evaluation (PPE) and periodic health examination (PHE) content recommended in the 2009 IOC consensus statement.
A total of 335 top-level athletes and 88 medical care providers (MCPs) from Germany and Taiwan participated in two "cross-sectional plus longitudinal" surveys, the Risk-IQ and the MCPQ, respectively. The Risk-IQ asked athletes four categories of injury/illness-related risk factor questions, while the MCPQ asked the MCP cohorts injury risk and psychology-related questions. Answers were quantified scale-wise/subscale-wise before being analyzed together with other factors/scales. In addition, adapted variables such as sport format were introduced for different analysis tasks.
Validated by two-way translation and test-retest reliabilities, the Risk-IQ proved to be of a good standard, which was further confirmed by the analyzed results from the official surveys in both Germany and Taiwan. The results of the Risk-IQ revealed that elite athletes' accumulated total injuries were, in general, multi-factor dependent; influencing factors included, but were not limited to, background experience, medical history, PHE and PPE medical resources as well as stress from life events. Injuries of different body parts were sport-format and location specific. Additionally, the medical support for PPE and PHE differed significantly between Germany and Taiwan.
The results of the present thesis confirmed that it is feasible to construct a comprehensive evaluation instrument for risk factor analysis of injuries/illnesses occurring in heterogeneous elite athlete cohorts during their non-competition periods. On average, and with many moderators involved, German elite athletes had superior medical care support yet suffered more severe injuries than their Taiwanese counterparts. Opinions on injury-related psychological issues differed among the various MCP groups, irrespective of nationality. In general, influencing factors and interactions among relevant factors existed in both studies, which implies that further investigation with multiple regression analysis is needed for better understanding.
We tested the influence of two light intensities [40 and 300 μmol PAR/(m² s)] on the fatty acid composition of three distinct lipid classes in four freshwater phytoplankton species. We chose species of different taxonomic classes in order to detect potentially similar reaction characteristics that might also be present in natural phytoplankton communities. From samples of the bacillariophyte Asterionella formosa, the chrysophyte Chromulina sp., the cryptophyte Cryptomonas ovata and the zygnematophyte Cosmarium botrytis we first separated glycolipids (monogalactosyldiacylglycerol, digalactosyldiacylglycerol, and sulfoquinovosyldiacylglycerol), phospholipids (phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, and phosphatidylserine) as well as non-polar lipids (triacylglycerols), before analyzing the fatty acid composition of each lipid class. High variation in fatty acid composition existed among the different species. Individual fatty acid compositions differed in their reaction to changing light intensities in the four species. Although no generalizations could be made for species across taxonomic classes, individual species showed clear but small responses in their ecologically relevant omega-3 and omega-6 polyunsaturated fatty acids (PUFA), in terms of proportions and of per tissue carbon quotas. Knowledge on how lipids like fatty acids change with environmental or culture conditions is of great interest in ecological food web studies, aquaculture, and biotechnology, since algal lipids are the most important sources of omega-3 long-chain PUFA for aquatic and terrestrial consumers, including humans.
The present work is a case study contributing to the major planning project "Suedlink". It is structured as follows: first, in a theoretical part, the relevant theories of social acceptance (Wüstenhagen et al., 2007), steps of participation (Münnich, 2014), and governance theory (Benz and Dose, 2011) are elaborated. Second, the relevant methods are discussed. Third, in a qualitative analytical part, the information gathered from the expert interviews is analyzed with the use of the aforementioned theories. Fourth, an empirical quantitative analysis of data regarding public acceptance of Suedlink is presented.
In this case study, using qualitative and quantitative methods, two questions are answered: first, which governance aspects were relevant for the priority use of underground cables in the construction of high-voltage direct current transmission lines? For this question, intensive document analysis and several expert interviews were conducted. Second, the central question of the present work addresses whether local and/or individual factors affect public acceptance of Suedlink. Here it is particularly interesting to analyze whether the priority use of underground cables affected people's acceptance of Suedlink. In order to answer both questions, an online survey was conducted among citizen initiatives, district administrators, and individuals in social media between March and July 2016. Thereafter, the data were analyzed with descriptive quantitative methods. The data show that underground cables do not necessarily increase public acceptance (see also Menges and Beyer, 2013). On the contrary, individual and local criteria were relevant for the survey respondents. For example, criteria such as the quality of participation, the distance between home and transmission lines, and the additional financial burden (taxes, higher prices for electricity) were important for the evaluation. In addition, survey respondents who participated in citizen initiatives were more critical of the priority use of underground cables and of Suedlink in general. Likewise, residential homeowners rejected every form of transmission line.
Lake Towuti is a tectonic basin surrounded by ultramafic rocks. Lateritic soils form through weathering and deliver abundant iron (oxy)hydroxides but very little sulfate to the lake and its sediment. To characterize the sediment biogeochemistry, we collected cores at three sites with increasing water depth and decreasing bottom water oxygen concentrations. Microbial cell densities were highest at the shallow site, a feature we attribute to the availability of labile organic matter (OM) and the higher abundance of electron acceptors due to oxic bottom water conditions. At the two other sites, OM degradation and reduction processes below the oxycline led to partial electron acceptor depletion. Genetic information preserved in the sediment as extracellular DNA (eDNA) provided information on aerobic and anaerobic heterotrophs related to Nitrospirae, Chloroflexi, and Thermoplasmatales. These taxa apparently played a significant role in the degradation of sinking OM. However, eDNA concentrations rapidly decreased with core depth. Despite very low sulfate concentrations, sulfate-reducing bacteria were present and viable in sediments at all three sites, as confirmed by measurement of potential sulfate reduction rates. Microbial community fingerprinting supported the presence of taxa related to Deltaproteobacteria and Firmicutes with demonstrated capacity for iron and sulfate reduction. Concomitantly, sequences of Ruminococcaceae, Clostridiales, and Methanomicrobiales indicated potential for fermentative hydrogen and methane production. These first insights into ferruginous sediments showed that microbial populations perform successive metabolisms related to sulfur, iron, and methane. In theory, iron reduction could reoxidize reduced sulfur compounds and desorb OM from iron minerals to allow remineralization to methane. Overall, we found that biogeochemical processes in the sediments can be linked to redox differences in the bottom waters of the three sites, such as oxidant concentrations and the supply of labile OM. At the scale of the lacustrine record, our geomicrobiological study should provide a means to link the extant subsurface biosphere to past environments.
We present a temperature- and fluence-dependent ultrafast X-ray diffraction study of a laser-heated antiferromagnetic dysprosium thin film. The loss of antiferromagnetic order is evidenced by a pronounced lattice contraction. We devise a method to determine the energy flow between the phonon and spin systems from Bragg peak positions calibrated in thermal equilibrium. Re-establishing the magnetic order is much slower than the cooling of the lattice, especially around the Néel temperature. Despite the pronounced magnetostriction, the transfer of energy from the spin system to the phonons in Dy is slow after the spin order is lost.
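For readers less familiar with how a lattice contraction is read off a diffraction signal, the underlying relation is the standard Bragg condition; the following display is a generic illustration with notation of my own, not an equation taken from the study:

```latex
% Generic Bragg condition and its differential form (illustrative notation, not from the paper)
n\lambda = 2d\sin\theta
\qquad\Longrightarrow\qquad
\frac{\Delta d}{d} = -\cot\theta\,\Delta\theta
```

A lattice contraction (Δd < 0) therefore appears as a shift of the Bragg peak towards larger diffraction angles, which is how calibrated peak positions can be translated into strain and, via the thermal-equilibrium calibration, into the energy content of the phonon and spin systems.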
Introduction
Genes involved in body weight regulation that had previously been implicated in genome-wide association studies (GWAS) and in animal models were target-enriched and subjected to massively parallel next-generation sequencing.
Methods
We enriched and re-sequenced continuous genomic regions comprising FTO, MC4R, TMEM18, SDCCAG8, TNKS, MSRA and TBC1D1 in a screening sample of 196 extremely obese children and adolescents with an age- and sex-specific body mass index (BMI) >= 99th percentile and 176 lean adults (BMI <= 15th percentile). Twenty-two variants were confirmed by Sanger sequencing. Genotyping was performed in up to 705 independent obesity trios (extremely obese child and both parents), 243 extremely obese cases and 261 lean adults.
Results and Conclusion
We detected 20 different non-synonymous variants, one frameshift and one nonsense mutation in the 7 continuous genomic regions in study groups of different weight extremes. For the SNP Arg695Cys (rs58983546) in TBC1D1 we detected nominal association with obesity (p(TDT) = 0.03 in 705 trios). Eleven of the variants were rare and thus only detected heterozygously in up to ten individuals of the complete screening sample of 372 individuals. Two of them (in FTO and MSRA) were found in lean individuals, nine in extremely obese individuals. In silico analyses of the 11 variants did not reveal functional implications of the mutations. Concordant with our hypothesis, we detected a rare variant that potentially leads to loss of FTO function in a lean individual. For TBC1D1, contrary to our hypothesis, the loss-of-function variant (Arg443Stop) was found in an obese individual. Functional in vitro studies are warranted.
This article is a response to calls in prior research that we need more longitudinal analyses to better understand the foundations of PSM and related prosocial values. There is wide agreement that it is crucial for theory-building but also for tailoring hiring practices and human resource development programs to sort out whether PSM-related values are stable or developable. The article summarizes existent theoretical expectations, which turn out to be partially conflicting, and tests them against multiple waves of data from the German Socio-Economic Panel Study which covers a time period of sixteen years. It finds that PSM-related values of public employees are stable rather than dynamic but tend to increase with age and decrease with organizational membership. The article also examines cohort effects, which have been neglected in prior work, and finds moderate evidence that there are differences between those born during the Second World War and later generations.
The Gradient Symbolic Computation (GSC) model presented in the keynote article (Goldrick, Putnam & Schwarz) constitutes a significant theoretical development, not only as a model of bilingual code-mixing, but also as a general framework that brings together symbolic grammars and graded representations. The authors are to be commended for successfully integrating a theory of grammatical knowledge with the voluminous research on lexical co-activation in bilinguals. It is, however, unfortunate that a certain conception of bilingualism was inherited from this latter research tradition, one in which the contrast between native and non-native language takes a back seat.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally non-sequence-controlled and thus less complex than proteins. As such, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble and form hierarchical or structural supramolecular assemblies in water, and thus often require rational design. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG/BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
Foam fractionation of surfactant and protein solutions is a process dedicated to separating surface-active molecules from each other on the basis of their differences in surface activity. The process is based on forming bubbles in a mixed solution, followed by the detachment and rise of the bubbles through a certain volume of this solution, and consequently on the formation of a foam layer on top of the solution column. A systematic analysis of this whole process therefore comprises, first, investigations dedicated to the formation and growth of single bubbles in solutions, which is equivalent to the main principles of the well-known bubble pressure tensiometry. The second stage of the fractionation process includes the detachment of a single bubble from a pore or capillary tip and its rise through the respective aqueous solution. The third and final stage of the process is the formation and stabilization of the foam created by these bubbles; the adsorption layer formed at the growing bubble surface is carried upwards, becomes modified during the bubble's rise, and finally ends up as part of the foam layer.
Bubble pressure tensiometry and bubble profile analysis tensiometry experiments were performed with protein solutions at different bulk concentrations, solution pH and ionic strength in order to describe the process of accumulation of protein and surfactant molecules at the bubble surface. The results obtained from the two complementary methods allow us to understand the mechanism of adsorption, which is mainly governed by the diffusional transport of the adsorbing protein molecules to the bubble surface. This mechanism is the same as generally discussed for surfactant molecules. However, interesting peculiarities have been observed for protein adsorption kinetics at sufficiently short adsorption times. First, at short adsorption times the surface tension remains constant for a while before it decreases, as expected, due to the adsorption of proteins at the surface. This time interval is called the induction time, and it becomes shorter with increasing protein bulk concentration. Moreover, under special conditions the surface tension does not stay constant but even increases over a certain period of time. This so-called negative surface pressure was observed for BCS and BLG and is discussed for the first time in terms of changes in the surface conformation of the adsorbing protein molecules. Usually, a negative surface pressure would correspond to a negative adsorption, which is of course impossible for the studied protein solutions. The phenomenon, which amounts to a few mN/m, is instead explained by simultaneous changes in the molar area required by the adsorbed proteins and in the non-ideality of entropy of the interfacial layer. It is a transient phenomenon and exists only under dynamic conditions.
The experiments dedicated to the local velocity of rising air bubbles in solutions were performed over a broad range of BLG concentration, pH and ionic strength. Additionally, rising bubble experiments were done for surfactant solutions in order to validate the functionality of the instrument. It turns out that the velocity of a rising bubble is much more sensitive to adsorbing molecules than classical dynamic surface tension measurements are. At very low BLG or surfactant concentrations, for example, the measured local velocity profile of an air bubble changes dramatically on time scales of seconds, while dynamic surface tensions do not yet show any measurable changes on this time scale. The solution's pH and ionic strength are important parameters that govern the measured rising velocity in protein solutions. A general theoretical description of rising bubbles in surfactant and protein solutions is not available at present due to the complex situation of the adsorption process at a bubble surface in a liquid flow field with simultaneous Marangoni effects. However, instead of modelling the complete velocity profile, new theoretical work has been started to evaluate the maximum of the profile as a characteristic parameter for a more quantitative description of dynamic adsorption layers at the bubble surface.
The studies with protein-surfactant mixtures demonstrate in an impressive way that the complexes formed by the two compounds have a different surface activity than the original native protein molecules and therefore lead to a completely different retardation behavior of rising bubbles. Changes in the velocity profile can be interpreted qualitatively in terms of an increased or decreased surface activity of the formed protein-surfactant complexes. It was also observed that the pH and ionic strength of a protein solution have strong effects on the surface activity of the protein molecules, which, however, can differ between the rising bubble velocity and the equilibrium adsorption isotherms. These differences are not fully understood yet but give rise to discussions about the structure of the protein adsorption layer under dynamic conditions and in the equilibrium state.
The third main stage of the discussed fractionation process is the formation and characterization of protein foams from BLG solutions at different pH and ionic strength. Of course, a minimum BLG concentration is required to form foams. This minimum protein concentration is again a function of solution pH and ionic strength, i.e. of the surface activity of the protein molecules. Although the hydrophobicity, and hence the surface activity, should be highest at the isoelectric point (at about pH 5 for BLG), the concentration and ionic strength effects on the rising velocity profile as well as on foamability and foam stability do not show a maximum there. This is another remarkable argument for the fact that the interfacial structure and behavior of BLG layers under dynamic conditions and at equilibrium are rather different. These differences are probably caused by the time required for BLG molecules to adopt the respective conformations once they are adsorbed at the surface.
All bubble studies described in this work refer to stages of the foam fractionation process. Experiments with different systems, mainly surfactant and protein solutions, were performed in order to form foams and finally recover a solution representing the foamed material. As foam consists to a large extent of foam lamellae (two adsorption layers with a liquid core), the foamate collected from foaming experiments should be enriched in the stabilizing molecules. To determine the concentration in the foamate, the very sensitive bubble rising velocity profile method was again applied, which works for any type of surface-active material. This also includes technical surfactants or protein isolates for which the exact composition is unknown.
In this contribution, we use first-principles calculations to study the co-adsorption and catalytic behavior of CO and O2 on a single gold atom deposited at defective magnesium oxide surfaces. Using cluster models and point-charge embedding within a density functional theory framework, we simulate the CO oxidation reaction for Au1 on differently charged oxygen vacancies of MgO(001) to rationalize its experimentally observed lack of catalytic activity. Our results show that: (1) co-adsorption is weakly supported at F0 and F2+ defects but not at F1+ sites, (2) electron redistribution from the F0 vacancy via the Au1 cluster to the adsorbed molecular oxygen weakens the O2 bond, as required for a sustainable catalytic cycle, (3) a metastable carbonate intermediate can form on defects of the F0 type, (4) only a small activation barrier exists for the highly favorable dissociation of CO2 from F0, and (5) the moderate adsorption energy of the gold atom on the F0 defect cannot prevent insertion of molecular oxygen into the defect. Due to the lack of protection of the color centers, the surface is invariably repaired by the surrounding oxygen and the catalytic cycle is irreversibly broken in the first oxidation step.
In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, the temporal evolution proceeds through repeated epochs of intensive global activity separated by intervals with a low activity level. This behavior mimics the "up" and "down" states experimentally observed in cortical tissue in the absence of external stimuli. We interpret global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the crucial role, both in the interruption and in the resumption of global activity, is played by the distribution of the membrane recovery variable within the network. We also demonstrate that the behavior of neurons is more strongly influenced by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
The link between cognitive scripts for consensual sexual interactions and attitudes towards sexual coercion was studied in 524 Polish high school students. We proposed that risky sexual scripts, containing risk elements linked to sexual aggression, would be associated with attitudes condoning sexual coercion. Pornography use and religiosity were included as predictors of participants’ risky sexual scripts and attitudes towards sexual coercion. Risky sexual scripts were linked to attitudes condoning sexual coercion. Pornography use was indirectly linked to attitudes condoning sexual coercion via risky sexual scripts. Religiosity showed a positive direct link with attitudes towards sexual coercion, but a negative indirect link through risky sexual scripts. The results are discussed regarding the significance of risky sexual scripts, pornography use, and religiosity in understanding attitudes towards sexual coercion as well as their implications for preventing sexually aggressive behaviour.
Injection of fluids into deep saline aquifers causes a pore pressure increase in the storage formation, and thus displacement of resident brine. Via hydraulically conductive faults, brine may migrate upwards into shallower aquifers and lead to unwanted salinisation of potable groundwater resources. In the present study, we investigated different scenarios for a potential storage site in the Northeast German Basin using a three-dimensional (3-D) regional-scale model that includes four major fault zones. The focus was on assessing the impact of fault length and the effect of a secondary reservoir above the storage formation, as well as model boundary conditions and initial salinity distribution on the potential salinisation of shallow groundwater resources. We employed numerical simulations of brine injection as a representative fluid.
Our simulation results demonstrate that the lateral model boundary settings and the effective fault damage zone volume have the greatest influence on pressure build-up and its development within the reservoir, and thus on the intensity and duration of fluid flow through the faults. Higher vertical pressure gradients for short fault segments or a small effective fault damage zone volume result in the highest salinisation potential, because a larger vertical fault height is affected by fluid displacement. Consequently, whether a salinity gradient exists or whether the saltwater-freshwater interface lies below the fluid displacement depth in the faults has a strong impact on the degree of shallow aquifer salinisation. A small effective fault damage zone volume or low fault permeability further extends the duration of fluid flow, which can persist for several tens to hundreds of years if the reservoir is laterally confined. Laterally open reservoir boundaries, large effective fault damage zone volumes and intermediate reservoirs significantly reduce vertical brine migration and the potential for freshwater salinisation, because the origin depth of the displaced brine is then at most a few tens of metres below the shallow aquifer.
The present study demonstrates that the existence of hydraulically conductive faults is not necessarily an exclusion criterion for potential injection sites, because salinisation of shallower aquifers strongly depends on initial salinity distribution, location of hydraulically conductive faults and their effective damage zone volumes as well as geological boundary conditions.
It has long been agreed by formal and functional researchers (primarily on the basis of English data) that contrastive topic marking (namely, marking a constituent as a contrastive topic via the B-accent/rising intonation contour) requires the co-occurrence of focus marking via the A-accent/falling intonation contour (see Sturgeon 2006, and references therein). However, this consensus has recently been disputed by new findings indicating the occurrence of utterances with only a B-accent, dubbed lone contrastive topic (Büring 2003, Constant 2014). In this paper, I argue, based on data from Vietnamese, that the presence of a lone contrastive topic is only apparent, and that the focus that co-occurs with the seemingly lone contrastive topic is a verum focus.
Widespread flooding in June 2013 caused damage costs of €6 to 8 billion in Germany and awoke many memories of the flood of August 2002, which resulted in total damage of €11.6 billion and hence was the most expensive natural hazard event in Germany up to now. The event of 2002 does, however, also mark a reorientation toward an integrated flood risk management system in Germany. Therefore, the flood of 2013 offered the opportunity to review how the measures that politics, administration, and civil society have implemented since 2002 helped to cope with the flood, and what still needs to be done to achieve effective and more integrated flood risk management. The review highlights considerable improvements on many levels, in particular (1) an increased consideration of flood hazards in spatial planning and urban development, (2) comprehensive property-level mitigation and preparedness measures, (3) more effective flood warnings and improved coordination of disaster response, and (4) a more targeted maintenance of flood defense systems. In 2013, this led to more effective flood management and to a reduction of damage. Nevertheless, important aspects remain unclear and need to be clarified. This particularly holds for balanced and coordinated strategies for reducing and overcoming the impacts of flooding in large catchments, cross-border and interdisciplinary cooperation, the role of the general public in the different phases of flood risk management, as well as a transparent risk transfer system. Recurring flood events reveal that flood risk management is a continuous task. Hence, risk drivers, such as climate change, land-use changes, economic developments, or demographic change, and the resultant risks must be investigated at regular intervals, and risk reduction strategies and processes must be reassessed as well as adapted and implemented in dialogue with all stakeholders.
Complexity in software systems is a major factor driving development and maintenance costs. To master this complexity, software is divided into modules that can be developed and tested separately. In order to support this separation of modules, each module should provide a clean and concise public interface. Therefore, the ability to selectively hide functionality using access control is an important feature in a programming language intended for complex software systems.
Software systems are increasingly distributed, adding not only to their inherent complexity, but also presenting security challenges. The object-capability approach addresses these challenges by defining language properties providing only minimal capabilities to objects. One programming language that is based on the object-capability approach is Newspeak, a dynamic programming language designed for modularity and security. The Newspeak specification describes access control as one of Newspeak’s properties, because it is a requirement for the object-capability approach. However, access control, as defined in the Newspeak specification, is currently not enforced in its implementation.
This work introduces an access control implementation for Newspeak, enabling object-capability security and enhancing modularity. We describe our implementation of access control for Newspeak, for which we adapted the runtime environment, the reflective system, the compiler toolchain, and the virtual machine. Finally, we describe a migration strategy for the existing Newspeak code base, so that our access control implementation can be integrated with minimal effort.
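To illustrate the general idea of selectively hiding functionality behind minimal capabilities, the following sketch is written in Python rather than Newspeak, uses hypothetical names, and is not the access control implementation described in this work:

```python
# Illustrative capability-style hiding (generic Python sketch, not Newspeak code):
# a module hands out only the references (capabilities) a client is meant to use,
# so the hidden parts of the implementation stay unreachable.

class _Ledger:
    """Internal implementation detail; never handed to clients."""
    def __init__(self):
        self._entries = []

    def append(self, amount):
        self._entries.append(amount)

    def total(self):
        return sum(self._entries)


def make_account():
    """Return only the capabilities a client should hold: deposit and balance."""
    ledger = _Ledger()

    def deposit(amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        ledger.append(amount)

    def balance():
        return ledger.total()

    return deposit, balance   # no reference to the ledger itself escapes


deposit, balance = make_account()
deposit(10)
print(balance())   # 10; the ledger object cannot be reached or mutated directly
```

In Python this hiding is only a convention, whereas in an object-capability language such as Newspeak the equivalent restriction is meant to be enforced at the language level, which is the gap the access control implementation described above closes.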
This thesis presents new SAR-based approaches and their application to tectonically active systems and related surface deformation. Two case studies are presented in three publications:
(1) The first case study addresses the coseismic deformation related to the Nura earthquake (5 October 2008, magnitude Mw 6.6) at the eastern termination of the intramontane Alai valley, located between the southern Tien Shan and the northern Pamir. The coseismic surface displacements are analysed using SAR (Synthetic Aperture Radar) data. The results show clear gradients in the vertical and horizontal directions along a complex pattern of surface ruptures and active faults. To integrate and interpret these observations in the context of the regional active tectonics, the SAR data analysis is complemented with seismological data and geological field observations. The main moment release of the Nura earthquake appears to be on the Pamir Frontal thrust, while the main surface displacements and surface rupture occurred in the footwall and along the NE–SW striking Irkeshtam fault. Using InSAR data from ascending and descending satellite tracks together with pixel offset measurements, the Nura earthquake source is modelled as a segmented rupture. One fault segment corresponds to high-angle brittle faulting at the Pamir Frontal thrust, and two further fault segments show moderate-angle and low-friction thrusting at the Irkeshtam fault. The integrated analysis of the coseismic deformation argues for rupture segmentation and strain partitioning associated with the earthquake. It possibly activated an orogenic wedge in the easternmost segment of the Pamir-Alai collision zone. Furthermore, the style of the segmentation may be associated with the presence of Paleogene evaporites.
(2) The second focus is on slope instabilities and consequent landslides in the area of the prominent topographic transition between the Fergana basin and the high-relief Alai range. The Alai range constitutes an active orogenic wedge of the Pamir-Tien Shan collision zone that is described as a progressively northward-propagating fold-and-thrust belt. The interferometric analysis of ALOS/PALSAR radar data covers a period of 4 years (2007-2010) and is based on the Small Baseline Subset (SBAS) time-series technique to assess surface deformation with millimetre accuracy of surface change. A total of 118 interferograms are analysed to observe spatially continuous movements with downslope velocities of up to 71 mm/yr. The obtained rates indicate slow movement of the deep-seated landslides during the observation time. We correlated these movements with precipitation and seismic records. The results suggest that the deformation peaks correlate with rainfall in the 3 preceding months and with one earthquake event. In a next step, to understand the spatial pattern of landslide processes, the tectonic, morphologic and lithologic settings are combined with the patterns of surface deformation. We demonstrate that the lithological and tectonic structural patterns are the main factors controlling landslide occurrence and surface deformation magnitudes. Furthermore, active contractional deformation at the front of the orogenic wedge is the main mechanism sustaining relief. Some of the slower but continuously moving slope instabilities are directly related to tectonically active faults and to unconsolidated young Quaternary syn-orogenic sedimentary sequences. The slow-moving landslides observed with InSAR represent active deep-seated gravitational slope deformation phenomena, observed here for the first time in the Tien Shan mountains. Our approach offers a new combination of InSAR techniques and tectonic analysis to localize and understand enhanced slope instabilities at tectonically active mountain fronts in the Kyrgyz Tien Shan.
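For orientation, both case studies rest on the standard relation between interferometric phase and line-of-sight displacement; the display below is the generic textbook form (ignoring topographic, orbital and atmospheric terms), not a formula quoted from the thesis:

```latex
% Generic repeat-pass InSAR phase-to-displacement relation (illustrative only)
\Delta\phi_{\mathrm{defo}} = \frac{4\pi}{\lambda}\, d_{\mathrm{LOS}}
```

For L-band ALOS/PALSAR data (wavelength of roughly 23.6 cm), one full 2π fringe thus corresponds to about 11.8 cm of line-of-sight motion, which is why millimetre-level landslide rates require time-series techniques such as SBAS that combine many interferograms.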
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
Savannas cover a broad geographical range across continents and are a biome best described by a mix of herbaceous and woody plants. The former create a more or less continuous layer, while the latter should be sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the corresponding feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process propagating tree cover expansion. This is known as the encroachment phenomenon. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. According to this view, an open savanna and a tree-dominated savanna can be classified as alternative states, since they can both occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, the overwhelming focus on competition and disturbance has meant that another important ecological process was neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. In the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process that can be used as a prevention tool for savanna conservation. To perform this work, I developed a mathematical ecohydrological model of ordinary differential equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
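The dissertation text above names only the three state variables; purely as an illustration of what a minimal ecohydrological ODE system of this kind can look like (assumed functional forms and parameters of my own, not the model actually used), a sketch could read:

```python
# Illustrative three-variable savanna model: soil moisture M, grass cover G, tree cover T.
# Functional forms and parameter values are assumptions for illustration only.
from scipy.integrate import solve_ivp

def savanna(t, y, rain=0.6, grazing=0.2, fire=0.3):
    M, G, T = y
    dM = rain - 0.5 * M - 0.8 * M * G - 0.6 * M * T      # water input minus losses and plant uptake
    dG = 1.2 * M * G * (1.0 - G - T) - grazing * G       # moisture-limited grass growth, loss to grazing
    dT = 0.4 * M * T * (1.0 - T) - fire * G * T          # tree growth, loss to grass-fuelled fire
    return [dM, dG, dT]

sol = solve_ivp(savanna, (0.0, 500.0), [0.5, 0.3, 0.1])
print(sol.y[:, -1])   # long-term moisture, grass and tree cover for this parameter set
```

In such a toy system, raising the grazing parameter removes grass, weakens the fire term and lets tree cover expand, which is the qualitative encroachment mechanism discussed in the studies below.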
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and a general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied either; conflicting results appear as a consequence of the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a 'temporal niche'. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, such as the one presented in this work, could protect both the savanna biome and the societies dependent upon it for (economic) survival. All studies which follow are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena, be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
Plasma carotenoids, tocopherols, and retinol in the age-stratified (35–74 years) general population
Blood micronutrient status may change with age. We analyzed plasma carotenoids, α-/γ-tocopherol, and retinol and their associations with age, demographic characteristics, and dietary habits (assessed by a short food frequency questionnaire) in a cross-sectional study of 2118 women and men (age-stratified from 35 to 74 years) of the general population from six European countries. Higher age was associated with lower lycopene and α-/β-carotene and higher β-cryptoxanthin, lutein, zeaxanthin, α-/γ-tocopherol, and retinol levels. Significant correlations with age were observed for lycopene (r = −0.248), α-tocopherol (r = 0.208), α-carotene (r = −0.112), and β-cryptoxanthin (r = 0.125; all p < 0.001). Age was inversely associated with lycopene (−6.5% per five-year age increase) and this association remained in the multiple regression model with the significant predictors (covariables) being country, season, cholesterol, gender, smoking status, body mass index (BMI (kg/m2)), and dietary habits. The positive association of α-tocopherol with age remained when all covariates including cholesterol and use of vitamin supplements were included (1.7% vs. 2.4% per five-year age increase). The association of higher β-cryptoxanthin with higher age was no longer statistically significant after adjustment for fruit consumption, whereas the inverse association of α-carotene with age remained in the fully adjusted multivariable model (−4.8% vs. −3.8% per five-year age increase). We conclude from our study that age is an independent predictor of plasma lycopene, α-tocopherol, and α-carotene.
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions, whose magnetic field signals have amplitudes of only a few nanotesla. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for the other geomagnetic sources, such as the core and lithospheric fields or signals from the quiet-time magnetospheric currents. This study highlights the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures of ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed using the example of four Swarm orbits during very quiet times. The respective orbits cover a broad range of typical scenarios, such as strong and weak ionospheric signals (during day- and nighttime, respectively) superimposed on strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10-15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the EEJ signature is in error by 5 % on average when lithospheric contributions are not considered, which is within the range of uncertainties of present empirical models of the EEJ.
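Schematically, the residual approach referred to above amounts to subtracting model predictions for all non-ionospheric sources from the measurements; the decomposition below uses generic notation of my own, not the exact formulation of the study:

```latex
% Schematic source separation for satellite magnetic data (generic notation)
\Delta\mathbf{B}_{\mathrm{iono}} \approx
\mathbf{B}_{\mathrm{obs}}
- \mathbf{B}_{\mathrm{core}}
- \mathbf{B}_{\mathrm{lithosphere}}
- \mathbf{B}_{\mathrm{magnetosphere}}
```

Any source that is missing or poorly predicted on the right-hand side, for example a few-nanotesla lithospheric anomaly, leaks directly into the inferred ionospheric signal, which is the effect quantified above for the Sq currents and the EEJ.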
Touring Katutura!
Guided sightseeing tours of the former township of Katutura have been offered in Windhoek since the mid-1990s. City tourism in the Namibian capital had thus become, at quite an early point in time, part of the trend towards utilising poor urban areas for purposes of tourism – a trend that set in at the beginning of the same decade. Frequently referred to as “slum tourism” or “poverty tourism”, the phenomenon of guided tours around places of poverty has not only been causing some media sensation and much public outrage since its emergence; in the past few years, it has developed into a vital field of scientific research, too. “Global Slumming” provides the grounds for a rethinking of the relationship between poverty and tourism in world society.
This book is the outcome of a study project of the Institute of Geography at the School of Cultural Studies and Social Science of the University of Osnabrueck, Germany. It represents the first empirical case study on township tourism in Namibia. It focuses on four aspects:
1. Emergence, development and (market) structure of township tourism in Windhoek
2. Expectations/imaginations, representations as well as perceptions of the township and its inhabitants from the tourist’s perspective
3. Perception and assessment of township tourism from the residents’ perspective
4. Local economic effects and the poverty-alleviating impact of township tourism
The aim is to make an empirical contribution to the discussion around the tourism-poverty nexus and to an understanding of the global phenomenon of urban poverty tourism.
In this work, three ligands derived from amino acids were synthesized and used to prepare five bis- and PEPPSI-type palladium–NHC complexes via a novel synthesis route from sustainable starting materials. Three of these complexes were used as precatalysts in the aqueous-phase Suzuki–Miyaura coupling of various substrates and displayed high activity. TEM and mercury poisoning experiments provide evidence for the formation of Pd nanoparticles stabilized in water.
In the interest of producing functional catalysts from sustainable building blocks, 1,3-dicarboxylate imidazolium salts derived from amino acids were successfully modified to be suitable as N-heterocyclic carbene (NHC) ligands within metal complexes. Complexes of Ag(I), Pd(II), and Ir(I) were successfully produced by known procedures using ligands derived from glycine, alanine, β-alanine and phenylalanine. The complexes were characterized in the solid state using X-ray crystallography, which allowed for the steric and electronic comparison of these ligands to well-known NHC ligands within analogous metal complexes.
The palladium complexes were tested as catalysts for aqueous-phase Suzuki-Miyaura cross-coupling. Water solubility could be induced via ester hydrolysis of the N-bound groups in the presence of base. The mono-NHC–Pd complexes were highly active in the coupling of aryl bromides with phenylboronic acid; the active catalyst was determined to be mostly Pd(0) nanoparticles. Kinetic studies showed that the coupling of bromoacetophenone proceeds quickly, both when the catalyst was pre-hydrolyzed before dissolution and when it was hydrolyzed in situ. The catalyst could also be recycled for an extra run by simply re-using the aqueous layer.
The imidazolium salts were also used to produce organosilica hybrid materials. This was attempted via two methods: post-grafting onto a commercial organosilica, and co-condensation of the corresponding organosilane. The co-condensation technique holds potential for the production of solid-supported catalysts.
About a quarter of anthropogenic CO2 emissions are currently taken up by the oceans, decreasing seawater pH. We performed a mesocosm experiment in the Baltic Sea in order to investigate the consequences of increasing CO2 levels on pelagic carbon fluxes. A gradient of different CO2 scenarios, ranging from ambient (~370 µatm) to high (~1200 µatm), was set up in mesocosm bags (~55 m³). We determined standing stocks and temporal changes of total particulate carbon (TPC), dissolved organic carbon (DOC), dissolved inorganic carbon (DIC), and particulate organic carbon (POC) of specific plankton groups. We also measured carbon flux via CO2 exchange with the atmosphere and sedimentation (export), and biological rate measurements of primary production, bacterial production, and total respiration. The experiment lasted for 44 days and was divided into three phases (I: t0-t16; II: t17-t30; III: t31-t43). Pools of TPC, DOC, and DIC were approximately 420, 7200, and 25 200 mmol C m⁻² at the start of the experiment, and the initial CO2 additions increased the DIC pool by ~7% in the highest CO2 treatment. Overall, there was a decrease in TPC and an increase in DOC over the course of the experiment. The decrease in TPC was lower, and the increase in DOC higher, in treatments with added CO2. During phase I the estimated gross primary production (GPP) was ~100 mmol C m⁻² day⁻¹, of which 75-95% was respired, ~1% ended up in the TPC (including export), and 5-25% was added to the DOC pool. During phase II, the respiration loss increased to ~100% of GPP at the ambient CO2 concentration, whereas respiration was lower (85-95% of GPP) in the highest CO2 treatment. Bacterial production was, on average, ~30% lower at the highest CO2 concentration than in the controls during phases II and III. This resulted in a higher accumulation of DOC and a smaller reduction in the TPC pool in the elevated CO2 treatments at the end of phase II, extending throughout phase III. The "extra" organic carbon at high CO2 remained fixed in an increasing biomass of small-sized plankton and in the DOC pool, and did not transfer into large, sinking aggregates. Our results revealed a clear effect of increasing CO2 on the carbon budget and mineralization, in particular under nutrient-limited conditions. Lower carbon loss processes (respiration and bacterial remineralization) at elevated CO2 levels resulted in higher TPC and DOC pools than at the ambient CO2 concentration. These results highlight the importance of addressing not only net changes in carbon standing stocks but also carbon fluxes and budgets to better disentangle the effects of ocean acidification.
The hydrological budget of a region is determined by the horizontal and vertical water fluxes acting in both inward and outward directions. These integrated water fluxes vary, altering the total water storage and consequently the gravitational attraction of the region. The time-dependent gravitational field can be observed by the Gravity Recovery and Climate Experiment (GRACE) gravimetric satellite mission, provided that the mass variation is above the sensitivity of GRACE. This study evaluates mass changes in prominent reservoir regions through three independent approaches, namely fluxes, storages, and gravity, by combining remote sensing products, in situ data and hydrological model outputs from the WaterGAP Global Hydrological Model (WGHM) and the Global Land Data Assimilation System (GLDAS). The results show that the dynamics revealed by the GRACE signal can be better explored by a hybrid method, which combines remote sensing-based reservoir volume estimates with hydrological model outputs, than by exclusively model-based storage estimates. For the given arid/semi-arid regions, GLDAS-based storage estimates perform better than WGHM.
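The flux-based estimate mentioned above corresponds to the standard terrestrial water-balance relation; the notation below is generic and not taken from the paper:

```latex
% Generic terrestrial water balance (illustrative notation)
\frac{dS}{dt} = P - ET - R
```

Here S is total water storage, P precipitation, ET evapotranspiration and R net runoff (outflow minus inflow). GRACE senses the storage change as a regional mass, and hence gravity, variation, which is what the storage-based and flux-based estimates are compared against.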
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival and to adapt production as necessary for different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments. Malfunctions in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the following work uses a common mathematical modeling framework based on Markov chain models to test hypotheses, predict system dynamics or elucidate network topology. Our work lies at the intersection between mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
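As a generic illustration of the kind of Markov chain framework referred to here, the sketch below simulates a textbook two-state ("telegraph") model of transcription with a Gillespie-type algorithm; the model structure and rates are assumed for illustration and are not taken from the dissertation:

```python
# Toy Gillespie simulation of a two-state (telegraph) gene expression model:
# a promoter switches ON/OFF, mRNA is produced while ON and degrades at rate delta.
# Rates and structure are generic textbook assumptions, not fitted values from the thesis.
import random

def telegraph(t_end=100.0, k_on=0.1, k_off=0.2, k_tx=5.0, delta=1.0, seed=1):
    random.seed(seed)
    t, on, mrna = 0.0, False, 0
    while t < t_end:
        rates = [k_off if on else k_on,      # promoter switching
                 k_tx if on else 0.0,        # transcription, only while ON
                 delta * mrna]               # first-order mRNA degradation
        total = sum(rates)
        t += random.expovariate(total)       # waiting time to the next reaction
        r = random.uniform(0.0, total)
        if r < rates[0]:
            on = not on
        elif r < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1
    return mrna

print(telegraph())   # mRNA copy number at the end of the simulated interval
```

Comparing the stationary distribution of such a chain with measured mRNA count data is one typical way that hypotheses about regulation can be tested within this framework.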
Ongoing climate change is known to cause an increase in the frequency and amplitude of local temperature and precipitation extremes in many regions of the Earth. While gradual changes in the climatological conditions have already been shown to strongly influence plant flowering dates, the question arises if and how extremes specifically impact the timing of this important phenological phase. Studying this question calls for the application of statistical methods that are tailored to the specific properties of event time series. Here, we employ event coincidence analysis, a novel statistical tool that allows assessing whether or not two types of events exhibit similar sequences of occurrences in order to systematically quantify simultaneities between meteorological extremes and the timing of the flowering of four shrub species across Germany. Our study confirms previous findings of experimental studies by highlighting the impact of early spring temperatures on the flowering of the investigated plants. However, previous studies solely based on correlation analysis do not allow deriving explicit estimates of the strength of such interdependencies without further assumptions, a gap that is closed by our analysis. In addition to direct impacts of extremely warm and cold spring temperatures, our analysis reveals statistically significant indications of an influence of temperature extremes in the autumn preceding the flowering.
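The core quantity of event coincidence analysis is a coincidence rate, i.e. the fraction of events of one type that are accompanied by an event of the other type within a tolerance window. The sketch below is a schematic implementation of such a rate with invented toy data, not the code or data used in the study:

```python
# Schematic event coincidence rate: fraction of "B" events (e.g. flowering onsets)
# preceded by at least one "A" event (e.g. a temperature extreme) within delta_t days.
# Toy data and window length are invented for illustration.
def coincidence_rate(events_a, events_b, delta_t):
    hits = sum(any(0 <= b - a <= delta_t for a in events_a) for b in events_b)
    return hits / len(events_b) if events_b else float("nan")

extremes  = [60, 75, 120]    # hypothetical day-of-year extreme events
flowering = [78, 95, 131]    # hypothetical flowering dates
print(coincidence_rate(extremes, flowering, delta_t=10))   # 0.33: one of three dates coincides
```

The significance of such a rate is then assessed against the rate expected for randomly placed events, which is what distinguishes ECA from a plain correlation analysis.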
Observed recent and expected future increases in the frequency and intensity of climatic extremes in central Europe may pose critical challenges for domestic tree species. Continuous dendrometer recordings provide a valuable source of information on tree stem radius variations, offering the possibility to study a tree's response to environmental influences at high temporal resolution. In this study, we analyze stem radius variations (SRV) of three domestic tree species (beech, oak, and pine) from 2012 to 2014. We use the novel statistical approach of event coincidence analysis (ECA) to investigate the simultaneous occurrence of extreme daily weather conditions and extreme SRVs, where extremes are defined with respect to the values common at a given phase of the annual growth period. Besides defining extreme events based on individual meteorological variables, we additionally introduce conditional and joint ECA as new multivariate extensions of the original methodology and apply them to test 105 different combinations of variables regarding their impact on SRV extremes. Our results reveal a strong susceptibility of all three species to extremes of several meteorological variables. Yet, the inter-species differences in their response to the meteorological extremes are comparatively small. The obtained results provide a thorough extension of previous correlation-based studies by focusing on the timing of climatic extremes. We suggest that the employed methodological approach should be further promoted in forest research for investigating tree responses to changing environmental conditions.
We consider the Navier-Stokes equations in the layer R^n x [0,T] over R^n with finite T > 0. Using the standard fundamental solutions of the Laplace operator and the heat operator, we reduce the Navier-Stokes equations to a nonlinear Fredholm equation of the form (I+K) u = f, where K is a compact continuous operator in anisotropic normed Hölder spaces weighted at the point at infinity with respect to the space variables. The weight function is included to provide a finite energy estimate for solutions to the Navier-Stokes equations for all t in [0,T]. Using the particular properties of the de Rham complex, we conclude that the Fréchet derivative (I+K)' is continuously invertible at each point of the Banach space under consideration and that the map I+K is open and injective in this space. In this way the Navier-Stokes equations prove to induce an open one-to-one mapping in the scale of Hölder spaces.
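In display form, the setting summarised above is, schematically and with standard notation that may differ in detail from the thesis:

```latex
% Navier-Stokes system in the layer (schematic, standard notation)
\partial_t u - \mu\,\Delta u + (u\cdot\nabla)u + \nabla p = f,
\qquad
\nabla\cdot u = 0
\quad\text{in } \mathbb{R}^n \times [0,T]
```

Inverting the linear heat and Laplace parts by means of their fundamental solutions then turns this system into the nonlinear Fredholm-type equation (I+K)u = f, with K compact and continuous in the weighted anisotropic Hölder scale described above.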
Over the last decades, the world's population has been growing at an increasing rate, resulting in growing urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, and this share is increasing. The growth of cities results in a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn lead to high surface runoff during high-intensity storms and cause accelerated soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it gives rise to the contamination of aquatic bodies, the reduction of groundwater recharge and an increase in land degradation, and also results in damage to urban infrastructure, including drainage systems, houses and roads. Understanding the problem of water erosion in urban settings is essential for the sustainable planning and management of cities prone to water erosion. However, in spite of the vast scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for urban dryland environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. The study used a multidisciplinary approach to assess the problem of water erosion. This included an in-depth literature review on current research approaches and challenges of urban erosion, a field survey method for quantifying the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews using semi-structured questionnaires to analyse stakeholders' perceptions of urban erosion.
The review revealed that around 64% of the reviewed studies were conducted in the developed world, and very few were carried out in regions with extreme climates, including drylands. Furthermore, the applied methods for erosion quantification and monitoring do not include typical urban features and are not specific to urban areas. The reviewed literature also lacked aspects aimed at addressing the issues of climate change and policies regarding erosion in cities. In a field study, the spatial extent and severity of erosion in the dryland city of Windhoek were quantified, and the results show that nearly 56% of the city is affected by water erosion showing signs of accelerated erosion in the form of rills and gullies, which occurred mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) showed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varied largely across the city.
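For reference, the conventional tool used in the comparison above, the Universal Soil Loss Equation, estimates long-term average soil loss as a product of empirical factors (standard textbook form, not a formulation specific to this study):

```latex
% Universal Soil Loss Equation (standard form)
A = R \cdot K \cdot L \cdot S \cdot C \cdot P
```

Here A is the predicted mean annual soil loss, R the rainfall erosivity, K the soil erodibility, L and S the slope length and steepness factors, C the cover management factor and P the support practice factor. None of these factors was designed to represent typical urban surfaces such as sealed or compacted ground, which helps explain the deviation between the USLE map and the field survey.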
The study also analysed stakeholders' perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders using semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions of the causes and seriousness of erosion damage, and their attitudes towards the responsibilities for urban erosion. The results indicated that there is little awareness of the process as a phenomenon; instead, there is greater awareness of erosion damage and of the factors contributing to it. About 69% of the stakeholders considered erosion damage to range from moderate to very serious. However, there were notable disparities between the private householder and public authority groups. The study further found that the stakeholders have no clear understanding of their responsibilities regarding the management of control measures and payment for damage. The private householders and local authority sectors each pointed to the other as responsible for paying for erosion damage and for putting up prevention measures. The reluctance to take responsibility could create a predicament for the affected areas, especially in the informal settlements, where land management is not carried out by the local authority and the land is not owned by the occupants.
The study concluded that, in order to combat urban erosion, it is crucial to understand, at different scales, the diverse dynamics of urbanisation that aggravate the process. Accordingly, the study suggests that there is an urgent need for the development of urban-specific approaches that aim at: (a) incorporating the diverse socio-economic and environmental aspects influencing erosion, (b) scientifically improving the natural cycles that influence water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) making use of high-resolution satellite images to improve the adopted methods for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, photorealistic rendering does not automatically lead to high image quality with respect to effective information transfer, which requires important or prioritized information to be highlighted interactively and in a context-dependent manner.
Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective when aiming at the optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and the selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature types such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides a seamless transition between them according to user tasks, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
The excitation of localized surface plasmons in noble metal nanoparticles (NPs) results in different nanoscale effects such as electric field enhancement, the generation of hot electrons and a temperature increase close to the NP surface. These effects are typically exploited in diverse fields such as surface-enhanced Raman scattering (SERS), NP catalysis and photothermal therapy (PTT). Halogenated nucleobases are applied as radiosensitizers in conventional radiation cancer therapy due to their high reactivity towards secondary electrons. Here, we use SERS to study the transformation of 8-bromoadenine (8BrA) into adenine on the surface of Au and AgNPs upon irradiation with a low-power continuous wave laser at 532, 633 and 785 nm, respectively. The dissociation of 8BrA is ascribed to a hot-electron transfer reaction and the underlying kinetics are carefully explored. The reaction proceeds within seconds or even milliseconds. Similar dissociation reactions might also occur with other electrophilic molecules, which must be considered in the interpretation of respective SERS spectra. Furthermore, we suggest that hot-electron transfer induced dissociation of radiosensitizers such as 8BrA can be applied in the future in PTT to enhance the damage of tumor tissue upon irradiation.
Changes in free symptom attributions in hypochondriasis after cognitive therapy and exposure therapy
(2016)
Background: Cognitive-behavioural therapy can change dysfunctional symptom attributions in patients with hypochondriasis. Past research has used forced-choice answer formats, such as questionnaires, to assess these misattributions; however, with this approach, idiosyncratic attributions cannot be assessed. Free associations are an important complement to existing approaches that assess symptom attributions. Aims: With this study, we contribute to the current literature by using an open-response instrument to investigate changes in freely associated attributions after exposure therapy (ET) and cognitive therapy (CT) compared with a wait list (WL). Method: The current study is a re-examination of a formerly published randomized controlled trial (Weck, Neng, Richtberg, Jakob and Stangier, 2015) that investigated the effectiveness of CT and ET. Seventy-three patients with hypochondriasis were randomly assigned to CT, ET or a WL, and completed a 12-week treatment (or waiting period). Before and after the treatment or waiting period, patients completed an Attribution task in which they had to spontaneously attribute nine common bodily sensations to possible causes in an open-response format. Results: Compared with the WL, both CT and ET reduced the frequency of somatic attributions regarding severe diseases (CT: Hedges's g = 1.12; ET: Hedges's g = 1.03) and increased the frequency of normalizing attributions (CT: Hedges's g = 1.17; ET: Hedges's g = 1.24). Only CT changed the attributions regarding moderate diseases (Hedges's g = 0.69). Changes in somatic attributions regarding mild diseases and psychological attributions were not observed. Conclusions: Both CT and ET are effective for treating freely associated misattributions in patients with hypochondriasis. This study supplements research that used a forced-choice assessment.
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards of glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of thus modeled peak discharges as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. Some 66% of sampled HPP are on potential GLOF tracks, and up to one third of these HPP could experience GLOF discharges well above local design floods as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
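To make the modelled uncertainty concrete, the following minimal Python sketch shows how Monte Carlo draws of poorly constrained outburst volumes and dam-breach durations translate into a spread of peak discharges; the value ranges and the simple volume-over-duration relation are illustrative assumptions only, not the flood-wave propagation model used in the study.

    import numpy as np

    rng = np.random.default_rng(42)
    n_draws = 10_000

    # Hypothetical glacial lake: outburst volume uncertain over two orders of magnitude (m^3).
    volumes = rng.uniform(1e5, 1e7, n_draws)

    # Dam-breach duration (s) is poorly constrained before failure, so it is sampled broadly.
    durations = rng.uniform(600, 7200, n_draws)

    # Placeholder peak-discharge relation (triangular outflow hydrograph): Q_peak = 2 V / t.
    q_peak = 2.0 * volumes / durations

    # The percentile spread mimics the predictive uncertainty discussed above.
    print("Q_peak 5th/50th/95th percentile (m^3/s):",
          np.round(np.percentile(q_peak, [5, 50, 95]), 1))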
In Turkey, there is a shortage of studies on the prevalence of sexual aggression among young adults. The present study examined sexual aggression victimization and perpetration since the age of 15 in a convenience sample of N = 1,376 college students (886 women) from four public universities in Ankara, Turkey. Prevalence rates for different coercive strategies, victim-perpetrator constellations, and sexual acts were measured with a Turkish version of the Sexual Aggression and Victimization Scale (SAV-S). Overall, 77.6% of women and 65.5% of men reported at least one instance of sexual aggression victimization, and 28.9% of men and 14.2% of women reported at least one instance of sexual aggression perpetration. Prevalence rates of sexual aggression victimization and perpetration were highest for current or former partners, followed by acquaintances/friends and strangers. Alcohol was involved in a substantial proportion of the reported incidents. The findings are the first to provide systematic evidence on sexual aggression perpetration and victimization among college students in Turkey, including both women and men.
Sexual Aggression Victimization and Perpetration among Male and Female College Students in Chile
(2016)
Evidence on the prevalence of sexual aggression among college students is primarily based on studies from Western countries. In Chile, a South American country strongly influenced by the Catholic Church, little research on sexual aggression among college students is available. Therefore, the purpose of the present study was to examine the prevalence of sexual aggression victimization and perpetration since the age of 14 (the legal age of consent) in a sample of male and female students aged between 18 and 29 years from five Chilean universities (N = 1135), to consider possible gender differences, and to study the extent to which alcohol was involved in the reported incidents of perpetration and victimization. Sexual aggression victimization and perpetration were measured with a Chilean Spanish version of the Sexual Aggression and Victimization Scale (SAV-S), which includes three coercive strategies (use or threat of physical force, exploitation of an incapacitated state, and verbal pressure), three victim-perpetrator constellations (current or former partners, friends/acquaintances, and strangers), and four sexual acts (sexual touch, attempted sexual intercourse, completed sexual intercourse, and other sexual acts, such as oral sex). Overall, 51.9% of women and 48.0% of men reported at least one incident of sexual victimization, and 26.8% of men and 16.5% of women reported at least one incident of sexual aggression perpetration since the age of 14. For victimization, only a few gender differences were found, but significantly more men than women reported sexual aggression perpetration. A large proportion of perpetrators also reported victimization experiences. Regarding the victim-perpetrator relationship, sexual aggression victimization and perpetration were more common between persons who knew each other than between strangers. Alcohol use by the perpetrator, victim, or both was involved in many incidents of sexual aggression victimization and perpetration, particularly among strangers. The present data are the first to provide a systematic and detailed picture of sexual aggression among college students in Chile, including victimization and perpetration reports by both men and women and confirming the critical role of alcohol established in past research from Western countries.
This paper focuses on the temperature-dependent synthesis of gold nanotriangles in a vesicular template phase containing phosphatidylcholine and AOT, by adding the strongly alternating polyampholyte PalPhBisCarb.
UV-vis absorption spectra in combination with TEM micrographs show that flat gold nanoplatelets are formed predominantly in the presence of the polyampholyte at 45 °C. The formation of triangular and hexagonal nanoplatelets can be directly influenced by the kinetic approach, i.e., by varying the polyampholyte dosage rate at 45 °C. Corresponding zeta potential measurements indicate that a temperature-dependent adsorption of the polyampholyte on the {111} faces induces the symmetry-breaking effect, which is responsible for the kinetically controlled hindered vertical and preferred lateral growth of the nanoplatelets.
Water scarcity, adaptation to climate change, and the risk assessment of droughts and floods are critical topics for science and society these days. Monitoring and modeling of the hydrological cycle are a prerequisite to understand and predict the consequences for weather and agriculture. As soil water storage plays a key role in the partitioning of water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint was reported to be 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signals of nine detectors have been systematically compared, and correction approaches have been revised to account for meteorological and geomagnetic variations. Neutron transport simulations have been used to precisely characterize the sensitive footprint area, which turned out to be 6 to 18 ha, highly local, and temporally dynamic. These results have been experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land-use types were able to accurately capture the various soil moisture states. It has been further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where increased footprints are able to overcome local effects.
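For illustration, a small Python function sketches the kind of multiplicative corrections commonly applied to raw neutron counts before converting them to soil moisture; the functional forms follow widely used CRNS corrections for air pressure, humidity and incoming cosmic-ray intensity, but the parameter values are placeholders rather than those used in this work.

    import math

    def correct_neutron_counts(raw_counts, pressure_hpa, abs_humidity_gm3, incoming_ratio,
                               p_ref=1013.25, h_ref=0.0, beta=0.0076, alpha=0.0054):
        """Illustrative CRNS count correction; all parameter values are placeholders."""
        f_pressure = math.exp(beta * (pressure_hpa - p_ref))    # more air mass suppresses counts
        f_humidity = 1.0 + alpha * (abs_humidity_gm3 - h_ref)   # water vapour moderates neutrons
        f_incoming = 1.0 / incoming_ratio                       # normalise to reference cosmic-ray flux
        return raw_counts * f_pressure * f_humidity * f_incoming

    print(correct_neutron_counts(1200, pressure_hpa=990.0, abs_humidity_gm3=8.0, incoming_ratio=1.02))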
This dissertation not only bridges the gap between scales of soil moisture measurement; it also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
In this thesis, the two prototype catalysts Fe(CO)₅ and Cr(CO)₆ are investigated with time-resolved photoelectron spectroscopy at a high harmonic setup. In both of these metal carbonyls, a UV photon can induce the dissociation of one or more ligands of the complex. The mechanism of the dissociation has been debated over the last decades. The electronic dynamics of the first dissociation occur on the femtosecond timescale.
For the experiment, an existing high harmonic setup was moved to a new location, extended, and characterized. The modified setup can induce dynamics in gas-phase samples with photon energies of 1.55 eV, 3.10 eV, and 4.65 eV. The valence electronic structure of the samples can be probed with photon energies between 20 eV and 40 eV. The temporal resolution is 111 fs to 262 fs, depending on the combination of the two photon energies.
The electronically excited intermediates of the two complexes, as well as of the reaction product Fe(CO)₄, could be observed with photoelectron spectroscopy in the gas phase for the first time. However, photoelectron spectroscopy gives access only to the final ionic states. Corresponding calculations to simulate these spectra are still in development. The peak energies and their evolution in time with respect to the initiating pump pulse have been determined, and these peaks have been assigned based on literature data. The spectra of the two complexes show clear differences. The dynamics have been interpreted with the assumption that the motion of peaks in the spectra relates to the movement of the wave packet in the multidimensional energy landscape. The results largely confirm existing models for the reaction pathways. In both metal carbonyls, this pathway involves a direct excitation of the wave packet to a metal-to-ligand charge transfer state and the subsequent crossing to a dissociative ligand field state. The coupling of the electronic dynamics to the nuclear dynamics could explain the slower dissociation in Fe(CO)₅ as compared to Cr(CO)₆.
It is the intention of this study to contribute to further rethinking and innovation in the Microcredit business, which stands at a turning point: after around 40 years of practice, it is in danger of failing as a tool for economic development and of becoming instead a dubious finance product with an arbitrary scope. So far, a positive impact of Microfinance on the improvement of the lives of the poor could not be confirmed. Over-indebtedness of borrowers due to the predominance of consumption Microcredits has become a widespread problem. Furthermore, a rising number of abusive and commercially excessive practices have been reported.
In fact, the Microfinance sector appears to suffer from a major underlying deficit: there is no coherent and transparent understanding of its meaning and objectives, so that Microfinance providers worldwide follow their own approaches to Microfinance, which tend to differ considerably from one another.
In this sense, the study aims at consolidating the multi-faceted and often confusingly different Microcredit profiles that exist today. Subsequently, the Microfinance spectrum is narrowed to one clear-cut objective: away from the mere monetary transactions with poor people to which it has gradually been reduced, and back towards a tool for economic development as originally envisaged by its pioneers.
Hence, the fundamental research question of this study is whether, and under which conditions, Microfinance may attain a positive economic impact leading to an improvement in the lives of the poor.
The study is structured in five parts: the three main parts (II.–IV.) are framed by an introduction (I.) and a conclusion (V.). In part II., the Microfinance sector is analysed critically in order to identify the persisting challenges as well as their root causes. In the third part, a change to the macroeconomic perspective is undertaken in order to learn about the potential and requirements of small-scale finance to enhance economic development, particularly within the economic context of less developed countries. In part IV., the insights gained are consolidated and the elements of a new concept of Microfinance, with the objective of achieving economic development for its borrowers, are elaborated.
Microfinance is a rather sensitive business whose great fundamental idea is easily corruptible and whose recipients are predestined victims of abuse due to their limited knowledge of finance. It therefore needs to be practiced responsibly, and according to clear-cut definitions of its meaning and objectives with which all institutions active in the sector should be committed to comply. This is especially relevant as the demand for Microfinance services is expected to rise further in the years to come. For example, the recent refugee migration movement towards Europe entails a vast potential for Microfinance to enable these people to make a new start in economic life. This goes to show that Microfinance may no longer be associated mainly with a less developed economic context, but that it will gain importance as a financial instrument in developed economies, too.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework where programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast to that, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution for this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we will explore the design, implementation and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We will present Transmorphic, a functional take on the Morphic GUI framework, where the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensiveness and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always in reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
This doctoral thesis seeks to elaborate how Wittgenstein’s very sparse writings on ethics and ethical thought, together with his later work on the more general problem of normativity and his approach to philosophical problems as a whole, can be applied to contemporary meta-ethical debates about the nature of moral thought and language and the sources of moral obligation. I begin with a discussion of Wittgenstein’s early “Lecture on Ethics”, distinguishing the thesis of a strict fact/value dichotomy that Wittgenstein defends there from the related thesis that all ethical discourse is essentially and intentionally nonsensical, an attempt to go beyond the limits of sense. The first chapter discusses and defends Wittgenstein’s argument that moral valuation always goes beyond any ascertaining of fact; the second chapter seeks to draw out the valuable insights from Wittgenstein’s (early) insistence that value discourse is nonsensical while also arguing that this thesis is ultimately untenable and also incompatible with later Wittgensteinian understanding of language. On the basis of this discussion I then take up the writings of the American philosopher Cora Diamond, who has worked out an ethical approach in a very closely Wittgensteinian spirit, and show how this approach shares many of the valuable insights of the moral expressivism and constructivism of contemporary authors such as Blackburn and Korsgaard while suggesting a way to avoid some of the problems and limitations of their approaches. Subsequently I turn to a criticism of the attempts by Lovibond and McDowell to enlist Wittgenstein in the support for a non-naturalist moral realism. A concluding chapter treats the ways that a broadly Wittgensteinian conception expands the subject of metaethics itself by questioning the primacy of discursive argument in moral thought and of moral propositions as the basic units of moral belief.
The optical properties of semiconductor nanocrystals (SC NCs) are largely controlled by their size and surface chemistry, i.e., the chemical composition and thickness of inorganic passivation shells and the chemical nature and number of surface ligands as well as the strength of their bonds to surface atoms. The latter is particularly important for CdTe NCs, which – together with alloyed CdxHg1−xTe – are the only SC NCs that can be prepared in water in high quality without the need for an additional inorganic passivation shell. Aiming at a better understanding of the role of stabilizing ligands for the control of the application-relevant fluorescence features of SC NCs, we assessed the influence of two of the most commonly used monodentate thiol ligands, thioglycolic acid (TGA) and mercaptopropionic acid (MPA), on the colloidal stability, photoluminescence (PL) quantum yield (QY), and PL decay behavior of a set of CdTe NC colloids. As an indirect measure for the strength of the coordinative bond of the ligands to SC NC surface atoms, the influence of the pH (pD) and the concentration on the PL properties of these colloids was examined in water and D2O and compared to the results from previous dilution studies with a set of thiol-capped Cd1−xHgxTe SC NCs in D2O. As a prerequisite for these studies, the number of surface ligands was determined photometrically at different steps of purification after SC NC synthesis with Ellman's test. Our results demonstrate ligand control of the pH-dependent PL of these SC NCs, with MPA-stabilized CdTe NCs being less prone to luminescence quenching than TGA-capped ones. For both types of CdTe colloids, ligand desorption is more pronounced in H2O compared to D2O, underlining also the role of hydrogen bonding and solvent molecules.
This cumulative dissertation contains four self-contained articles which are related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can at all be scientifically justified and legitimated on theoretical and empirical grounds from an economics point of view. The first two articles of the thesis (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) enter into one particular aspect of the debate regarding the justification and legitimisation of EU regional policy. They theoretically and empirically analyse whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU are inhibiting internal migration, which is one of the key measures in achieving convergence among the nations in the single European market. It becomes clear that European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper instead of promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is not unambiguous, i.e. different empirical investigations show that EU structural funds hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily and unambiguously answered on empirical grounds. This finding is unsatisfying but is in line with previous theoretical and empirical literature. That is why I take a step back and reconsider the theoretical beginnings of the thesis, which took for granted neoclassical market failure theory as the starting point for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article of the thesis (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy as well as the policy recommendations given to EU regional policymakers deduced from neoclassical market failure theory. The article elucidates that neoclassical market failure is a normative concept, which justifies and legitimates EU regional policy based on a political and thus subjective goal or value-judgement. It can neither be used, therefore, to give a scientifically positive explanation of the structural funds nor to obtain objective and practically applicable policy instruments. Given this critique of neoclassical market failure theory, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate to provide a realistic understanding of one of the largest policies conducted by the EU. However, this does not mean that evolutionary economic theory can be unreservedly seen either as the panacea to positively explain EU regional policy or as a means to derive objective policy instruments for EU regional policymakers.
This issue is discussed in the fourth article of the thesis (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”). This article reconsiders the explanation of economic policy from an evolutionary economics perspective. It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can neither theoretically nor empirically be justified and legitimated from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because these theories can neither scientifically explain the emergence and development of this policy nor are they appropriate to derive objective and scientific policy instruments for EU regional policymakers.
Mountain and upland regions provide a wide range of ecosystem services to residents and visitors. While ecosystem research in mountain regions is on the rise, the linkages between sociocultural benefits and ecological systems remain little explored. Mountainous regions close to urban areas provide numerous benefits to a large number of individuals, suggesting a high social value, particularly for cultural ecosystem services. We explored and compared visitors' valuation of ecosystem services in the Pentland Hills, an upland range close to the city of Edinburgh, Scotland, and urban green spaces within Edinburgh. Based on 715 responses to user surveys in both study areas, we identified intense use and high social value for both areas. Several ecosystem services were perceived as equally important in both areas, including many cultural ecosystem services. Significant differences were revealed in the value of physically using nature, which Pentland Hills users rated more highly than those in the urban green spaces, and of mitigation of pollutants and carbon sequestration, for which the urban green spaces were valued more highly. Major differences were further identified for preferences in future land management, with nature-oriented management preferred by about 57% of the interviewees in the Pentland Hills, compared to 31% in the urban parks. The study highlights the substantial value of upland areas in close vicinity to a city for physically using and experiencing nature, with a strong acceptance of nature conservation.
Ecosystem services have a significant impact on human wellbeing. While ecosystem services are frequently represented by monetary values, social values and the underlying social benefits remain underexplored. The purpose of this study is to assess whether and how social benefits have been explicitly addressed within socio-economic and socio-cultural ecosystem services research, ultimately allowing a better understanding of the link between ecosystem services and human well-being. In this paper, we reviewed 115 international primary valuation studies and tested four hypotheses associated with the identification of social benefits of ecosystem services using logistic regressions. The tested hypotheses were that (1) social benefits are mostly derived in studies that assess cultural ecosystem services as opposed to other ecosystem service types, (2) there is a pattern of social benefits and certain cultural ecosystem services being assessed simultaneously, (3) monetary valuation techniques go beyond expressing monetary values and convey social benefits, and (4) directly addressing stakeholders' views fosters the consideration of social benefits in ecosystem service assessments. Our analysis revealed that (1) a variety of social benefits are valued in studies that assess any of the four ecosystem service types, (2) certain social benefits are likely to co-occur in combination with certain cultural ecosystem services, (3) of the studies that employed monetary valuation techniques, simulated market approaches overlapped most frequently with the assessment of social benefits, and (4) studies that directly incorporate stakeholders' views were more likely to also assess social benefits.
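As a purely hypothetical sketch of the type of analysis described (a binary indicator of whether social benefits are addressed, regressed on study characteristics), the following Python snippet uses invented study codings and column names:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented coding table for reviewed valuation studies (not the actual review data).
    studies = pd.DataFrame({
        "social_benefit":   [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0],
        "cultural_es":      [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1],
        "stakeholder_view": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    })

    # Logistic regression: do assessing cultural ecosystem services or directly
    # addressing stakeholders' views predict that social benefits are addressed?
    model = smf.logit("social_benefit ~ cultural_es + stakeholder_view", data=studies).fit(disp=0)
    print(np.exp(model.params).round(2))  # coefficients expressed as odds ratios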
Udmurt as an OV language
(2016)
This is the first study to investigate Hubert Haider's (2000, 2010, 2013, 2014) proposed systematic differences between OV and VO languages in a family other than Germanic. Its aim is to gather evidence on whether basic word order is predictive of further properties of a language. The languages under investigation are the Finno-Ugric languages Udmurt (as an OV language) and Finnish (as a VO language). Counter to Kayne (1994), Haider proposes that the structure of a sentence with a head-final VP is fundamentally different from that of a sentence with a head-initial VP; e.g., OV languages do not exhibit a VP-shell structure, and they do not employ a TP layer with a structural subject position. Haider's proposed structural differences are said to result in the following empirically testable differences:
(a) VP: the availability of VP-internal adverbial intervention and scrambling only in OV-VPs;
(b) subjects: the lack of certain subject-object asymmetries in OV languages, i.e., lack of the subject condition and lack of superiority effects;
(c) V-complexes: the availability of partial predicate fronting only in OV languages; different orderings between selecting and selected verbs; the intervention of non-verbal material between verbs only in VO languages;
(d) V-particles: differences in the distribution of resultative phrases and verb particles.
Udmurt and Finnish behave in line with Haider's predictions with regard to the status of the subject, with regard to the order of selecting and selected verbs, and with regard to the availability of partial predicate fronting. Moreover, Udmurt allows for adverbial intervention and scrambling, as predicted, whereas the status of these properties in Finnish could not be reliably determined due to obligatory V-to-T. There is also counterevidence to Haider's predictions: Udmurt allows for non-verbal material between verbs, and the distribution of resultative phrases and verb particles is essentially as free as the distribution of adverbial phrases in both Finno-Ugric languages. As such, Haider's theory is not falsified by the data from Udmurt and Finnish (except for his theory on verb particles), but it is also not fully supported by the data.
Robust appraisals of climate impacts at different levels of global-mean temperature increase are vital to guide assessments of dangerous anthropogenic interference with the climate system. The 2015 Paris Agreement includes a two-headed temperature goal: "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C". Despite the prominence of these two temperature limits, a comprehensive overview of the differences in climate impacts at these levels is still missing. Here we provide an assessment of key impacts of climate change at warming levels of 1.5 °C and 2 °C, including extreme weather events, water availability, agricultural yields, sea-level rise and risk of coral reef loss. Our results reveal substantial differences in impacts between a 1.5 °C and 2 °C warming that are highly relevant for the assessment of dangerous anthropogenic interference with the climate system. For heat-related extremes, the additional 0.5 °C increase in global-mean temperature marks the difference between events at the upper limit of present-day natural variability and a new climate regime, particularly in tropical regions. Similarly, this warming difference is likely to be decisive for the future of tropical coral reefs. In a scenario with an end-of-century warming of 2 °C, virtually all tropical coral reefs are projected to be at risk of severe degradation due to temperature-induced bleaching from 2050 onwards. This fraction is reduced to about 90% in 2050 and projected to decline to 70% by 2100 for a 1.5 °C scenario. Analyses of precipitation-related impacts reveal distinct regional differences, and hot-spots of change emerge. Regional reduction in median water availability for the Mediterranean is found to nearly double from 9% to 17% between 1.5 °C and 2 °C, and the projected lengthening of regional dry spells increases from 7% to 11%. Projections for agricultural yields differ between crop types as well as world regions. While some (in particular high-latitude) regions may benefit, tropical regions like West Africa, South-East Asia, as well as Central and northern South America are projected to face substantial local yield reductions, particularly for wheat and maize. Best-estimate sea-level rise projections based on two illustrative scenarios indicate a 50 cm rise by 2100 relative to year-2000 levels for a 2 °C scenario, and about 10 cm lower levels for a 1.5 °C scenario. In a 1.5 °C scenario, the rate of sea-level rise in 2100 would be reduced by about 30% compared to a 2 °C scenario. Our findings highlight the importance of regional differentiation to assess both future climate risks and different vulnerabilities to incremental increases in global-mean temperature. The article provides a consistent and comprehensive assessment of existing projections and a good basis for future work on refining our understanding of the difference between impacts at 1.5 °C and 2 °C warming.
The aim of this work is the evaluation of the geothermal potential of Luxembourg. The approach consists in a joint interpretation of different types of information necessary for a first rather qualitative assessment of deep geothermal reservoirs in Luxembourg and the adjoining regions in the surrounding countries of Belgium, France and Germany. For the identification of geothermal reservoirs by exploration, geological, thermal, hydrogeological and structural data are necessary. Until recently, however, reliable information about the thermal field and the regional geology, and thus about potential geothermal reservoirs, was lacking. Before a proper evaluation of the geothermal potential can be performed, a comprehensive survey of the geology and an assessment of the thermal field are required.
As a first step, the geology and basin structure of the Mesozoic Trier–Luxembourg Basin (TLB) are reviewed and updated using recently published information on the geology and structures as well as borehole data available in Luxembourg and the adjoining regions. A Bouguer map is used to gain insight into the depth, morphology and structures of the Variscan basement buried beneath the Trier–Luxembourg Basin. The geological section of the old Cessange borehole is reinterpreted and provides, in combination with the available borehole data, consistent information for the production of isopach maps. The latter visualize the synsedimentary evolution of the Trier–Luxembourg Basin. Complementary basin-wide cross sections illustrate the evolution and structure of the Trier–Luxembourg Basin. The knowledge gained does not support the old concept of the Weilerbach Mulde. The basin-wide cross sections, as well as the structural and sedimentological observations in the Trier–Luxembourg Basin, suggest that the latter probably formed above a zone of weakness related to a buried Rotliegend graben. The inferred graben structure, designated the SE-Luxembourg Graben (SELG), is located in direct southwestern continuation of the Wittlicher Rotliegend-Senke.
The lack of deep boreholes and of subsurface temperature prognoses at depth is circumvented by using thermal modelling to infer the geothermal resource at depth. For this approach, sound structural, geological and petrophysical input data are required. Conceptual geological cross sections encompassing the entire crust are constructed and further simplified and extended to lithospheric scale for their use as thermal models. The 2-D steady-state and conductive models are parameterized by means of measured petrophysical properties including thermal conductivity, radiogenic heat production and density. A surface heat flow of 75 ± 7 (2σ) mW m⁻² could be determined in the area for verification of the thermal models. The models are further constrained by the geophysically estimated depth of the lithosphere–asthenosphere boundary (LAB), defined by the 1300 °C isotherm. A LAB depth of 100 km, as seismically derived for the Ardennes, provides the best fit with the measured surface heat flow. The resulting mantle heat flow amounts to ∼40 mW m⁻². Modelled temperatures are in the range of 120–125 °C at 5 km depth and of 600–650 °C at the crust/mantle discontinuity (Moho). Possible thermal consequences of the 10–20 Ma old Eifel plume, which apparently caused upwelling of the asthenospheric mantle to 50–60 km depth, were modelled in a steady-state thermal scenario resulting in a surface heat flow of at least 91 mW m⁻² (for a plume top at 60 km) in the Eifel region. Available surface heat-flow values are significantly lower (65–80 mW m⁻²) and indicate that the plume-related heating has not yet entirely reached the surface.
Once conceptual geological models are established and the thermal regime is assessed, the geothermal potential of Luxembourg and the surrounding areas is evaluated by additional consideration of the hydrogeology, the stress field and tectonically active regions. On the one hand, low-enthalpy hydrothermal reservoirs in Mesozoic reservoirs in the Trier–Luxembourg Embayment (TLE) are considered. On the other hand, petrothermal reservoirs in the Lower Devonian basement of the Ardennes and Eifel regions are considered for exploitation by Enhanced/Engineered Geothermal Systems (EGS). Among the Mesozoic aquifers, the Buntsandstein aquifer characterized by temperatures of up to 50 °C is a suitable hydrothermal reservoir that may be exploited by means of heat pumps or provide direct heat for various applications. The most promising area is the zone of the SE–Luxembourg Graben. The aquifer is warmest underneath the upper Alzette River valley and the limestone plateau in Lorraine, where the Buntsandstein aquifer lies below a thick Mesozoic cover. At the base of an inferred Rotliegend graben in the same area, temperatures of up to 75 °C are expected. However, geological and hydraulic conditions are uncertain. In the Lower Devonian basement, thick sandstone-/quartzite-rich formations with temperatures >90 °C are expected at depths >3.5 km and likely offer the possibility of direct heat use. The setting of the Südeifel (South Eifel) region, including the Müllerthal region near Echternach, as a tectonically active zone may offer the possibility of deep hydrothermal reservoirs in the fractured Lower Devonian basement. Based on the recent findings about the structure of the Trier–Luxembourg Basin, the new concept presents the Müllerthal–Südeifel Depression (MSD) as a Cenozoic structure that remains tectonically active and subsiding, and therefore is relevant for geothermal exploration. Beyond direct use of geothermal heat, the expected modest temperatures at 5 km depth (about 120 °C) and increased permeability by EGS in the quartzite-rich Lochkovian could prospectively enable combined geothermal heat production and power generation in Luxembourg and the western realm of the Eifel region.
The title compounds, [(1R,3R,4R,5R,6S)-4,5-bis(acetyloxy)-7-oxo-2-oxabicyclo[4.2.0]octan-3-yl]methyl acetate, C14H18O8, (I), (1S,4R,5S,6R)-5-acetyloxy-7-hydroxyimino-2-oxobicyclo[4.2.0]octan-4-yl acetate, C11H15NO6, (II), and [(3aR,5R,6R,7R,7aS)-6,7-bis(acetyloxy)-2-oxooctahydropyrano[3,2-b]pyrrol-5-yl]methyl acetate, C14H19NO8, (III), are stable bicyclic carbohydrate derivatives. They can easily be synthesized in a few steps from commercially available glycals. As a result of the ring strain from the four-membered rings in (I) and (II), the conformations of the carbohydrates deviate strongly from the ideal chair form. Compound (II) occurs in the boat form. In the five-membered lactam (III), on the other hand, the carbohydrate adopts an almost ideal chair conformation. As a result of the distortion of the sugar rings, the configurations of the three bicyclic carbohydrate derivatives could not be determined from their NMR coupling constants. From our three crystal structure determinations, we were able to establish for the first time the absolute configurations of all new stereocenters of the carbohydrate rings.
We present an X-ray-optical cross-correlator for the soft (> 150 eV) up to the hard X-ray regime based on a molybdenum-silicon superlattice. The cross-correlation is done by probing intensity and position changes of superlattice Bragg peaks caused by photoexcitation of coherent phonons. This approach is applicable for a wide range of X-ray photon energies as well as for a broad range of excitation wavelengths and requires no external fields or changes of temperature. Moreover, the cross-correlator can be employed on a 10 ps or 100 fs time scale featuring up to 50% total X-ray reflectivity and transient signal changes of more than 20%.
The main results of this thesis are formulated for a class of surfaces (varifolds) that generalizes closed, connected smooth submanifolds of Euclidean space and allows singularities. Consider an indecomposable varifold of dimension at least two in some Euclidean space such that the first variation is locally bounded, the total variation is absolutely continuous with respect to the weight measure, the density of the weight measure is at least one outside a set of weight measure zero, and the generalized mean curvature is locally summable to a natural power (the dimension of the varifold minus one) with respect to the weight measure. For such varifolds, the thesis presents an improved estimate of the set where the lower density is small, in terms of the one-dimensional Hausdorff measure. Moreover, if the support of the weight measure is compact, then the intrinsic diameter with respect to the support of the weight measure is estimated in terms of the generalized mean curvature. This estimate is analogous to Peter Topping's diameter control for closed connected manifolds smoothly immersed in Euclidean space. Previously, it was not known whether the hypotheses of this thesis imply that two points in the support of the weight measure have finite geodesic distance.
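For orientation, the smooth-case prototype of such a diameter bound, due to Topping, can be stated as follows; the thesis establishes an analogue in the far more general varifold setting sketched above, so the formula serves only as a schematic reference point:

    \operatorname{diam}(M) \;\le\; C(m) \int_M |H|^{m-1} \, \mathrm{d}\mu ,

where M is an m-dimensional closed, connected submanifold of Euclidean space, H its mean curvature vector, \mu the induced surface measure, and C(m) a dimensional constant.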
This study addressed the role of reading motivation as a potential determinant of losses or gains in reading competence over six weeks of summer vacation (SV). Based on a sample of 223 third-grade elementary students, structural equation analyses showed that intrinsic reading motivation before SV contributed positively to both word and sentence comprehension after SV when controlling for comprehension performance before SV. These effects were mediated by reading amount. Extrinsic reading motivation did not show significant associations with end-of-summer comprehension scores. Taken together, the findings suggest that intrinsic reading motivation facilitates students’ development of reading comprehension over SV.
This dissertation examines the impact of the type of referring expression on the acquisition of word order variation in German-speaking preschoolers. A puzzle in the area of language acquisition concerns the production-comprehension asymmetry for non-canonical sentences like "Den Affen fängt die Kuh." (“The monkey, the cow chases.”), that is, preschoolers usually have difficulties in accurately understanding non-canonical sentences approximately until age six (e.g., Dittmar et al., 2008) although they produce non-canonical sentences already around age three (e.g., Poeppel & Wexler, 1993; Weissenborn, 1990). This dissertation investigated the production and comprehension of non-canonical sentences to address this issue.
Three corpus analyses were conducted to investigate the impact of givenness, topic status and the type of referring expression on word order in the spontaneous speech of two- to four-year-olds and the child-directed speech produced by their mothers. The positioning of the direct object in ditransitive sentences was examined; in particular, sentences in which the direct object occurred before or after the indirect object in the sentence-medial positions and sentences in which it occurred in the sentence-initial position. The results reveal similar ordering patterns for children and adults. Word order variation was to a large extent predictable from the type of referring expression, especially with respect to the word order involving the sentence-medial positions. Information structure (e.g., topic status) had an additional impact only on word order variation that involved the sentence-initial position.
Two comprehension experiments were conducted to investigate whether the type of referring expression and topic status influence the comprehension of non-canonical transitive sentences in four- and five-year-olds. In the first experiment, the topic status of one of the sentential arguments was established via a preceding context sentence, and in the second experiment, the type of referring expression for the sentential arguments was additionally manipulated by using either a full lexical noun phrase (NP) or a personal pronoun. The results demonstrate that children’s comprehension of non-canonical sentences improved when the topic argument was realized as a personal pronoun, and this improvement was independent of the grammatical role of the arguments. However, children’s comprehension was not improved when the topic argument was realized as a lexical NP.
In sum, the results of both production and comprehension studies support the view that referring expressions may be seen as a sentence-level cue to word order and to the information status of the sentential arguments. The results highlight the important role of the type of referring expression on the acquisition of word order variation and indicate that the production-comprehension asymmetry is reduced when the type of referring expression is considered.
The enormous species richness of flowering plants is at least partly due to floral diversification driven by interactions between plants and their animal pollinators [1, 2]. Specific pollinator attraction relies on visual and olfactory floral cues [3-5]; floral scent can not only attract pollinators but also attract or repel herbivorous insects [6-8]. However, despite its central role for plant-animal interactions, the genetic control of floral scent production and its evolutionary modification remain incompletely understood [9-13]. Benzenoids are an important class of floral scent compounds that are generated from phenylalanine via several enzymatic pathways [14-17]. Here we address the genetic basis of the loss of floral scent associated with the transition from outbreeding to selfing in the genus Capsella. While the outbreeding C. grandiflora emits benzaldehyde as a major constituent of its floral scent, this has been lost in the selfing C. rubella. We identify the Capsella CNL1 gene encoding cinnamate: CoA ligase as responsible for this variation. Population genetic analysis indicates that CNL1 has been inactivated twice independently in C. rubella via different novel mutations to its coding sequence. Together with a recent study in Petunia [18], this identifies cinnamate: CoA ligase as an evolutionary hotspot for mutations causing the loss of benzenoid scent compounds in association with a shift in the reproductive strategy of Capsella from pollination by insects to self-fertilization.
Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Earth’s radiation balance is altered by aerosols depending on their size, morphology and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used properties of this kind, linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). While this model offers a reasonable treatment for spherically approximated particles, it does not provide a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is a non-linear, ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency towards highly oscillatory solutions, explores the available options for a generalized solution through regularization methods, and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, which are associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is so computationally costly that it would limit a retrieval analysis to an impractical extent. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations which are further time-consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software package (SphInX: Spheroidal Inversion eXperiments) especially developed for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals are performed simulating various atmospheric scenarios in order to test the efficiency of different regularization methods. The gap in the contemporary literature in providing full sets of uncertainties across a wide variety of numerical instances is a major concern here. To address it, the most appropriate methods are identified through a thorough analysis of their overall behavior with regard to accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases, giving further insight for future algorithm improvements.
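To indicate the flavour of the regularized inversion involved (without reproducing the SphInX implementation), a minimal Tikhonov-type sketch in Python with an invented, ill-conditioned forward matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented discretized forward model: optical data b = A x for an unknown
    # (shape-)size distribution x; the smooth exponential kernel makes A ill-conditioned.
    n_data, n_bins = 6, 40
    sizes = np.linspace(0.1, 2.0, n_bins)
    A = np.exp(-np.outer(np.linspace(0.0, 5.0, n_data), sizes))

    x_true = np.exp(-0.5 * ((np.arange(n_bins) - 20) / 5.0) ** 2)   # smooth "true" distribution
    b = A @ x_true + 0.01 * rng.standard_normal(n_data)             # noisy synthetic data

    # Tikhonov regularization: minimise ||A x - b||^2 + lam ||x||^2 for a small lam > 0.
    lam = 1e-3
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ b)

    print("relative reconstruction error:",
          np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))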
Introduction: Adequate cognitive function in patients is a prerequisite for successful implementation of patient education and lifestyle coping in comprehensive cardiac rehabilitation (CR) programs. Although the association between cardiovascular diseases and cognitive impairments (CIs) is well known, the prevalence particularly of mild CI in CR and the characteristics of affected patients have been insufficiently investigated so far.
Methods: In this prospective observational study, 496 patients (54.5 ± 6.2 years, 79.8% men) with coronary artery disease following an acute coronary event (ACE) were analyzed. Patients were enrolled in a 3-week inpatient CR program within 14 days of discharge from the hospital. Patients were tested for CI using the Montreal Cognitive Assessment (MoCA) upon admission to and discharge from CR. Additionally, sociodemographic, clinical, and physiological variables were documented. The data were analyzed descriptively and in a multivariate stepwise backward-elimination regression model with respect to CI.
Results: At admission to CR, CI (MoCA score < 26) was identified in 182 patients (36.7%). Significant differences between the CI and no-CI groups were found: the CI group showed a higher prevalence of smoking (65.9 vs 56.7%, P = 0.046), heavy (physically demanding) workloads (26.4 vs 17.8%, P < 0.001), and sick leave longer than 1 month prior to CR (28.6 vs 18.5%, P = 0.026), as well as reduced exercise capacity (102.5 vs 118.8 W, P = 0.006) and a shorter 6-min walking distance (401.7 vs 421.3 m, P = 0.021). The age- and education-adjusted model showed positive associations with CI only for sick leave of more than 1 month prior to ACE (odds ratio [OR] 1.673, 95% confidence interval 1.07–2.79; P = 0.03) and heavy workloads (OR 2.18, 95% confidence interval 1.42–3.36; P < 0.01).
Conclusion: The prevalence of CI in CR was considerably high, affecting more than one-third of cardiac patients. Besides age and education level, CI was associated with heavy workloads and a longer sick leave before ACE.
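For readers unfamiliar with the modelling step mentioned above, the following is a generic sketch of a stepwise backward-elimination logistic regression; the data frame and column names are hypothetical, and the published model additionally adjusted for age and education with the study's own covariates.

    # Sketch of backward elimination on a logistic regression (hypothetical data).
    import pandas as pd
    import statsmodels.api as sm

    def backward_elimination(df, outcome, predictors, threshold=0.05):
        """Refit, dropping the least significant predictor until all p-values < threshold."""
        selected = list(predictors)
        while selected:
            X = sm.add_constant(df[selected])
            model = sm.Logit(df[outcome], X).fit(disp=False)
            pvals = model.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] < threshold:
                return model              # all remaining predictors significant
            selected.remove(worst)        # drop the weakest predictor and refit
        return None

    # Example call with hypothetical column names:
    # model = backward_elimination(df, "cognitive_impairment",
    #                              ["heavy_workload", "sick_leave_gt_1month", "smoking"])
    # model.params are log-odds; exponentiate them to obtain odds ratios.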
Dependency Resolution Difficulty Increases with Distance in Persian Separable Complex Predicates
(2016)
Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb renders it either equally predictable or more predictable, leading respectively to no effect of distance or to a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element (a noun in the current study) and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects. As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
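The entropy measure referred to above is the standard Shannon entropy over the distribution of completions produced offline. A minimal sketch follows; the counts and the Persian light verbs used as labels are invented for illustration and do not reproduce the study's materials.

    # Sketch: Shannon entropy (in bits) over verb continuations from a completion study.
    import math

    def completion_entropy(counts):
        """Entropy of the distribution of produced completions for one item."""
        total = sum(counts.values())
        probs = [c / total for c in counts.values() if c > 0]
        return -sum(p * math.log2(p) for p in probs)

    # Hypothetical counts for one item in a short- vs long-distance condition:
    short_condition = {"kardan (do)": 28, "zadan (hit)": 2}
    long_condition = {"kardan (do)": 18, "zadan (hit)": 6, "ungrammatical": 6}
    print(completion_entropy(short_condition))   # lower entropy
    print(completion_entropy(long_condition))    # higher entropy

A more dispersed set of continuations, e.g. due to ungrammatical completions in the long-distance condition, yields higher entropy at the verb.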
This article explores a recent performance of excerpts from T.S. Eliot’s Four Quartets (1935/36–1942) entitled Engaging Eliot: Four Quartets in Word, Color, and Sound as an example of live poetry. In this context, Eliot’s poem can be analysed as an auditory artefact that interacts strongly with other oral performances (welcome addresses and artists’ conversations), as well as with the musical performance of Christopher Theofanidis’s quintet “At the Still Point” at the end of the opening of Engaging Eliot. The event served as an introduction to a 13-day art exhibition and engaged in a re-evaluation of Eliot’s poem after 9/11: while its first part emphasises the connection between Eliot’s poem and Christian doctrine, its second part – especially the combination of poetry reading and musical performance – highlights the philosophical and spiritual dimensions of Four Quartets.
The gravitational field of a laser pulse of finite lifetime is investigated in the framework of linearized gravity. Although the effects are very small, they may be of fundamental physical interest. It is shown that the gravitational field of a linearly polarized light pulse is modulated as the norm of the corresponding electric field strength, while no modulations arise for circular polarization. In general, the gravitational field is independent of the polarization direction. It is shown that all physical effects are confined to spherical shells expanding with the speed of light, and that these shells are imprints of the spacetime events representing emission and absorption of the pulse. Nearby test particles at rest are attracted towards the pulse trajectory by the gravitational field due to the emission of the pulse, and they are repelled from the pulse trajectory by the gravitational field due to its absorption. Examples are given for the size of the attractive effect. It is recovered that massless test particles do not experience any physical effect if they are co-propagating with the pulse, and that the acceleration of massless test particles counter-propagating with respect to the pulse is four times stronger than for massive particles at rest. The similarities between the gravitational effect of a laser pulse and Newtonian gravity in two dimensions are pointed out. The spacetime curvature close to the pulse is compared to that induced by gravitational waves from astronomical sources.
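For orientation, the usual starting point of linearized gravity in harmonic gauge is sketched below; this is the generic textbook setup, and the paper's specific gauge choices and the source term modelling the laser pulse may differ.

    g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
    \qquad
    \Box\, \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^{4}}\, T_{\mu\nu},
    \qquad
    \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu} h,
    \qquad
    \partial^{\mu}\bar{h}_{\mu\nu} = 0 .

The retarded solutions of this wave equation are what confine the physical effects to shells expanding at the speed of light from the emission and absorption events.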
The age at which members of a semantic category are learned (age of acquisition), the typicality they demonstrate within their corresponding category, and the semantic domain to which they belong (living, non-living) are known to influence the speed and accuracy of lexical/semantic processing. So far, only a few studies have looked at the origin of age of acquisition and its interdependence with typicality and semantic domain within the same experimental design. Twenty adult participants performed an animacy decision task in which nouns were classified according to their semantic domain as being living or non-living. Response times were influenced by the independent main effects of each parameter: typicality, age of acquisition, semantic domain, and frequency. However, there were no interactions. The results are discussed with respect to recent models concerning the origin of age of acquisition effects.
Age of acquisition (AOA) is a psycholinguistic variable that significantly influences behavioural measures (response times and accuracy rates) in tasks that require lexical and semantic processing. Unlike the origin of semantic typicality (TYP), which is assumed to lie at the semantic level, the origin of AOA effects remains controversial. Different theories propose that AOA effects originate either at the semantic level or at the link between semantics and phonology (lemma level).
The dissertation aims at investigating the influence of AOA, and its interdependence with the semantic variable TYP, on semantic processing in particular, in order to pinpoint the origin of AOA effects. To this end, three studies were conducted that considered the variables AOA and TYP in semantic processing tasks (category verification and animacy decision) by means of behavioural and, in part, electrophysiological (ERP) data, and in different populations (healthy young and elderly participants as well as semantically impaired individuals with aphasia, IWA).
The behavioural and electrophysiological data of the three studies provide evidence for distinct processing levels of the variables AOA and TYP. The data further support previous assumptions on a semantic origin for TYP but question the same for AOA. The findings, however, support an origin of AOA effects at the transition between the word form (phonology) and the semantic level that can be captured at the behavioural but not at the electrophysiological level.
Background: The engagement in aggressive behavior in middle childhood is linked to the development of severe problems in later life. Thus, identifying factors and processes that contribute to the continuity and increase of aggression in middle childhood is essential in order to facilitate the development of intervention programs. The present PhD thesis aimed at expanding the understanding of the development of aggression in middle childhood by examining risk factors in the intrapersonal and interpersonal domains as well as the interplay between these factors: Maladaptive anger regulation was examined as an intrapersonal risk factor; processes that occur in the peer context (social rejection and peer socialization) were included as interpersonal risk factors. In addition, in order to facilitate the in situ assessment of anger regulation strategies, an observational measure of anger regulation was developed and validated.
Method: The research aims were addressed within the scope of four articles. Data from two measurement time points about ten months apart were available for the analyses. Participants were elementary school children aged from 6 to 10 years at T1 and 7 to 11 years at T2. The first article was based on cross-sectional analyses including only the first time point; in the remaining three articles, longitudinal associations across the two time points were analyzed. The first two articles were concerned with the development and cross-sectional as well as longitudinal validation of an observational measure of anger regulation in middle childhood in a sample of 599 children. Using the same sample, the third article investigated the longitudinal link between maladaptive anger regulation and aggression, considering social rejection as a mediating variable. The frequency as well as different functions of aggression (reactive and proactive) were included as outcome measures. The fourth article examined the influence of class-level aggression on the development of different forms of aggression (relational and physical) over time under consideration of differences in initial individual aggression in a sample of 1,284 children. In addition, it was analyzed whether the path from aggression to social rejection varies as a function of class-level aggression.
Results: The first two articles revealed that the observational measure of anger regulation developed for the purpose of this research was cross-sectionally related to anger reactivity, aggression and social rejection as well as longitudinally related to self-reported anger regulation. In the third article it was found that T1 maladaptive anger regulation showed no direct link to T2 aggression, but an indirect link through T1 social rejection. This indirect link was found for the frequency of aggression as well as for reactive and proactive aggression. The fourth article revealed that, with regard to relational aggression, a high level of classroom aggression predicted an increase in individual aggression only among children with initially low levels of aggression. For physical aggression, it was found that the overall level of aggression in the class affected all children equally. In addition, physical aggression increased the likelihood of social rejection irrespective of the class-level of aggression, whereas relational aggression caused social rejection only in classes with a generally low level of relational aggression. The analyses of gender-specific effects showed that children were mainly influenced by their same-gender peers and that the effect on the opposite gender was higher if children engaged in gender-atypical forms of aggressive behavior.
Conclusion: The results provided evidence for the construct and criterion validity of the observational measure of maladaptive anger regulation that was developed within the scope of this research. Furthermore, the findings indicated that maladaptive anger regulation constitutes an important risk factor for aggression via social rejection. Finally, the results demonstrated that the level of aggression among classmates is relevant for the development of individual aggression over time and that the children's evaluation of relationally aggressive behavior varies as a function of the normativity of relational aggression in the class. The study findings have implications for the measurement of anger regulation in middle childhood as well as for the prevention of aggression and social rejection.
Background
Antiphospholipid antibodies (aPL) can be detected in asymptomatic carriers and infectious patients. The aim was to investigate whether a novel line immunoassay (LIA) differentiates between antiphospholipid syndrome (APS) and asymptomatic aPL+ carriers or patients with infectious diseases (infectious disease controls, IDC).
Methods
Sixty-one patients with APS (56 primary, 22/56 with obstetric events only, and 5 secondary) and 146 controls, including 24 aPL+ asymptomatic carriers and 73 IDC, were tested on a novel hydrophobic solid phase coated with cardiolipin (CL), phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylglycerol, phosphatidylinositol, phosphatidylserine, beta2-glycoprotein I (β2GPI), prothrombin, and annexin V. Samples were also tested by anti-CL and anti-β2GPI ELISAs and for lupus anticoagulant activity. Human monoclonal antibodies (humoAbs) against human β2GPI or PL alone were tested on the same LIA substrates in the absence or presence of human serum or purified human β2GPI, or after CL-micelle absorption.
Results
Comparison of LIA with the aPL-classification assays revealed good agreement for IgG/IgM aβ2GPI and aCL. Anti-CL and anti-β2GPI IgG/IgM reactivity assessed by LIA was significantly higher in patients with APS versus healthy controls and IDCs, as detected by ELISA. IgG binding to CL and β2GPI in the LIA was significantly lower in aPL+ carriers and Venereal Disease Research Laboratory test (VDRL)+ samples than in patients with APS. HumoAb against domain 1 recognized β2GPI bound to the LIA matrix and in anionic phospholipid (PL) complexes. Absorption with CL micelles abolished the reactivity of a PL-specific humoAb but did not affect the binding of anti-β2GPI humoAbs.
Conclusions
The LIA and ELISA have good agreement in detecting aPL in APS, but the LIA differentiates patients with APS from infectious patients and asymptomatic carriers, likely through the exposure of domain 1.
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or the addition of a one-dimensional component, one can in some cases recover Markovianity. We treat exhaustively the case of the bidimensional trigonometric convoluted Brownian motion and the higher-dimensional monomial convoluted Brownian motion.
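As a rough illustration of the type of process meant (a sketch only; the paper's precise definition, in particular the integration range and the periodization, may differ), a convolution of a deterministic kernel \varphi with a Brownian motion W can be written as

    X_t = \int_0^t \varphi(t-s)\, \mathrm{d}W_s ,

so that different choices of \varphi (e.g. exponential, trigonometric or monomial kernels) generate the different members of the class discussed above.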
The aim of this thesis was the elucidation of different ionization methods (resonance-enhanced multiphoton ionization – REMPI, electrospray ionization – ESI, atmospheric pressure chemical ionization – APCI) in ion mobility (IM) spectrometry. In order to gain a better understanding of the ionization processes, several spectroscopic, mass spectrometric and theoretical methods were also used. Another focus was the development of experimental techniques, including a high resolution spectrograph and various combinations of IM and mass spectrometry.
The novel high resolution 2D spectrograph achieves spectroscopic resolutions in the range of commercial echelle spectrographs; the lowest full width at half maximum achieved for a peak was 25 pm. The 2D spectrograph is based on the separation of light by wavelength, using the combination of a prism and a grating in one dimension and an etalon in the second dimension. This instrument was successfully employed for the acquisition of Raman and laser-induced breakdown spectra.
Different spectroscopic methods (light scattering and fluorescence spectroscopy), permitting spatial as well as spectral resolution, were used to investigate the release of ions in the electrospray. The investigation is based on the 50 nm shift of the fluorescence band of rhodamine 6G ions during their transfer from the electrospray droplets to the gas phase.
A newly developed ionization chamber operating at reduced pressure (0.5 mbar) was coupled to a time-of-flight mass spectrometer. After REMPI of H2S, an ionization chemistry analogous to that of H2O was observed with this instrument. Besides H2S+ and its fragments, H3S+ and protonated analyte ions could be observed as a result of proton-transfer reactions.
For the elucidation of the peaks in IM spectra, a combination of an IM spectrometer and a linear quadrupole ion trap mass spectrometer was developed. The instrument can be equipped with various ionization sources (ESI, REMPI, APCI) and was used for the characterization of the peptide bradykinin and the neuroleptic promazine.
The ionization of explosive compounds in an APCI source based on soft X-radiation was investigated in a newly developed ionization chamber attached to the ion trap mass spectrometer. The major primary and secondary reactions could be characterized, and explosive compound ions could be identified and assigned to the peaks in IM spectra. The assignment is based on the comparison of experimentally determined and calculated ion mobilities. The calculation methods currently available exhibit large deviations, especially in the case of anions. Therefore, on the basis of an assessment of the available methods, a novel hybrid method was developed and characterized.
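For context, the quantity linking a calculated collision cross section to a measurable ion mobility is given by the Mason-Schamp equation; the sketch below evaluates that standard relation and does not reproduce the thesis's hybrid cross-section calculation, and the example values in the comment are hypothetical.

    # Sketch: Mason-Schamp relation K = (3 q / 16 N) * sqrt(2*pi / (mu*kB*T)) / Omega
    import math

    K_B = 1.380649e-23          # Boltzmann constant, J/K
    E_CHARGE = 1.602176634e-19  # elementary charge, C

    def mobility_from_ccs(omega_m2, mass_ion_kg, mass_gas_kg, n_gas_m3, temp_k, charge=1):
        """Low-field ion mobility (m^2 V^-1 s^-1) from a collision cross section Omega (m^2)."""
        mu = mass_ion_kg * mass_gas_kg / (mass_ion_kg + mass_gas_kg)   # reduced mass
        return (3 * charge * E_CHARGE / (16 * n_gas_m3)) \
               * math.sqrt(2 * math.pi / (mu * K_B * temp_k)) / omega_m2

    # Example with hypothetical values: a ~227 Da anion in air at ambient conditions,
    # assuming a cross section of 1.5e-18 m^2 (about 150 A^2):
    # print(mobility_from_ccs(1.5e-18, 227 * 1.6605e-27, 28.97 * 1.6605e-27, 2.5e25, 298.0))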