This article investigates a public debate in Germany that put a special spotlight on the interaction of standard language ideologies with social dichotomies, centering on the question of whether Kiezdeutsch, a new way of speaking in multilingual urban neighbourhoods, is a legitimate German dialect. Based on a corpus of emails and postings to media websites, I analyse central topoi in this debate and an underlying narrative on language and identity. Central elements of this narrative are claims of cultural elevation and cultural unity for an idealised standard language 'High German', a view of German dialects as part of a national folk culture, and the construction of an exclusive in-group of 'German' speakers who own this language and its dialects. The narrative provides a potent conceptual frame for the Othering of Kiezdeutsch and its speakers, and for the projection of social and sometimes racist delimitations onto the linguistic plane.
We present results of full 3D hydrodynamical and radiative transfer simulations of the colliding stellar winds in the massive binary system η Carinae. We accomplish this by applying the SimpleX algorithm for 3D radiative transfer on an unstructured Voronoi-Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the binary colliding winds. We use SimpleX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We investigate several computational domain sizes and Luminous Blue Variable primary star mass-loss rates. We furthermore present new methods of visualizing and interacting with output from complex 3D numerical simulations, including 3D interactive graphics and 3D printing. While we initially focus on η Car, the methods employed can be applied to numerous other colliding wind (WR 140, WR 137, WR 19) and dusty 'pinwheel' (WR 104, WR 98a) binary systems. Coupled with 3D hydrodynamical simulations, SimpleX simulations have the potential to help determine the regions where various observed time-variable emission and absorption lines form in these unique objects.
We present 3D numerical simulations of the NGC6888 nebula considering the proper motion and the evolution of the star, from the red supergiant (RSG) to the Wolf-Rayet (WR) phase. Our simulations reproduce the limb-brightened morphology observed in [OIII] and X-ray emission maps. The synthetic maps computed by the numerical simulations show filamentary and clumpy structures produced by instabilities triggered in the interaction between the WR wind and the RSG shell.
The main objective of this work is to investigate the evolution of massive stars, and the interplay between them and the ionized gas, for a sample of local metal-poor Wolf-Rayet galaxies. Optical integral field spectroscopy was used in combination with multi-wavelength radio data. Combining optical and radio data, we locate Wolf-Rayet stars and supernova remnants across the Wolf-Rayet galaxies to study the spatial correlation between them. This study will shed light on massive star formation and its feedback, and will help us to better understand distant star-forming galaxies.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models over the past years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
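As an illustration of the event/non-event decomposition that these confidence ellipses are drawn around, the two NRI components can be sketched as follows (a minimal sketch with hypothetical reclassification data, not the EPIC-Potsdam analysis):

```python
import numpy as np

def nri_components(old_cat, new_cat, event):
    """Event and non-event Net Reclassification Improvement.

    old_cat, new_cat: risk categories (higher = higher risk) under the old
    and the updated prediction model; event: 1 for cases, 0 for controls.
    """
    old_cat, new_cat, event = map(np.asarray, (old_cat, new_cat, event))
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, nonev = event == 1, event == 0
    # Events should move up in risk, non-events should move down.
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[nonev].mean() - up[nonev].mean()
    return nri_events, nri_nonevents

# Hypothetical data: 6 cases and 6 controls reclassified by the new model
old = [1, 1, 2, 2, 1, 2, 2, 2, 1, 2, 1, 1]
new = [2, 2, 2, 1, 2, 2, 1, 1, 1, 1, 1, 2]
y   = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
e, ne = nri_components(old, new, y)
```

A confidence ellipse would then be drawn around the point (NRI_events, NRI_non-events), for instance from a bootstrap estimate of the joint covariance of the two components.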
We analyse whether a stellar atmosphere model computed with the code CMFGEN provides an optimal description of the stellar observations of WR 136 and simultaneously reproduces the nebular observations of NGC 6888, such as the ionization degree, which is modelled with the pyCloudy code. All the observational material available (far and near UV and optical spectra) was used to constrain such models. We found that the stellar temperature T∗, at τ = 20, can be in a range between 70 000 and 110 000 K, but when using the nebula as an additional restriction, we found that the stellar models with T∗ ∼ 70 000 K represent the best solution for both the star and the nebula.
A flexible approach to assess fluorescence decay functions in complex energy transfer systems
(2015)
Background: Time-correlated Förster resonance energy transfer (FRET) probes molecular distances with greater accuracy than intensity-based calculation of FRET efficiency and provides a powerful tool to study biomolecular structure and dynamics. Moreover, time-correlated photon count measurements bear additional information on the variety of donor surroundings, allowing more detailed differentiation between distinct structural geometries which are typically inaccessible to general fitting solutions.
Results: Here we develop a new approach based on Monte Carlo simulations of time-correlated FRET events to estimate the time-correlated single photon counts (TCSPC) histograms in complex systems. This simulation solution assesses the full statistics of time-correlated photon counts and distance distributions of fluorescently labeled biomolecules. The simulations are consistent with the theoretical predictions of the dye behavior in FRET systems with defined dye distances and measurements of randomly distributed dye solutions. We validate the simulation results using a highly heterogeneous aggregation system and explore the conditions to use this tool in complex systems.
Conclusion: This approach is powerful in distinguishing distance distributions in a wide variety of experimental setups, thus providing a versatile tool to accurately distinguish between different structural assemblies in highly complex systems.
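The core idea can be sketched with a toy Monte Carlo: draw dye-dye distances from a distribution, convert each distance into a FRET-quenched decay rate, sample photon arrival times, and bin them into a TCSPC histogram. All parameters below are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_d, r0 = 4.0, 5.0  # donor lifetime (ns) and Förster radius (nm); illustrative values

# Hypothetical Gaussian distance distribution of the labeled biomolecules (nm)
distances = rng.normal(5.0, 0.5, size=100_000)
distances = distances[distances > 0]

# FRET adds a distance-dependent decay channel: k_total = (1 + (R0/r)^6) / tau_D
k_total = (1.0 + (r0 / distances) ** 6) / tau_d

# One simulated donor photon per molecule, exponentially distributed arrival time
arrival = rng.exponential(1.0 / k_total)

# Bin into a TCSPC-like histogram (256 channels over a 25 ns window)
counts, edges = np.histogram(arrival, bins=256, range=(0.0, 25.0))
```

The resulting multi-exponential histogram reflects the full distance distribution rather than a single decay time, which is the statistic the simulation approach exploits.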
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making.
Using an illustrative future land-use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
Background: Medical training is very demanding and associated with a high prevalence of psychological distress. Compared to the general population, medical students are at greater risk of developing a psychological disorder. Various attempts at stress-management training in medical school have achieved positive results in minimizing psychological distress; however, these attempts often have limitations. Therefore, the use of a rigorous scientific method is needed. The present study protocol describes a randomized controlled trial to examine the effectiveness of a specifically developed mindfulness-based stress prevention training for medical students that includes selected elements of cognitive behavioral strategies (MediMind).
Methods/Design: This study protocol presents a prospective randomized controlled trial involving four assessment time points: baseline, post-intervention, one-year follow-up and five-year follow-up. The aims include evaluating the effect on stress, coping, psychological morbidity and personality traits with validated measures. Participants are allocated randomly to one of three conditions: MediMind, Autogenic Training or control group. Eligible participants are medical or dental students in the second or eighth semester of a German university, forming a population of approximately 420 students in each academic term. A final total sample size of 126 (at five-year follow-up) is targeted. The trainings (MediMind and Autogenic Training) comprise five weekly sessions lasting 90 minutes each. MediMind will be offered to participants of the control group once the five-year follow-up is completed. Allocation is randomized and stratified by course of study, semester, and gender. After descriptive statistics have been evaluated, inferential statistical analysis will be carried out with a repeated-measures ANOVA design with interactions between time and group. Effect sizes will be calculated using partial η-squared values.
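The planned effect size has a simple closed form; as a minimal sketch (the sums of squares below are hypothetical placeholders, not study results):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta-squared: effect sum of squares relative to effect + error."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares for a time-by-group interaction
eta_p2 = partial_eta_squared(42.0, 378.0)  # -> 0.1
```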
Discussion: Potential limitations of this study are voluntary participation and the risk of attrition, especially concerning participants that are allocated to the control group. Strengths are the study design, namely random allocation, follow-up assessment, the use of control groups and inclusion of participants at different stages of medical training with the possibility of differential analysis.
A multi-reference study of the byproduct formation for a ring-closed dithienylethene photoswitch
(2015)
Photodriven molecular switches are sometimes hindered in their performance by the formation of byproducts which act as dead ends in sequences of switching cycles, leading to rapid fatigue effects. Understanding the reaction pathways to unwanted byproducts is a prerequisite for preventing them. This article presents a study of the photochemical reaction pathways for byproduct formation in the photochromic switch 1,2-bis-(3-thienyl)-ethene. Specifically, using single- and multi-reference methods, the post-deexcitation reaction towards the byproduct in the electronic ground state S0, starting from the S1–S0 conical intersection (CoIn), is considered in detail. We find an unusual low-energy pathway, which offers the possibility for the formation of a dyotropic byproduct. Several high-energy pathways can be excluded with high probability.
Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various application situations. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure seems especially attractive when using target substances featuring high costs or low available amounts. Here, the AFM micropipette could be filled from the tip side with liquid from a previously applied droplet with a volume of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied for experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described with which the AFM micropipettes could be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated usage of a single AFM micropipette is beneficial for various application situations, from specific surface modifications through to local manipulation of living cells, and provides simplified and faster handling for already known experiments with fluid force microscopy.
We suggest several ideas which, when combined, could lead to a new mechanism for long-term pulsations of very hot and luminous stars. These involve the interplay between convection, radiation, atmospheric clumping and winds, which collectively feed back to stellar expansion and contraction. We discuss these ideas and point out the future work required in order to fill in the blanks.
We consider a Cauchy problem for the heat equation in a cylinder X × (0, T) over a domain X in n-dimensional space, with data on a strip lying on the lateral surface. The strip is of the form S × (0, T), where S is an open subset of the boundary of X. The problem is ill-posed. Under natural restrictions on the configuration of S, we derive an explicit formula for solutions of this problem.
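In standard notation, one common reading of the problem described above is the following (our formalization; f and g denote the Cauchy data prescribed on the strip):

```latex
\begin{aligned}
(\partial_t - \Delta)\, u &= 0 && \text{in } X \times (0,T),\\
u &= f && \text{on } S \times (0,T),\\
\partial_\nu u &= g && \text{on } S \times (0,T),
\end{aligned}
```

with S an open subset of ∂X. Prescribing both u and its normal derivative on only part of the lateral boundary is what makes the problem ill-posed in the sense of Hadamard.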
We present the first physical characterization of the young open cluster VVVCL041. We spectroscopically observed the cluster main-sequence stellar population and a very-massive star candidate: WR62-2. CMFGEN modelling of our near-infrared spectra indicates that WR62-2 is a very luminous (10^{6.4±0.2} L⊙) and massive (∼80 M⊙) star.
Let A be a nonlinear differential operator on an open set X in R^n and S a closed subset of X. Given a class F of functions in X, the set S is said to be removable for F relative to A if any weak solution of A (u) = 0 in the complement of S of class F satisfies this equation weakly in all of X. For the most extensively studied classes F we show conditions on S which guarantee that S is removable for F relative to A.
The size and morphology control of precipitated solid particles is a major economic issue for numerous industries. For instance, it is of interest to the nuclear industry with regard to the recovery of radioactive species from used nuclear fuel.
The features of the precipitates, which are a key parameter for post-precipitation processing, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first corresponds to high-contact-angle drops (>90°) in oil, as a model system for flowing drops; the second corresponds to sessile drops in air with a low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When the two drops come into contact, they may coalesce; the dissolved species mix and react to produce insoluble cerium oxalate. The precipitate features and their effect on hydrodynamics are investigated depending on the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient which induces a Marangoni flow from the low-surface-tension drop over the high-surface-tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence can be modified. Diol/water mixtures are used as solvents in order to fix the surface tension difference between the liquids of the two drops independently of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m for identical reactant concentration, density and viscosity. Three precipitation behaviours were identified for the coalescence of water/diol/reactant drops, depending on the oxalic excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements.
In the intermediate oxalic excess regime, the formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and "microflowers". Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
Abstract gringo
(2015)
This paper defines the syntax and semantics of the input language of the ASP grounder gringo. The definition covers several constructs that were not discussed in earlier work on the semantics of that language, including intervals, pools, division of integers, aggregates with non-numeric values, and lparse-style aggregate expressions. The definition is abstract in the sense that it disregards some details related to representing programs by strings of ASCII characters. It serves as a specification for gringo from Version 4.5 on.
We present results from our near-infrared spectroscopy with VLT/ISAAC of four massive eclipsing binary systems in the young, heavily reddened, massive Danks clusters. We derive accurate fundamental parameters and the distance to these massive systems, which comprise OIf+, WR and O-type stars. Our goal is to increase the sample of well-studied WR stars and constrain their physics by comparison with evolutionary models.
The assumed comparable environmental conditions of early Mars and early Earth around 3.7 Ga ago – the time from which the first fossil records of life on Earth are found – suggest the possibility of life emerging on both planets in parallel. As conditions changed, the hypothetical life on Mars either became extinct or was able to adapt and might still exist in biological niches. The controversially discussed detection of methane on Mars led to the assumption that it must have a recent origin – either abiotic, through active volcanism or chemical processes, or biogenic. Spatial and seasonal variations in the detected methane concentrations, and correlations between the presence of water vapor and geological features such as subsurface hydrogen occurring together with locally increased methane concentrations, gave fuel to the hypothesis of a possible biological source of the methane on Mars.
Therefore, the phylogenetically old methanogenic archaea, which evolved under early Earth conditions, are often used as model organisms in astrobiological studies to investigate the potential of life to exist in possible extraterrestrial habitats on our neighboring planet. In this thesis, methanogenic archaea originating from two extreme environments on Earth were investigated to test their ability to be active under simulated Mars analog conditions. These extreme environments – Siberian permafrost-affected soil and the chemoautotrophically based terrestrial ecosystem of Movile Cave, Romania – are regarded as analogs for possible Martian (subsurface) habitats. Two novel species of methanogenic archaea isolated from these environments were described within the frame of this thesis.
It could be shown that concentrations of up to 1 wt% of Mars regolith analogs added to the growth media had a positive influence on the methane production rates of the tested methanogenic archaea, whereas higher concentrations resulted in decreasing rates. Nevertheless, the organisms were able to metabolize when incubated on water-saturated soil matrices made of Mars regolith analogs without any additional nutrients. Long-term desiccation resistance of more than 400 days was proven by reincubation and by indirect counting of viable cells through a combined treatment with propidium monoazide (to inactivate the DNA of destroyed cells) and quantitative PCR. Phyllosilicate-rich regolith analogs seem to be the best soil mixtures for the tested methanogenic archaea to be active under Mars analog conditions. Furthermore, in a simulation chamber experiment, the activity of the permafrost methanogen strain Methanosarcina soligelidi SMA-21 under Mars subsurface analog conditions could be proven. Through real-time wavelength modulation spectroscopy measurements, the increase in methane concentration could be detected at temperatures down to -5 °C.
The results presented in this thesis contribute to the understanding of the activity potential of methanogenic archaea under Mars analog conditions and thereby provide insights into the possible habitability of present-day (near-)subsurface environments on Mars. They will also aid the data interpretation of future life detection missions to that planet, for example the ExoMars mission of the European Space Agency (ESA) and Roscosmos, which is planned for launch in 2018 and aims to drill into the Martian subsurface.
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly-electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations regarding its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After having established a set of reference transcriptomes using "Next-Generation Sequencing" (NGS) technologies, I performed in silico analyses of differential expression in order to identify sets of genes that might be responsible for the functional differences observed between these two kinds of tissue. The results of these analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ in order to increase excitability; and iv) several myogenic factors might be down-regulated by transcription repressors in the electric organ.
A secondary aim of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods in order to test the hypothesis that putatively cryptic species, which are distinguishable only by their EOD, belong to independently evolving lineages. The results of this analysis were additionally validated by investigating patterns of diversification at 16 microsatellite loci. The results suggest the presence of a new, as yet undescribed species of Campylomormyrus.
When it comes to footnotes, Alexander von Humboldt was ahead of his time, even though his references leave much to be desired by today's academic standards. This article examines the footnotes of Humboldt's Essai politique sur l'île de Cuba (1826). While it is not always easy to decipher his sometimes cryptic references, the undertaking is worthwhile: Humboldt's footnotes not only reveal his vast networks of knowledge; they also provide glimpses of ongoing contemporary disputes among scholars that involved Humboldt's writings, and they record his reactions to such disputes. Exploring Humboldt's footnotes consequently allows the reader to access both Humboldt the scholar and Humboldt the human being.
Inventories of individually delineated landslides are a key to understanding landslide physics and mitigating their impact. They permit assessment of area–frequency distributions and landslide volumes, and testing of statistical correlations between landslides and physical parameters such as topographic gradient or seismic strong motion. Amalgamation, i.e. the mapping of several adjacent landslides as a single polygon, can lead to potentially severe distortion of the statistics of these inventories. This problem can be especially severe in data sets produced by automated mapping. We present five inventories of earthquake-induced landslides mapped with different materials and techniques and affected by varying degrees of amalgamation. Errors on the total landslide volume and the power-law exponent of the area–frequency distribution, resulting from amalgamation, may be up to 200% and 50%, respectively. We present an algorithm, based on image and digital elevation model (DEM) analysis, for automatic identification of amalgamated polygons. On a set of about 2000 polygons larger than 1000 m², tracing landslides triggered by the 1994 Northridge earthquake, the algorithm performs well, with only 2.7–3.6% of incorrectly amalgamated landslides missed and 3.9–4.8% of correct polygons incorrectly identified as amalgams. This algorithm can be used broadly to check landslide inventories and allows faster correction by automating the identification of amalgamation.
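The sensitivity of the fitted power-law exponent to amalgamation can be illustrated with a toy experiment: synthetic Pareto-distributed areas, a naive pairwise-merging model of amalgamation, and the standard maximum-likelihood exponent estimator (this is a sketch with invented numbers, not the Northridge data):

```python
import numpy as np

def powerlaw_exponent(areas, a_min):
    """Maximum-likelihood power-law exponent for areas >= a_min
    (Clauset-style estimator: alpha = 1 + n / sum(ln(a / a_min)))."""
    a = np.asarray(areas, dtype=float)
    a = a[a >= a_min]
    return 1.0 + a.size / np.log(a / a_min).sum()

rng = np.random.default_rng(1)
a_min = 1000.0      # m^2 cutoff, as for the Northridge polygon set
true_alpha = 2.4    # illustrative area-frequency exponent

# Pareto sample via inverse-CDF: P(A > a) = (a / a_min)^-(alpha - 1)
areas = a_min * (1.0 - rng.random(5000)) ** (-1.0 / (true_alpha - 1.0))

# Toy amalgamation: merge adjacent landslide pairs into single polygons
amalgamated = areas[::2] + areas[1::2]

alpha_clean = powerlaw_exponent(areas, a_min)
alpha_merged = powerlaw_exponent(amalgamated, a_min)
```

Merging shifts mass toward larger areas, flattening the distribution and biasing the fitted exponent low, which is the kind of distortion the inventory-checking algorithm is meant to prevent.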
Sentences with doubly center-embedded relative clauses in which a verb phrase (VP) is missing are sometimes perceived as grammatical, thus giving rise to an illusion of grammaticality. In this paper, we provide a new account of why missing-VP sentences, which are both complex and ungrammatical, lead to an illusion of grammaticality, the so-called missing-VP effect. We propose that the missing-VP effect in particular, and processing difficulties with multiply center-embedded clauses more generally, are best understood as resulting from interference during cue-based retrieval. When processing a sentence with double center-embedding, a retrieval error due to interference can cause the verb of an embedded clause to be erroneously attached into a higher clause. This can lead to an illusion of grammaticality in the case of missing-VP sentences and to processing complexity in the case of complete sentences with double center-embedding. Evidence for an interference account of the missing-VP effect comes from experiments that have investigated the missing-VP effect in German using a speeded grammaticality judgment procedure. We review this evidence and then present two new experiments showing that the missing-VP effect can also be found in German with less restrictive procedures. One experiment was a questionnaire study which required grammaticality judgments from participants without imposing any time constraints. The second experiment used a self-paced reading procedure and did not require any judgments. Both experiments confirm the prior findings of missing-VP effects in German and also show that the missing-VP effect is subject to a primacy effect as known from the memory literature. Based on this evidence, we argue that an account of missing-VP effects in terms of interference during cue-based retrieval is superior to accounts in terms of limited memory resources or in terms of experience with embedded structures.
New V-shaped non-centrosymmetric dyes, possessing a strongly electron-deficient azacyanine core, have been synthesized based on a straightforward two-step approach. The key step in this synthesis involves palladium-catalysed cross-coupling of dibromo-N,N′-methylene-2,2′-azapyridinocyanines with arylacetylenes. The resulting strongly polarized π-expanded heterocycles exhibit green to orange fluorescence and strongly respond to changes in solvent polarity. We demonstrate that differently electron-donating peripheral groups have a significant influence on the internal charge transfer, hence on the solvent effect and the fluorescence quantum yield. TD-DFT calculations confirm that, in contrast to the previously studied bis(styryl)azacyanines, the proximity of the S1 and T2 states calculated for compounds bearing two 4-N,N-dimethylaminophenylethynyl moieties establishes good conditions for efficient intersystem crossing and is responsible for their low fluorescence quantum yield. Non-linear properties have also been determined for the new azacyanines, and the results show that, depending on the peripheral groups, the synthesized dyes exhibit small to large two-photon absorption cross sections, reaching 4000 GM.
In a recent BAMS article, it is argued that community-based Open Source Software (OSS) could foster scientific progress in weather radar research, and make weather radar software more affordable, flexible, transparent, sustainable, and interoperable.
Nevertheless, it can be challenging for potential developers and users to realize these benefits: tools are often cumbersome to install; different operating systems may have particular issues, or may not be supported at all; and many tools have steep learning curves.
To overcome some of these barriers, we present an open, community-based virtual machine (VM). This VM can be run on any operating system, and guarantees reproducibility of results across platforms. It contains a suite of independent OSS weather radar tools (BALTRAD, Py-ART, wradlib, RSL, and Radx), and a scientific Python stack. Furthermore, it features a suite of recipes that work out of the box and provide guidance on how to use the different OSS tools alone and together. The code to build the VM from source is hosted on GitHub, which allows the VM to grow with its community.
We argue that the VM presents another step toward Open (Weather Radar) Science. It can be used as a quick way to get started, for teaching, or for benchmarking and combining different tools. It can foster the idea of reproducible research in scientific publishing. Being scalable and extendable, it might even allow for real-time data processing.
We expect the VM to catalyze progress toward interoperability, and to lower the barrier for new users and developers, thus extending the weather radar community and user base.
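As a flavour of the kind of recipe the VM bundles, the following minimal Python sketch converts radar reflectivity (dBZ) to rain rate by inverting the standard Marshall-Palmer Z-R relation. The coefficients and input values are illustrative only; the actual VM recipes rely on the bundled OSS tools (e.g. wradlib or Py-ART) rather than a hand-rolled function like this.

```python
def dbz_to_rainrate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) by inverting
    the Marshall-Palmer Z-R relation Z = a * R**b (coefficients are the
    textbook defaults, not tuned values)."""
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity Z (mm^6/m^3)
    return (z / a) ** (1.0 / b)    # R = (Z / a)**(1/b)

# Higher reflectivity maps to a higher rain rate:
print([round(dbz_to_rainrate(d), 2) for d in (20.0, 35.0, 50.0)])
```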
NaYF4:Yb:Er nanoparticles (UCNP) were synthesized under mild experimental conditions to obtain a pure cubic lattice. Upon annealing at different temperatures up to Tan = 700 °C, phase transitions to the hexagonal phase and back to the cubic phase were induced. The UCNP materials obtained for different Tan were characterized with respect to the lattice phase using standard XRD and Raman spectroscopy as well as steady state and time resolved upconversion luminescence. The standard techniques showed that for the annealing temperature range 300 °C < Tan < 600 °C the hexagonal lattice phase was dominant. For Tan < 300 °C hardly any change in the lattice phase could be deduced, whereas for Tan > 600 °C a back transfer to the α-phase was observed. Complementarily, the luminescence upconversion properties of the annealed UCNP materials were characterized in steady state and time resolved luminescence measurements. Distinct differences in the upconversion luminescence intensity, the spectral intensity distribution and the luminescence decay kinetics were found for the cubic and hexagonal lattice phases, corroborating the results of the standard analytical techniques. In laser power dependent measurements of the upconversion luminescence intensity, it was found that the green (G1, G2) and red (R) emission of Er3+ showed different effects of Tan on the number of required photons, reflecting the differences in the population routes of the different energy levels involved. Furthermore, the intensity ratio Gfull/R is strongly affected by the laser power only when the β-phase is present, whereas the G1/G2 intensity ratio is only slightly affected regardless of the crystal phase. Moreover, based on the different upconversion luminescence kinetics characteristic of the cubic and hexagonal phases, time-resolved area normalized emission spectra (TRANES) proved to be a very sensitive tool to monitor the phase transition between the cubic and hexagonal phases.
Based on the TRANES analysis it was possible to resolve the lattice phase transition in more detail for 200 °C < Tan < 300 °C, which was not possible with the standard techniques.
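The TRANES construction itself is simple: each time-gated emission spectrum is rescaled to equal area, so that crossing (isoemissive) points between the normalized spectra reveal the presence of distinct emitting species. A minimal sketch (function name, toy spectra, and the unit wavelength grid are illustrative assumptions, not the authors' implementation):

```python
def tranes(spectra):
    """Time-Resolved Area Normalized Emission Spectra: scale every
    time-gated spectrum to unit area (rectangle rule on a common,
    evenly spaced wavelength grid).  Crossing points between the
    normalized spectra then indicate distinct emitting species,
    e.g. emitters in the cubic versus the hexagonal lattice phase."""
    return [[v / sum(s) for v in s] for s in spectra]

# Two toy spectra recorded at different delay times:
early, late = [1.0, 2.0, 3.0], [4.0, 4.0, 4.0]
print(tranes([early, late]))
```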
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, which include aftershock triggering as well as induced seismicity, are important for seismic hazard assessment because of the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of aftershocks with respect to their mainshocks, the spatial correlation between aftershock occurrence and fault deformation, and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for both background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences as well.
In the case of seismic moment release of aftershocks, I focus on the ratio of their cumulative seismic moment release to that of the mainshocks. Specifically, I investigate the ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude, which represents the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock magnitude and the largest aftershock magnitude). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
To analyze spatial variations of triggering parameters, my second approach focuses on the aftershock occurrence triggered by large mainshocks and on the distribution of aftershock parameters and their spatial correlation with coseismic/postseismic slip and interseismic locking. To estimate the aftershock parameters, I improve the modified ETAS (m-ETAS) model, which is able to take the spatial extent of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might not only be related to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimation. Thus, my coauthors and I also apply a modification of the ETAS model that takes time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forces.
An essential part of any research is testing the developed models on observational data sets appropriate to the particular study case. In the case of seismic moment release, I therefore use a global seismicity catalog. For the spatial distribution of triggering parameters, I exploit the aftershock sequences of the Mw 8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models from different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) The cumulative seismic moment of aftershocks relative to their mainshocks depends on the static stress changes; it is maximal for normal, intermediate for thrust, and minimal for strike-slip stress regimes, and the RETAS model corresponds well with these observations;
(2) The spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling and b-values. Aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and interseismically locked areas. These correlations might be related to the stress level or to spatial variations in material properties;
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying trigger parameters. When a catalog is complete, this method helps to identify aseismic sources.
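The triggering model at the core of this work can be sketched compactly. In a simplified ETAS-style conditional intensity, the event rate is a constant background plus an Omori-Utsu contribution from every earlier event; the magnitude-dependent productivity term of the full ETAS model is omitted here, and the parameter values are purely illustrative, not those estimated in the thesis.

```python
def etas_rate(t, event_times, mu=0.1, K=0.5, c=0.01, p=1.2):
    """Simplified ETAS conditional intensity: background rate mu plus an
    Omori-Utsu term K/(t - ti + c)**p for every event at time ti < t.
    The magnitude-dependent productivity factor of the full ETAS model
    is omitted, and the parameter values are illustrative."""
    return mu + sum(K / (t - ti + c) ** p for ti in event_times if ti < t)

# The aftershock rate decays towards the background after a mainshock at t = 0:
print(etas_rate(0.1, [0.0]), etas_rate(10.0, [0.0]), etas_rate(10.0, []))
```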
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others’ attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants’ own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one’s own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
Are individual differences in reading speed related to extrafoveal visual acuity and crowding?
(2015)
Readers differ considerably in their speed of self-paced reading. One factor known to influence fixation durations in reading is the preprocessing of words in parafoveal vision. Here we investigated whether individual differences in reading speed or in the amount of information extracted from upcoming words (the preview benefit) can be explained by basic differences in extrafoveal vision, i.e., the ability to recognize peripheral letters with or without the presence of flanking letters. Forty participants were given an adaptive test to determine their eccentricity thresholds for the identification of letters presented either in isolation (extrafoveal acuity) or flanked by other letters (crowded letter recognition). In a separate eye-tracking experiment, the same participants read lists of words from left to right, while the preview of the upcoming words was manipulated with the gaze-contingent moving window technique. Relationships between dependent measures were analyzed at the observational level and with linear mixed models. We obtained highly reliable estimates both for extrafoveal letter identification (acuity and crowding) and for measures of reading speed (overall reading speed, size of preview benefit). Reading speed was higher in participants with larger uncrowded windows. However, the strength of this relationship was moderate, and it was only observed when other sources of variance in reading speed (e.g., the occurrence of regressive saccades) were eliminated. Moreover, the size of the preview benefit, an important factor in normal reading, was larger in participants with better extrafoveal acuity. Together, these results indicate a significant albeit moderate contribution of extrafoveal vision to individual differences in reading speed.
aspeed
(2015)
Although Boolean constraint technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances and to depend strongly on algorithm parameters. This problem was addressed by a simple yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, non-uniform and ordered, solver schedules from existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. The ease of customization also enabled us to swiftly adapt our approach to generate parallel schedules for multi-processor machines.
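The schedule-optimization problem itself is easy to state: given measured runtimes of each solver on each instance and a total time budget, find the per-solver time allocation that solves the most instances. aspeed solves this (and the ordering of the slices) with ASP encodings; the toy Python brute force below only illustrates the underlying objective, with made-up runtime data.

```python
from itertools import product

def best_schedule(runtimes, budget, step=1):
    """Exhaustively search per-solver time allocations (granularity `step`)
    and return (solved_count, allocation) maximizing the number of instances
    solved within `budget`.  runtimes[s][i] is solver s's runtime on
    instance i.  Brute force stands in for aspeed's ASP optimization."""
    n_solvers, n_instances = len(runtimes), len(runtimes[0])
    best = (-1, None)
    for alloc in product(range(0, budget + 1, step), repeat=n_solvers):
        if sum(alloc) > budget:
            continue  # allocation exceeds the total time budget
        solved = sum(
            any(runtimes[s][i] <= alloc[s] for s in range(n_solvers))
            for i in range(n_instances)
        )
        best = max(best, (solved, alloc))
    return best

# Two solvers, three instances, a 10-second budget (invented runtimes):
print(best_schedule([[3, 100, 8], [100, 4, 100]], 10))
```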
An observational measure of anger regulation in middle childhood was developed that facilitated the in situ assessment of five maladaptive regulation strategies in response to an anger-eliciting task. 599 children aged 6-10 years (M = 8.12, SD = 0.92) participated in the study. Construct validity of the measure was examined through correlations with parent- and self-reports of anger regulation and anger reactivity. Criterion validity was established through links with teacher-rated aggression and social rejection measured by parent-, teacher-, and self-reports. The observational measure correlated significantly with parent- and self-reports of anger reactivity, whereas it was unrelated to parent- and self-reports of anger regulation. It also made a unique contribution to predicting aggression and social rejection.
In this paper we describe the current state of our research project concerning computer science teachers’ knowledge of students’ cognition. We conducted a comprehensive analysis of textbooks, curricula and other resources that give teachers guidance in formulating assignments. In comparison to other subjects, only a few concepts and strategies are taught to prospective computer science teachers at university. We summarize them and give an overview of our empirical approach to measuring this knowledge.
Background
Previous literature mainly introduced cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information processing capacities. In this study, we enlarge existing literature and investigate whether leg muscular capacity plays an additional role in children’s dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7–9 years) walk in single task (ST) and while concurrently conducting an arithmetic subtraction task (DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decrease their gait velocity (f = 0.73), stride length (f = 0.62) and cadence (f = 0.68) and increase the variability thereof (f = 0.20-0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and its variability during DT (R2 = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass were not observed for ST (R2 = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacities and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by inducing enhanced gait stability.
The results of streamflow trend studies are often characterized by mostly insignificant trends and inexplicable spatial patterns. In our study region, Western Austria, this applies especially to trends of annually averaged runoff. However, analysing the altitudinal aspect, we found that there is a trend gradient from higher-altitude to lower-altitude stations, i.e. a pattern of mostly positive annual trends at higher stations and negative ones at lower stations. At mid-altitudes, the trends are mostly insignificant. Here we hypothesize that the streamflow trends are caused by the following two main processes: on the one hand, melting glaciers produce excess runoff at higher-altitude watersheds. On the other hand, rising temperatures potentially alter hydrological conditions in terms of less snowfall, higher infiltration, enhanced evapotranspiration, etc., which in turn results in decreasing streamflow trends at lower-altitude watersheds. However, these patterns are masked at mid-altitudes because the resulting positive and negative trends balance each other. To support these hypotheses, we attempted to attribute the detected trends to specific causes. For this purpose, we analysed trends of filtered daily streamflow data, as the causes for these changes might be restricted to a smaller temporal scale than the annual one. This allowed for the explicit determination of the exact days of year (DOYs) when certain streamflow trends emerge, which were then linked with the corresponding DOYs of the trends and characteristic dates of other observed variables, e.g. the average DOY when temperature crosses the freezing point in spring. Based on these analyses, an empirical statistical model was derived that was able to simulate daily streamflow trends sufficiently well. Analyses of subdaily streamflow changes provided additional insights.
Finally, the present study supports many modelling approaches in the literature which found that the main drivers of alpine streamflow changes are increased glacial melt, earlier snowmelt and lower snow accumulation in wintertime.
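The per-DOY trend estimation described above can be illustrated with a minimal least-squares slope: for each day of year, the filtered streamflow values are regressed against the year. This is a generic sketch with invented numbers, not the study's exact trend test.

```python
def doy_trend(years, values):
    """Ordinary least-squares slope of `values` against `years`.
    Applied separately for each day of year, this yields a daily
    streamflow trend series (illustrative sketch only)."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Runoff for one DOY over five years, rising by 2 units per year:
print(doy_trend([2001, 2002, 2003, 2004, 2005], [10, 12, 14, 16, 18]))  # prints 2.0
```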
Babelsberg/RML
(2015)
New programming language designs are often evaluated on concrete implementations. However, in order to draw conclusions about the language design from the evaluation of concrete programming languages, these implementations need to be verified against the formalism of the design. Beyond that, we also have to ensure that the design actually meets its stated goals. A useful tool for the latter has been to create an executable semantics from the formalism that can run a test suite of examples. However, this mechanism has so far not allowed an implementation to be verified against the design.
Babelsberg is a new design for a family of object-constraint languages. Recently, we have developed a formal semantics to clarify some issues in the design of those languages. Supplementing this work, we report here on how this formalism is turned into an executable operational semantics using the RML system. Furthermore, we show how we extended the executable semantics to create a framework that can generate test suites for the concrete Babelsberg implementations that provide traceability from the design to the language. Finally, we discuss how these test suites helped us find and correct mistakes in the Babelsberg implementation for JavaScript.
Catalytic amounts of a weak base are sufficient to induce the decomposition of anthracene endoperoxides to anthraquinone. The mechanism has been elucidated by isolation of intermediates in combination with DFT calculations. The whole process is suitable for the convenient generation of hydrogen peroxide under very mild conditions.
The transmission of wildlife zoonoses to humans depends, among other factors, on complex interactions of host population ecology and pathogen dynamics within host populations. In Europe, the Puumala virus (PUUV) causes nephropathia epidemica in humans. In this study we investigated complex interrelations within the epidemic system of PUUV and its rodent host, the bank vole (Myodes glareolus). We suggest that beech fructification and bank vole abundance are both decisive factors affecting human PUUV infections. While rodent host dynamics are expected to be directly linked to human PUUV infections, beech fructification is a rather indirect predictor, serving as a food source for PUUV rodent hosts. Furthermore, we examined the dependence of bank vole abundance on beech fructification. We analysed a 12-year (2001-2012) time series of the parameters beech fructification (as food resource for the PUUV host), bank vole abundance and human incidence from 7 Federal States of Germany. For the first time, we could show the direct interrelation between these three parameters involved in human PUUV epidemics, and we were able to demonstrate on a large scale that human PUUV infections are highly correlated with bank vole abundance in the present year as well as with beech fructification in the previous year. By using beech fructification and bank vole abundance as predictors in one model, we significantly improved the degree of explanation of human PUUV incidence. Federal State was included as a random factor because human PUUV incidence varies considerably among states. Surprisingly, the effect of rodent abundance on human PUUV infections is less strong than the indirect effect of beech fructification. Our findings are useful for the development of predictive models for host population dynamics and the related PUUV infection risk for humans and can be used for plant protection and human health protection purposes.
Extract: Topics in psycholinguistics and the neurocognition of language rarely attract the attention of journalists or the general public. One topic that has done so, however, is the potential benefits of bilingualism for general cognitive functioning and development, and as a precaution against cognitive decline in old age. Sensational claims have been made in the public domain, mostly by journalists and politicians. Recently (September 4, 2014) The Guardian reported that “learning a foreign language can increase the size of your brain”, and Michael Gove, the UK's previous Education Secretary, noted in an interview with The Guardian (September 30, 2011) that “learning languages makes you smarter”. The present issue of BLC addresses these topics by providing a state-of-the-art overview of theoretical and experimental research on the role of bilingualism for cognition in children and adults.
The cell surface of cyanobacteria is covered with glycans that confer versatility and adaptability to a multitude of environmental factors. The complex carbohydrates act as barriers against different types of stress and play a role in intra- as well as inter-species interactions. In this review, we summarize the current knowledge of the chemical composition, biosynthesis and biological function of exo- and lipo-polysaccharides from cyanobacteria and give an overview of sugar-binding lectins characterized from cyanobacteria. We discuss similarities with well-studied enterobacterial systems and highlight the unique features of cyanobacteria. We pay special attention to colony formation and EPS biosynthesis in the bloom-forming cyanobacterium, Microcystis aeruginosa.
Double cyclization of short linear peptides obtained by solid phase peptide synthesis was used to prepare bridged bicyclic peptides (BBPs) corresponding to the topology of bridged bicyclic alkanes such as norbornane. Diastereomeric norbornapeptides were investigated by 1H-NMR, X-ray crystallography and CD spectroscopy and found to represent rigid globular scaffolds stabilized by intramolecular backbone hydrogen bonds with scaffold geometries determined by the chirality of amino acid residues and sharing structural features of β-turns and α-helices. Proteome profiling by capture compound mass spectrometry (CCMS) led to the discovery of the norbornapeptide 27c binding selectively to calmodulin as an example of a BBP protein binder. This and other BBPs showed high stability towards proteolytic degradation in serum.
Brief communication
(2015)
Accelerating climate change and increased economic and environmental interests in permafrost-affected regions have resulted in an acute need for more directed permafrost research. In June 2014, 88 early career researchers convened to identify future priorities for permafrost research. This multidisciplinary forum concluded that five research topics deserve greatest attention: permafrost landscape dynamics, permafrost thermal modeling, integration of traditional knowledge, spatial distribution of ground ice, and engineering issues. These topics underline the need for integrated research across a spectrum of permafrost-related domains and constitute a contribution to the Third International Conference on Arctic Research Planning (ICARP III).
BugHunt
(2015)
Competencies related to operating systems and computer security are usually taught systematically. In this paper we present a different approach, in which students have to remove virus-like behaviour from their respective computers, induced by software developed for this purpose. They have to develop appropriate problem-solving strategies and thereby explore essential elements of the operating system. The approach was implemented exemplarily in two computer science courses at a regional general upper secondary school and elicited great motivation and interest among the participating students.
Faunal remains from Palaeolithic sites are important genetic sources to study preglacial and postglacial populations and to investigate the effect of climate change and human impact. Post mortem decay, resulting in fragmented and chemically modified DNA, is a key obstacle in ancient DNA analyses. In the absence of reliable methods to determine the presence of endogenous DNA in sub-fossil samples, temporal and spatial surveys of DNA survival on a regional scale may help to estimate the potential of faunal remains from a given time period and region. We therefore investigated PCR amplification success, PCR performance and post mortem damage in c. 47,000 to c. 12,000-year-old horse remains from 14 Palaeolithic sites along the Swiss Jura Mountains in relation to depositional context, tissue type, storage time and age, potentially influencing DNA preservation. The targeted 75 base pair mitochondrial DNA fragment could be amplified solely from equid remains from caves and not from any of the open dry and (temporary) wetland sites. Whether teeth are better than bones cannot be ultimately decided; however, both storage time after excavation and age significantly affect PCR amplification and performance, albeit not in a linear way. This is best explained by the—inevitable—heterogeneity of the data set. The extent of post mortem damage is not related to any of the potential impact factors. The results encourage comprehensive investigations of Palaeolithic cave sites, even from temperate regions.
Business Process Management has become an integral part of modern organizations in the private and public sector for improving their operations. In the course of Business Process Management efforts, companies and organizations assemble large process model repositories with many hundreds and thousands of business process models bearing a large amount of information. With the advent of large business process model collections, new challenges arise, such as structuring and managing a large number of process models, maintaining them, and assuring their quality.
This is covered by business process architectures, which have been introduced for organizing and structuring business process model collections. A variety of business process architecture approaches have been proposed that align business processes along aspects of interest, e.g., goals, functions, or objects. They provide a high-level categorization of single processes while ignoring their interdependencies, thus hiding valuable information. The production of goods or the delivery of services is often realized by a complex system of interdependent business processes. Hence, taking a holistic view of business process interdependencies becomes a major necessity for organizing, analyzing, and assessing the impact of their re-/design. Visualizing business process interdependencies reveals hidden and implicit information from a process model collection.
In this thesis, we present a novel Business Process Architecture approach for representing and analyzing business process interdependencies on an abstract level. We propose a formal definition of our Business Process Architecture approach, design correctness criteria, and develop analysis techniques for assessing their quality. We describe a methodology for applying our Business Process Architecture approach top-down and bottom-up. This includes techniques for extracting a Business Process Architecture from process models and decomposing it to process models while considering consistency between the business process architecture and process model levels. Using our extraction algorithm, we present a novel technique to identify and visualize data interdependencies in Business Process Data Architectures. Our Business Process Architecture approach provides business process experts, managers, and other users of a process model collection with an overview that allows reasoning about a large set of process models and understanding and analyzing their interdependencies in a facilitated way. In this regard, we evaluated our Business Process Architecture approach in an experiment and provide implementations of selected techniques.
Direct assessment of attitudes toward socially sensitive topics can be affected by deception attempts. Reaction-time based indirect measures, such as the Implicit Association Test (IAT), are less susceptible to such biases. Neuroscientific evidence shows that deception can evoke characteristic ERP differences. However, the cerebral processes involved in faking an IAT are still unknown. We randomly assigned 20 university students (15 females, aged 24.65 ± 3.50 years) to a counterbalanced repeated-measures design, requesting them to complete a Brief-IAT (BIAT) on attitudes toward doping without a deception instruction, and with the instruction to fake positive and negative doping attitudes. Cerebral activity during BIAT completion was assessed using high-density EEG. Event-related potentials during faking revealed enhanced frontal and reduced occipital negativity, starting around 150 ms after stimulus presentation. Further, a decrease in the P300 and LPP components was observed. Source analyses showed enhanced activity in the right inferior frontal gyrus between 150 and 200 ms during faking, thought to reflect the suppression of automatic responses. Furthermore, more activity was found during faking in the bilateral middle occipital gyri and the bilateral temporoparietal junction. The results indicate that faking reaction-time based tests alters brain processes from early stages of processing onward and reveal the cortical sources of these effects. Analyzing the EEG helps to uncover response patterns in indirect attitude tests and broadens our understanding of the neural processes involved in such faking. This knowledge might be useful for uncovering faking in socially sensitive contexts, where attitudes are likely to be concealed.
The nitric oxide (NO)/soluble guanylate cyclase (sGC)/cyclic guanosine monophosphate (cGMP) signalling pathway is impaired under oxidative stress conditions due to oxidation and subsequent loss of the prosthetic sGC heme group, as observed in particular in chronic renal failure. Thus, the pool of heme-free sGC is increased under pathological conditions. sGC activators such as cinaciguat selectively activate the heme-free form of sGC and thus target the disease-associated enzyme. In this study, a therapeutic effect of long-term activation of heme-free sGC by the sGC activator cinaciguat was investigated in an experimental model of salt-sensitive hypertension, a condition that is associated with increased oxidative stress, heme loss from sGC and the development of chronic renal failure. For that purpose, Dahl/ss rats, which develop severe hypertension upon high salt intake, were fed a high-salt diet (8% NaCl) containing either placebo or cinaciguat for 21 weeks. Cinaciguat markedly improved survival and ameliorated the salt-induced increase in blood pressure compared to placebo. Renal function was significantly improved in the cinaciguat group compared to the placebo group, as indicated by a significantly improved glomerular filtration rate and reduced urinary protein excretion. This was due to anti-fibrotic and anti-inflammatory effects of the cinaciguat treatment. Taken together, this is the first study showing that long-term activation of heme-free sGC leads to renal protection in an experimental model of hypertension and chronic kidney disease. These results underline the promising potential of cinaciguat for treating renal diseases by targeting the disease-associated heme-free form of sGC.
Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both, during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP, primarily due to increasing temperature. We further analysed the contributions of the ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors.
The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.
Climate impacts on transocean dispersal and habitat in gray whales from the Pleistocene to 2100
(2015)
Arctic animals face dramatic habitat alteration due to ongoing climate change. Understanding how such species have responded to past glacial cycles can help us forecast their response to today's changing climate. Gray whales are among those marine species likely to be strongly affected by Arctic climate change, but a thorough analysis of past climate impacts on this species has been complicated by lack of information about an extinct population in the Atlantic. While little is known about the history of Atlantic gray whales or their relationship to the extant Pacific population, the extirpation of the Atlantic population during historical times has been attributed to whaling. We used a combination of ancient and modern DNA, radiocarbon dating and predictive habitat modelling to better understand the distribution of gray whales during the Pleistocene and Holocene. Our results reveal that dispersal between the Pacific and Atlantic was climate dependent and occurred both during the Pleistocene prior to the last glacial period and the early Holocene immediately following the opening of the Bering Strait. Genetic diversity in the Atlantic declined over an extended interval that predates the period of intensive commercial whaling, indicating this decline may have been precipitated by Holocene climate or other ecological causes. These first genetic data for Atlantic gray whales, particularly when combined with predictive habitat models for the year 2100, suggest that two recent sightings of gray whales in the Atlantic may represent the beginning of the expansion of this species' habitat beyond its currently realized range.
Closing yield gaps
(2015)
Global food production needs to be increased by 60-110% between 2005 and 2050 to meet growing food and feed demand. Intensification and/or expansion of agriculture are the two main options available to meet the growing crop demands. Land conversion to expand cultivated land increases GHG emissions and impacts biodiversity and ecosystem services. Closing yield gaps to attain potential yields may be a viable option to increase global crop production. Traditional methods of agricultural intensification often have negative externalities; therefore, there is a need to explore location-specific methods of sustainable agricultural intensification. We identified regions where the achievement of potential crop calorie production on currently cultivated land will meet the present and future food demand, based on scenario analyses considering population growth and changes in dietary habits. By closing yield gaps in the currently irrigated and rain-fed cultivated land, about 24% and 80% more crop calories, respectively, can be produced compared to 2000. Most countries will reach food self-sufficiency or improve their current food self-sufficiency levels if potential crop production levels are achieved. As a novel approach, we defined specific input and agricultural management strategies required to achieve the potential production by overcoming the biophysical and socioeconomic constraints causing yield gaps. The management strategies include: fertilizers, pesticides, advanced soil management, land improvement, strategies coping with weather-induced yield variability, and improving market accessibility. Finally, we estimated the fertilizers (N, P2O5, and K2O) required to attain the potential yields. Globally, N-fertilizer application needs to increase by 45-73%, P2O5-fertilizer by 22-46%, and K2O-fertilizer by 2-3 times compared to the year 2010 to attain potential crop production.
The sustainability of such agricultural intensification largely depends on the way management strategies for closing yield gaps are chosen and implemented.
The intensification of coastal floods induced by sea level rise is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level advances slowly, and there is enough time for society to adapt to the changing environment. Most commonly, this is achieved by the construction or reinforcement of flood defence measures such as dykes or sea walls, but land use and disaster management are also widely discussed options. Overall, although the projection of sea level rise impacts and the elaboration of adequate response strategies are amongst the most prominent topics in climate impact research, global damage estimates are vague and mostly rely on the same assessment models. The thesis at hand contributes to this issue by presenting a distinctive approach which facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking is based on two basic building blocks. Firstly, we make use of macroscopic flood-damage functions, i.e. damage functions that provide the total monetary damage within a delineated region (e.g. a city) caused by a flood of a certain magnitude. After introducing a systematic methodology for the automated derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves utilisable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we are further able to characterise the slope of the damage functions by means of a functional model. The proposed function has in general a sigmoidal shape but exhibits a power-law increase for the relevant range of flood levels, and we detect an average exponent of 3.4 for the considered cities. This finding represents an essential input for subsequent elaborations on the general interrelations of the involved quantities.
The second basic element of this work is extreme value theory which is employed to characterise the occurrence of flood events and in conjunction with a damage function provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible as it assumes non-stationarity in all relevant parameters and can be easily applied to arbitrary regions, sea level, and adaptation scenarios. For instance, we find a doubling of expected flood damage in the city of Copenhagen for a rise in mean sea levels of only 11 cm. By following more general considerations, we succeed in deducing surprisingly simple functional expressions to describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and supposed protection levels. We are thus able to project future flood damage by means of a reduced set of parameters, namely the aforementioned damage function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections. In this regard, a decrease of (relative) uncertainty with rising mean sea levels is detected. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a Cost-Benefit Analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
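The core calculation described above — combining an extreme value distribution of annual maximum flood levels with a power-law damage function to obtain the expected annual damage — can be sketched as a toy model. This is an illustration, not the thesis code: the Gumbel parameters, protection level, and damage scale are invented assumptions, and only the power-law exponent of 3.4 is taken from the text; mean sea level rise is represented as a simple shift of the Gumbel location parameter.

```python
import math
import random

def damage(flood_level_m, protection_m=1.0, exponent=3.4, scale=1e6):
    """Power-law damage above the protection level; zero below it.
    Parameter values are illustrative assumptions."""
    excess = flood_level_m - protection_m
    return scale * excess ** exponent if excess > 0 else 0.0

def expected_annual_damage(slr_m, mu=0.8, beta=0.2, n=50_000, seed=42):
    """Monte Carlo estimate of the expected annual damage for a given
    mean sea level rise slr_m, which shifts the Gumbel location mu."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = max(rng.random(), 1e-12)                  # avoid log(0)
        level = (mu + slr_m) - beta * math.log(-math.log(u))  # Gumbel sample
        total += damage(level)
    return total / n

ead_now = expected_annual_damage(0.0)
ead_future = expected_annual_damage(0.11)  # +11 cm, cf. the Copenhagen example
print(ead_future / ead_now)  # damage grows strongly even for small SLR
```

The steep growth of the ratio for an 11 cm shift illustrates why the power-law exponent of the damage function is the central parameter in such projections.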
Based on niche theory, closely related and morphologically similar species are not predicted to coexist due to overlap in resource and habitat use. Local assemblages of bats often contain cryptic taxa, which co-occur despite notable similarities in morphology and ecology. In two different habitat types on Madagascar, we measured levels of stable carbon and nitrogen isotopes in hair (n = 103) and faeces (n = 57) of cryptic Vespertilionidae taxa to indirectly examine whether fine-grained trophic niche differentiation explains their coexistence. In the dry deciduous forest (Kirindy), six sympatric species ranged over 6.0‰ in delta N-15, i.e. two trophic levels, and 4.2‰ in delta C-13, with a community mean of 11.3‰ in delta N-15 and −21.0‰ in delta C-13. In the mesic forest (Antsahabe), three sympatric species ranged over one trophic level (delta N-15: 2.4‰, delta C-13: 1.0‰), with a community mean of 8.0‰ in delta N-15 and −21.7‰ in delta C-13. Multivariate analyses and residual permutation of Euclidian distances in delta C-13/delta N-15 bi-plots revealed, in both communities, distinct stable isotope signatures and species separation for the hair samples among coexisting Vespertilionidae. Intraspecific variation in faecal and hair stable isotopes did not indicate that seasonal migration might relax competition and thereby facilitate the local co-occurrence of sympatric taxa.
Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, in which frequent backtracks between symbolic and geometric levels make the search inefficient. We address this problem by guiding symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
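The culprit-detection idea can be illustrated with a toy sketch (not the authors' implementation): the symbolic level enumerates candidate plans, the geometric level reports which action made a plan infeasible, and that culprit becomes a symbolic constraint that prunes subsequent search instead of blindly backtracking. The action names and the stand-in feasibility test below are invented.

```python
from itertools import permutations

ACTIONS = ["pick(A)", "pick(B)", "place(A,shelf)", "place(B,shelf)"]

def geometrically_feasible(plan, blocked):
    """Stand-in for a motion planner: an action in `blocked` fails;
    report it as the culprit so the symbolic level can avoid it."""
    for action in plan:
        if action in blocked:
            return False, action
    return True, None

def plan_with_culprits(goal_len=2, blocked=frozenset({"pick(A)"})):
    forbidden = set()  # symbolic constraints learned from culprits
    for candidate in permutations(ACTIONS, goal_len):
        if forbidden.intersection(candidate):
            continue   # pruned symbolically, no geometric call needed
        ok, culprit = geometrically_feasible(candidate, blocked)
        if ok:
            return list(candidate)
        forbidden.add(culprit)
    return None

print(plan_with_culprits())
```

After the first geometric failure, every remaining sequence containing the culprit is rejected at the symbolic level, which is the efficiency gain the abstract describes.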
Commentary
(2015)
Small-scale distribution of insect root herbivores may promote plant species diversity by creating patches of different herbivore pressure. However, the determinants of the small-scale distribution of insect root herbivores, and the impact of land use intensity on their distribution, are largely unknown. We sampled insect root herbivores and measured vegetation parameters and soil water content along transects in grasslands of different management intensity in three regions in Germany. We calculated community-weighted mean plant traits to test whether the functional plant community composition determines the small-scale distribution of insect root herbivores. To analyze spatial patterns in plant species and trait composition and insect root herbivore abundance, we computed Mantel correlograms. Insect root herbivores in the investigated grasslands mainly comprised click beetle (Coleoptera, Elateridae) larvae (43%). Total insect root herbivore numbers were positively related to community-weighted mean traits indicating high plant growth rates and biomass (specific leaf area, reproductive and vegetative plant height), and negatively related to plant traits indicating poor tissue quality (leaf C/N ratio). Generalist elaterid larvae, when analyzed independently, were also positively related to high plant growth rates and, furthermore, to root dry mass, but were not related to tissue quality. Insect root herbivore numbers were not related to plant cover, plant species richness or soil water content. Plant species composition and, to a lesser extent, plant trait composition displayed spatial autocorrelation, which was not influenced by land use intensity. Insect root herbivore abundance was not spatially autocorrelated. We conclude that in semi-natural grasslands with a high share of generalist insect root herbivores, insect root herbivores affiliate with large, fast-growing plants, presumably because of the availability of high quantities of food.
Affiliation of insect root herbivores with large, fast growing plants may counteract dominance of those species, thus promoting plant diversity.
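The community-weighted mean (CWM) traits used in this analysis are straightforward to compute: each species' trait value is weighted by the species' relative abundance (e.g. cover) in the plot. A minimal sketch with invented species names and trait values:

```python
def community_weighted_mean(abundances, traits):
    """CWM = sum_i p_i * trait_i, where p_i is the relative abundance
    of species i in the plot."""
    total = sum(abundances.values())
    return sum(abundances[sp] / total * traits[sp] for sp in abundances)

# Illustrative plot data: percent cover and specific leaf area (mm^2/mg)
cover = {"Dactylis": 40.0, "Trifolium": 25.0, "Plantago": 35.0}
sla = {"Dactylis": 22.1, "Trifolium": 28.4, "Plantago": 18.9}

print(community_weighted_mean(cover, sla))
```

The same function applies to any trait column (plant height, leaf C/N ratio) once abundances are fixed, which is how a single plot yields the trait vector related to herbivore numbers.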
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses posing difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme loss, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
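The deterministic model of Klawa and Ulbrich (2003) referenced above rests on the cubic wind-damage relationship: daily loss at a location scales with the cubed relative exceedance of the local 98th percentile of wind gusts. A minimal sketch of that loss index follows; the gust values and the percentile are illustrative, not data from the study.

```python
def loss_index(gusts, v98):
    """Klawa-Ulbrich-style index: sum of (v/v98 - 1)^3 over all gusts
    exceeding the local 98th percentile v98; zero contribution below it."""
    return sum((v / v98 - 1.0) ** 3 for v in gusts if v > v98)

daily_gusts = [18.0, 25.0, 31.0, 36.0]  # m/s at one station (illustrative)
v98_local = 24.0                        # local 98th gust percentile, m/s

print(loss_index(daily_gusts, v98_local))
```

Because only exceedances above the high local percentile contribute, the index is insensitive to moderate wind days, which is consistent with the paper's recommendation to restrict this model to extreme losses.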
The paper presents two approaches to the development of a Computer Science Competence Model for the needs of curriculum development and evaluation in Higher Education. A normative-theoretical approach is based on the AKT and ACM/IEEE curriculum and will be used within the recommendations of the German Informatics Society (GI) for the design of CS curricula. An empirically oriented approach refines the categories of the first one with regard to specific subject areas by conducting content analysis on CS curricula of important universities from several countries. The refined model will be used for the needs of students' e-assessment and subsequent affirmative action of the CS departments.
Computational Thinking
(2015)
Digital technology has radically changed the way people work in industry, finance, services, media and commerce. Informatics has contributed to the scientific and technological development of our society in general and to the digital revolution in particular. Computational thinking is the term indicating the key ideas of this discipline that might be included in the key competencies underlying the curriculum of compulsory education. The educational potential of informatics has a history dating back to the sixties. In this article, we briefly revisit this history looking for lessons learned. In particular, we focus on experiences of teaching and learning programming. However, computational thinking is more than coding. It is a way of thinking and practicing interactive dynamic modeling with computers. We advocate that learners can practice computational thinking in playful contexts where they can develop personal projects, for example building videogames and/or robots, and share and discuss their constructions with others. In our view, this approach allows an integration of computational thinking in the K-12 curriculum across disciplines.
Concluding Remarks
(2015)
This paper originated from discussions about the need for important changes in the curriculum for Computing, including two focus group meetings at IFIP conferences over the last two years. The paper examines how recent developments in curriculum, together with insights from curriculum thinking in other subject areas, especially mathematics and science, can inform curriculum design for Computing. The analysis presented in the paper provides insights into the complexity of curriculum design as well as identifying important constraints and considerations for the ongoing development of a vision and framework for a Computing curriculum.
Background: Continuous treatment is an important indicator of medication adherence in dementia. However, long-term studies in larger clinical settings are lacking, and little is known about moderating effects of patient and service characteristics.
Methods: Data from 12,910 outpatients with dementia (mean age 79.2 years; SD = 7.6 years) treated between January 2003 and December 2013 in Germany were included. Continuous treatment was analysed using Kaplan-Meier curves and log-rank tests. In addition, multivariate Cox regression models were fitted with continuous treatment as dependent variable and the predictors antidementia agent, age, gender, medical comorbidities, physician specialty, and health insurance status.
Results: After one year of follow-up, nearly 60% of patients continued drug treatment. Donepezil (HR: 0.88; 95% CI: 0.82-0.95) and memantine (HR: 0.85; 0.79-0.91) patients were less likely to discontinue treatment compared to rivastigmine users. Patients were also less likely to discontinue if they were treated by specialist physicians rather than general practitioners (HR: 0.44; 0.41-0.48). Younger male patients and patients with private health insurance had a lower discontinuation risk. Regarding comorbidity, patients were more likely to be continuously treated with the index substance if heart failure or hypertension had been diagnosed at baseline.
Conclusions: Our results imply that besides type of antidementia agent, involvement of a specialist in the complex process of prescribing antidementia drugs can provide meaningful benefits to patients, in terms of more disease-specific and continuous treatment.
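The Kaplan-Meier estimator underlying the continuation analysis can be sketched in a few lines of pure Python: at each discontinuation time, the survival probability is multiplied by (1 - d/n), where d patients discontinue out of n still at risk, and censored patients leave the risk set without reducing the curve. The follow-up data below are invented for illustration.

```python
def sorted_counts(times, events):
    """Aggregate (time, n_discontinued, n_censored) in time order."""
    agg = {}
    for t, e in zip(times, events):
        d, c = agg.get(t, (0, 0))
        agg[t] = (d + e, c + (1 - e))
    return [(t, d, c) for t, (d, c) in sorted(agg.items())]

def kaplan_meier(times, events):
    """Return [(t, S(t))] for follow-up durations `times`;
    events[i] = 1 means discontinuation, 0 means censoring."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, d, c in sorted_counts(times, events):
        if d:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + c
    return curve

months = [3, 6, 6, 9, 12, 12, 15]   # invented follow-up times
ended  = [1, 1, 0, 1, 0, 1, 0]      # 1 = discontinued, 0 = censored
print(kaplan_meier(months, ended))
```

Comparing such curves between treatment groups (e.g. by antidementia agent or physician specialty) with a log-rank test, and adjusting for covariates in a Cox model, is the standard workflow the Methods section describes.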
Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising alternative for low-cost solar energy conversion. Despite significant improvements of the power conversion efficiency in recent years, the fundamental working principles of these devices are not yet fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes, in competition with the loss of charge carriers due to recombination.
The object of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters determining the performance. A new approach for analyzing the characteristic current-voltage output was developed comprising the experimental determination of the efficiencies of charge carrier generation, recombination and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contribution of generation, recombination and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to former reports in the literature. It was found that charge generation is in general independent of the electric field, in contrast to the current view in the literature and contrary to the expectations of the Braun-Onsager model that was commonly used to describe the charge generation process. Even in cases where charge generation was found to be field-dependent, numerical modelling showed that this field dependence is in general not capable of accounting for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of the thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the RC time of the setup, which is mainly given by the sample geometry, has a significant influence on the shape of the transients and has to be considered for correct data analysis.
Secondly, a complementary method was applied to characterize charge carrier recombination under steady-state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady-state carrier densities established in the active layer. It turned out that current techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady-state illumination are extracted by applying a high reverse bias. The accelerated extraction, compared to conventional charge extraction, minimizes losses through non-geminate recombination and trapping during extraction. By performing numerical device simulations under steady state, conditions were established under which quantitative information on the dynamics can be retrieved from BACE measurements.
The applied experimental techniques made it possible to sensitively analyse and quantify geminate and non-geminate recombination losses along with charge transport in organic solar cells. A full analysis was demonstrated exemplarily for two prominent polymer-fullerene blends.
The model system P3HT:PCBM spincast from chloroform (as prepared) exhibits poor power conversion efficiencies (PCE) on the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by the hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and poor interconnection between the polymer and fullerene domains lead to a hole mobility of the order of 10^-7 cm^2/Vs, which is several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders extraction of both electrons and holes and promotes bimolecular recombination losses.
Thermal annealing of P3HT:PCBM blends directly after spin coating improves crystallinity and interconnection of the polymer and the fullerene phase and results in comparatively high electron and hole mobilities in the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes leads to a reduction of the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in comparatively high FF (>65%) and short circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by a rather low spectral overlap of absorption and solar emission and a small V_OC, given by the energetics of the P3HT.
From this point of view the combination of the low bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to the J_SC in the optimized blend with P3HT and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility of the order of 10^-5 cm^2/Vs and geminate recombination losses due to field dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well interconnected donor and acceptor domains of the order of a few nanometers. Our analysis shows that this results in an increase of the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. In total these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012 this polymer/fullerene combination resulted in a record PCE for a single junction OSC of 9.2%.
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not only one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if leading to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it also emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination, as well as electron and hole mobilities. The measurement or investigation of only a subset of these parameters, as frequently found in the literature, will lead to an incomplete picture and possibly to misleading conclusions.
Importantly, the comparison of the numerical device simulation employing the measured parameters with the experimental J-V characteristics allows loss channels and limitations of OSC to be identified. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. However, efficient and fast transport of charges becomes more and more important with the development of new low-bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active layer thicknesses of current high-performance devices are usually limited to around 100 nm. However, larger layer thicknesses would be more favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore at best show a high tendency to form crystalline structures, as observed in P3HT, combined with the optimized energetics and quantum efficiency of, for example, PTB7.
Background: African weakly electric fishes of the family Mormyridae are able to produce and perceive weak electric signals (typically less than one volt in amplitude) owing to the presence of a specialized, muscle-derived electric organ (EO) in their tail region. Such electric signals, also known as Electric Organ Discharges (EODs), are used for object/prey localization, for the identification of conspecifics, and in social and reproductive behaviour. This feature might have promoted the adaptive radiation of this family by acting as an effective pre-zygotic isolation mechanism. Despite the physiological and evolutionary importance of this trait, the investigation of the genetic basis of its function and modification has so far remained limited. In this study, we aim at: i) identifying constitutive differences in terms of gene expression between electric organ and skeletal muscle (SM) in two mormyrid species of the genus Campylomormyrus: C. compressirostris and C. tshokwe, and ii) exploring cross-specific patterns of gene expression within the two tissues among C. compressirostris, C. tshokwe, and the outgroup species Gnathonemus petersii. Results: Twelve paired-end (100 bp) strand-specific RNA-seq Illumina libraries were sequenced, producing circa 330 M quality-filtered short read pairs. The obtained reads were assembled de novo into four reference transcriptomes. In silico cross-tissue DE analysis allowed us to identify 271 shared differentially expressed genes between EO and SM in C. compressirostris and C. tshokwe. Many of these genes correspond to myogenic factors, ion channels and pumps, and genes involved in several metabolic pathways. Cross-species analysis revealed that the electric organ transcriptome is more variable in terms of gene expression levels across species than the skeletal muscle transcriptome.
Conclusions: The data obtained indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ's transcriptome; ii) the metabolic activity of the EO might be specialized towards the production and turn-over of membrane structures; iii) several ion channels are highly expressed in the EO in order to increase excitability; iv) several myogenic factors might be down-regulated by transcription repressors in the EO.
Electron transfer (ET) reactions play a crucial role in the metabolic pathways of all organisms. In biotechnological approaches, the redox properties of the protein cytochrome c (cyt c), which acts as an electron shuttle in the respiratory chain, was utilized to engineer ET chains on electrode surfaces. With the help of the biopolymer DNA, the redox protein assembles into electro active multilayer (ML) systems, providing a biocompatible matrix for the entrapment of proteins.
In this study, the characteristics of the cyt c and DNA interaction were defined on the molecular level for the first time and the binding sites of DNA on cyt c were identified. Persistent cyt c/DNA complexes were formed in solution under the assembly conditions of ML architectures, i.e. pH 5.0 and low ionic strength. At pH 7.0, no agglomerates were formed, permitting characterization of the interaction by NMR spectroscopy. Using transverse relaxation-optimized spectroscopy (TROSY)-heteronuclear single quantum coherence (HSQC) experiments, DNA's binding sites on the protein were identified. In particular, negatively charged amino acid residues, which are known interaction sites in cyt c/protein binding, were identified as the main contact points of cyt c and DNA.
Moreover, the sophisticated task of arranging proteins on electrode surfaces to create functional ET chains was addressed. For this purpose, two different enzyme types, the flavin-dependent fructose dehydrogenase (FDH) and the pyrroloquinoline quinone-dependent glucose dehydrogenase (PQQ-GDH), were tested as reaction partners of freely diffusing cyt c and of cyt c immobilized on electrodes in mono- and MLs. The characterisation of the ET processes was performed by means of electrochemistry, and the protein deposition was monitored by microgravimetric measurements. FDH and PQQ-GDH were found to be generally suitable for combination with the cyt c/DNA ML system, since both enzymes interact with cyt c in solution and in the immobilized state. The immobilization of FDH and cyt c was achieved with the enzyme on top of a cyt c monolayer electrode without the help of a polyelectrolyte. Combining FDH with the cyt c/DNA ML system has not yet succeeded; however, the basic conditions for this protein-protein interaction were defined. PQQ-GDH was successfully coupled with the ML system, demonstrating that the cyt c/DNA ML system provides a suitable interface for enzymes and that the creation of signal chains, based on the idea of co-immobilized proteins, is feasible.
Future work may be directed to the investigation of the cyt c/DNA interaction under the precise conditions of ML assembly; for this, solid-state NMR or X-ray crystallography may be required. Based on the results of this study, the combination of FDH with the ML system should be addressed. Moreover, alternative types of enzymes may be tested as the catalytic component of the ML assembly, aiming at the development of innovative biosensor applications.
Business process management (BPM) is a systematic and structured approach to model, analyze, control, and execute business operations, also referred to as business processes, that are carried out to achieve business goals. Central to BPM are conceptual models. Most prominently, process models describe which tasks are to be executed, by whom, and utilizing which information to reach a business goal. Process models generally cover the perspectives of control flow, resource, data flow, and information systems.
Execution of business processes leads to the work actually being carried out. Automating them increases efficiency and is usually supported by process engines. This, though, requires the coverage of control flow, resource assignments, and process data. While the first two perspectives are well supported in current process engines, data handling needs to be implemented and maintained manually. Model-driven data handling, in contrast, promises to ease implementation, to reduce error-proneness through graphical visualization, and to reduce development effort through code generation.
This thesis addresses the modeling, analysis, and execution of data in business processes and presents a novel approach to execute data-annotated process models in an entirely model-driven fashion. As a first step and formal grounding for the process execution, a conceptual framework for the integration of processes and data is introduced. This framework is complemented by operational semantics through a mapping to Petri nets extended with data considerations. Model-driven data execution comprises the handling of complex data dependencies, process data, and data exchange in the case of communication between multiple process participants. This thesis introduces concepts from the database domain into BPM to enable the distinction of data operations, to specify relations between data objects of the same as well as of different types, to correlate modeled data nodes as well as received messages to the correct run-time process instances, and to generate messages for inter-process communication. The underlying approach, which is not limited to a particular process description language, has been implemented as a proof of concept.
Automation of data handling in business processes requires data-annotated and correct process models. Targeting the former, algorithms are introduced to extract information about data nodes, their states, and data dependencies from control flow information and to annotate the process model accordingly. Usually, not all required information can be extracted this way, since some data manipulations are not specified; this requires further refinement of the process model. Given a set of object life cycles specifying the allowed data manipulations, automated refinement of the process model towards containment of all data manipulations is enabled. Process models are abstractions focusing on specific aspects in detail; e.g., the control flow and the data flow views are often represented through activity-centric and object-centric process models, respectively. This thesis introduces algorithms for roundtrip transformations, enabling stakeholders to add information to the process model in whichever view is most appropriate.
Targeting process model correctness, this thesis introduces the notion of weak conformance, which checks for consistency between given object life cycles and the process model, such that the process model may only utilize data manipulations specified directly or indirectly in an object life cycle. The notion is computed via soundness checking of a hybrid representation integrating control flow and data flow correctness checking. To make a process model executable, identified violations must be corrected; to this end, an approach is proposed that identifies, for each violation, multiple alternative changes to the process model or the object life cycles.
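The "directly or indirectly" aspect of such a conformance check can be illustrated with a small sketch: treating each object life cycle as a directed graph of allowed state transitions, a data manipulation in the process model conforms if the state it writes is reachable from the state it reads. This is a deliberately simplified, hypothetical reading of the notion, not the thesis's hybrid-representation algorithm; all names and the example life cycle are illustrative.

```python
from collections import deque

def reachable(transitions, src, dst):
    """True if dst can be reached from src via the life-cycle transitions
    (a state trivially reaches itself)."""
    seen, queue = {src}, deque([src])
    while queue:
        state = queue.popleft()
        if state == dst:
            return True
        for a, b in transitions:
            if a == state and b not in seen:
                seen.add(b)
                queue.append(b)
    return False

def weak_conformance_violations(activities, life_cycles):
    """activities: (object_type, state_read, state_written) triples taken
    from a process model; life_cycles: {object_type: set of allowed
    (from_state, to_state) transitions}. Returns the non-conforming triples,
    i.e. those whose written state is not reachable, directly or indirectly,
    from the read state."""
    return [(obj, pre, post)
            for obj, pre, post in activities
            if not reachable(life_cycles.get(obj, set()), pre, post)]

# Hypothetical example: "created -> shipped" is allowed only indirectly,
# via "approved"; "shipped -> created" is not allowed at all.
order_lc = {"Order": {("created", "approved"), ("approved", "shipped")}}
acts = [("Order", "created", "shipped"), ("Order", "shipped", "created")]
```

In this toy example only the second activity is reported as a violation, since the first one is covered indirectly by the life cycle.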
Utilizing the results of this thesis, business processes can be executed in an entirely model-driven way from the data perspective, in addition to the control flow and resource perspectives already supported before. Model creation is supported by algorithms partly automating the creation process, while model consistency is ensured by data correctness checks.
In the last 10 years, the governments of most of the German Länder initiated administrative reforms. All of these ventures included the municipalization of substantial sets of tasks. As elsewhere, governments argue that service delivery by municipalities is more cost-efficient, effective, and responsive. Empirical evidence to back these claims is inconsistent at best: a considerable number of case studies cast doubt on unconditionally positive appraisals. Decentralization effects seem to vary depending on the performance dimension and task considered. However, questions of generalizability arise, as these findings have not yet been backed by more 'objective' archival data. We provide empirical evidence on decentralization effects for two different policy fields based on two studies. The article thereby presents alternative avenues for research on decentralization effects and matches the theoretical expectations on decentralization effects with more robust results. The analysis confirms that overly positive assertions concerning decentralization effects are only partially warranted. As previous case studies suggested, effects have to be looked at in a much more differentiated way, taking starting conditions into account and distinguishing between the various relevant performance dimensions and policy fields.
The present study investigated the attribution of responsibility to victims and perpetrators in rape compared to robbery cases in Turkey. Each participant read three short case scenarios (vignettes) and completed items pertaining to the female victim and male perpetrator. The vignettes were systematically varied with regard to the type of crime that was committed (rape or robbery), the perpetrator’s coercive strategy (physical force or exploiting the victim’s alcohol-induced defenselessness), and the victim-perpetrator relationship prior to the incident (stranger, acquaintance, or ex-partner). Furthermore, participant gender and acceptance of rape myths (beliefs that justify or trivialize sexual violence) were taken into account. One half of the participants completed the rape myth acceptance (RMA) scales first and then received the vignettes, while the other half were given the vignettes first and then completed the RMA scales.
As expected, more blame was attributed to victims of rape than to victims of robbery. Conversely, perpetrators of rape were blamed less than perpetrators of robbery. The more participants endorsed rape myths, the more blame was attributed to the victim and the less blame was attributed to the perpetrators. Increasing levels of RMA were associated with an increase in victim blame (VB) in both rape and robbery cases, but the increase in rape VB was significantly more pronounced than in robbery VB. Increasing RMA was associated with an attenuation of perpetrator blame (PB) that was more pronounced for rape than for robbery cases, but the difference was not significant. As expected, victims of rape were blamed more when the perpetrator exploited their defenselessness due to alcohol intoxication than when they were overpowered by physical force. Contrary to the hypothesis, this was also true for robbery victims. Rape victims who knew their attacker (ex-partner or acquaintance) were blamed more than victims who were assaulted by strangers. Contrary to the hypothesis, robbery victims who were assaulted by an ex-partner were blamed more than acquaintance or stranger robbery victims. As predicted, the closer the relationship between victim and perpetrator, the less blame was attributed to perpetrators of rape while this factor had no effect on PB in robbery cases.
Men compared to women attributed more blame to the victims and less blame to the perpetrators. As expected, these gender differences in blame attributions were partially mediated by gender differences in RMA: After RMA was taken into account, the gender differences disappeared nearly completely for VB and were significantly reduced in PB. The order of presentation of the vignettes and the RMA measures was systematically varied to test the causal influence of RMA on rape blame attributions. The hypothesis that RMA causes VB and PB in rape cases (as opposed to the other way around or both are caused by a third variable) was not supported. Possible reasons for this failed manipulation and its implications for the mediation model are discussed.
With regard to blame attribution in rape cases, the present results match what was expected from previous studies which were mainly conducted in “Western” countries like the United States, the United Kingdom, or Germany. The present results support the notion that the victim-perpetrator relationship and the victim’s alcohol consumption are cross-culturally stable factors for blame attribution in rape cases. It was expected that blame attribution in robbery cases would be unaffected by the perpetrator’s coercive strategy and the victim-perpetrator relationship, but the results were inconsistent.
One unexpected effect is particularly noteworthy: When the perpetrator used physical force, more blame was attributed to rape than to robbery victims, but intoxicated victims were blamed more and almost equally so for both types of crime. Perpetrators who exploited drunk victims were blamed less in both rape and robbery cases. These results contradict German results collected with the German version of the same instruments (Bieneck & Krahé, 2011). Turkey is a Muslim country and alcohol is surrounded by a certain taboo. Possibly, the results reflect a cultural difference in that intoxicated victims are generally blamed more for their victimization and this factor is not limited to rape cases.
We describe a natural construction of deformation quantisation on a compact symplectic manifold with boundary. On the algebra of quantum observables, a trace functional is defined which, as usual, annihilates the commutators. This gives rise to an index, defined as the trace of the unit element. We formulate the index theorem as a conjecture and examine it for the classical harmonic oscillator.
Continental rifts are excellent regions where the interplay between extension, the build-up of topography, erosion and sedimentation can be evaluated in the context of landscape evolution. Rift basins also constitute important archives that potentially record the evolution and migration of species and the change of sedimentary conditions as a result of climatic change. Finally, rifts have increasingly become targets of resource exploration, such as hydrocarbons or geothermal systems. The study of extensional processes and the factors that further modify the mainly climate-driven surface process regime helps to identify changes in past and present tectonic and geomorphic processes that are ultimately recorded in rift landscapes.
The Cenozoic East African Rift System (EARS) is an exemplary continental rift system and ideal natural laboratory to observe such interactions. The eastern and western branches of the EARS constitute first-order tectonic and topographic features in East Africa, which exert a profound influence on the evolution of topography, the distribution and amount of rainfall, and thus the efficiency of surface processes. The Kenya Rift is an integral part of the eastern branch of the EARS and is characterized by high-relief rift escarpments bounded by normal faults, gently tilted rift shoulders, and volcanic centers along the rift axis.
Considering the Cenozoic tectonic processes in the Kenya Rift, the tectonically controlled cooling history of rift shoulders, the subsidence history of rift basins, and the sedimentation along and across the rift may help to elucidate the morphotectonic evolution of this extensional province. While tectonic forcing of surface processes may play a minor role in the low-strain rift on centennial to millennial timescales, it may be hypothesized that erosion and sedimentation processes impacted by climate shifts associated with pronounced changes in the availability of moisture have left important imprints in the landscape.
In this thesis, I combined thermochronological data, geomorphic field observations, and morphometry of digital elevation models to reconstruct exhumation processes and erosion rates, as well as the effects of climate on erosion processes, in different sectors of the rift. I present three sets of results: (1) new thermochronological data from the northern and central parts of the rift that quantitatively constrain the Tertiary exhumation and thermal evolution of the Kenya Rift; (2) 10Be-derived catchment-wide mean denudation rates from the northern, central, and southern rift that characterize erosional processes on millennial to present-day timescales; and (3) paleo-denudation rates from the northern rift that constrain climatically controlled shifts in paleoenvironmental conditions during the early Holocene (African Humid Period).
Taken together, my studies show that time-temperature histories derived from apatite fission track (AFT) analysis, zircon (U-Th)/He dating, and thermal modeling bracket two episodes of rifting in the Kenya Rift: between 65 and 50 Ma, and from about 15 Ma to the present. These two episodes are marked by rapid exhumation and uplift of the rift shoulders. Between 45 and 15 Ma, the margins of the rift experienced very slow erosion/exhumation, with the accommodation of sediments in the rift basin.
In addition, I determined that present-day denudation rates in sparsely vegetated parts of the Kenya Rift amount to 0.13 mm/yr, whereas denudation rates in humid and more densely vegetated sectors of the rift flanks reach a maximum of only 0.08 mm/yr, despite steeper hillslopes. I inferred that hillslope gradient and vegetation cover control most of the variation in denudation rates across the Kenya Rift today. Importantly, my results support the notion that vegetation cover plays a fundamental role in determining the pace of hillslope erosion through its stabilizing effects on the land surface.
Finally, in a pilot study I highlighted how paleo-denudation rates in climatic threshold areas changed significantly during times of transient hydrologic conditions and involved a sixfold increase in erosion rates during increased humidity. This assessment is based on cosmogenic nuclide (10Be) dating of quartzitic deltaic sands that were deposited in the northern Kenya Rift during a highstand of Lake Suguta, which was associated with the Holocene African Humid Period. Taken together, my new results document the role of climate variability in erosion processes that impact climatic threshold environments, which may provide a template for potential future impacts of climate-driven changes in surface processes in the course of Global Change.
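The steady-state relation that underlies catchment-wide 10Be denudation-rate estimates of this kind can be sketched compactly. The default parameter values below (production rate, attenuation length, density, decay constant) are illustrative assumptions, not the calibration used in the thesis.

```python
def denudation_rate(N, P0, lam=4.99e-7, Lambda=160.0, rho=2.7):
    """Steady-state denudation rate E [cm/yr] from a 10Be concentration.

    N:      nuclide concentration [atoms/g quartz]
    P0:     local surface production rate [atoms/g/yr]
    lam:    10Be decay constant [1/yr] (half-life ~1.39 Myr)
    Lambda: attenuation length [g/cm^2]
    rho:    rock density [g/cm^3]

    Rearranged from the steady-state balance N = P0 / (lam + rho*E/Lambda):
    the faster the surface erodes, the less time quartz spends in the
    production zone and the lower the measured concentration.
    """
    return (Lambda / rho) * (P0 / N - lam)
```

Multiplying the result by 10 converts cm/yr to the mm/yr units quoted in the abstract.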
Detection and Characterization of Wolf-Rayet stars in M81 with GTC/OSIRIS spectra and HST images
(2015)
Here we investigate a sample of young star clusters (YSCs) and other regions of recent star formation with Wolf-Rayet (W-R) features detected in the relatively nearby spiral galaxy M81 by analysing long-slit (LS) and Multi-Object Spectroscopy (MOS) spectra obtained with the OSIRIS instrument at the 10.4-m Gran Telescopio Canarias (GTC). We take advantage of the synergy between GTC spectra and Hubble Space Telescope (HST) images to also reveal their spatial localization and the environments hosting these stars. We finally discuss and comment on the next steps of our study.
The continuously increasing demand for rare earth elements in technical components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method to globally map important deposits is remote sensing, since it has been used for a wide range of mineral mapping in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realization of a fundamental database on the spectral characteristics of rare earth oxides, rare earth metals, and rare earth element-bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element-enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth-related signals using a Gaussian high-pass deconvolution technique calibrated on the standard deviation of a Gaussian bell-shaped curve, which is given by the full width at half maximum of the target absorption band; and 3) mathematical modeling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach specifically, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite, and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the REEMAP approach itself, which can be applied from the near-field to space and enables rare earth element mapping even for noisy images. Limiting factors are a low signal-to-noise ratio, a reduced spectral resolution, overlying materials, atmospheric absorption residuals, and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium, and samarium. In summary, this thesis presents REEMAP, a new methodology that enables spatially extensive and rapid hyperspectral detection of rare earth elements, in order to meet the demand for fast, extensive, and efficient rare earth exploration (from near-field to space).
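The continuum-removal idea behind REEMAP's sharpening step can be sketched as follows: a Gaussian low-pass estimate of the continuum, with a width tied to the full width at half maximum (FWHM) of the target absorption band, is subtracted from the spectrum so that narrow REE absorption features stand out as positive peaks. This is an illustrative reconstruction, not the published implementation; in particular, the `broaden` factor is an assumption.

```python
import numpy as np

def gaussian_highpass(spectrum, fwhm_channels, broaden=3.0):
    """Highlight narrow absorption bands by subtracting a Gaussian-smoothed
    continuum. The Gaussian sigma follows from the target band's FWHM
    (sigma = FWHM / 2.3548) and is widened by an assumed `broaden` factor so
    that the band itself is averaged out of the continuum estimate."""
    sigma = broaden * fwhm_channels / 2.3548
    half = int(4 * sigma) + 1
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    continuum = np.convolve(spectrum, kernel, mode="same")
    return continuum - spectrum  # positive peaks mark absorption bands
```

Applied to a flat synthetic continuum containing a single Gaussian absorption dip, the output peaks at the position of the dip.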
Development of geophysical methods to characterize methane hydrate reservoirs on a laboratory scale
(2015)
Gas hydrates are crystalline solids composed of water and gas molecules. They are stable at elevated pressures and low temperatures. Therefore, natural gas hydrate deposits occur at continental margins, in permafrost areas, and in deep lakes and deep inland seas. During hydrate formation, the water molecules rearrange to form cavities that host gas molecules. Due to the high pressure during hydrate formation, significant amounts of gas can be stored in hydrate structures; the water-to-gas ratio can reach up to 1:172 at 0°C and atmospheric pressure. Natural gas hydrates predominantly contain methane. Because methane constitutes both a fuel and a greenhouse gas, gas hydrates are a potential energy resource as well as a potential source of greenhouse gas.
This study investigates the physical properties of methane hydrate-bearing sediments on a laboratory scale. To do so, an electrical resistivity tomography (ERT) array was developed and mounted in a large reservoir simulator (LARS). For the first time, the ERT array was applied to hydrate-saturated sediment samples under controlled temperature, pressure, and hydrate saturation conditions on a laboratory scale. Typically, the pore space of (marine) sediments is filled with electrically conductive brine. Because hydrates constitute an electrical insulator, significant contrasts in the electrical properties of the pore space emerge during hydrate formation and dissociation. Frequent measurements during hydrate formation experiments permit the recording of the spatial resistivity distribution inside LARS. These data sets are used as input for a new data-processing routine that transfers the spatial resistivity distribution into the spatial distribution of hydrate saturation. Thus, changes in local hydrate saturation can be monitored with respect to space and time.
This study shows that the developed tomography yielded good data quality and resolved even small amounts of hydrate inside the sediment sample. The conversion algorithm transforming the spatial resistivity distribution into local hydrate saturation values yielded the best results using the Archie-var-phi relation. This approach considers the growing hydrate phase as part of the sediment frame, effectively reducing the sample's porosity. In addition, the tomographic measurements showed that fast lab-based hydrate formation processes cause small crystallites to form, which tend to recrystallize.
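The resistivity-to-saturation conversion can be sketched with a standard Archie-type relation. In this simplified reading of the "var-phi" idea, hydrate is counted as part of the sediment frame, so the brine-filled effective porosity shrinks to phi0*(1 - S_h); the cementation exponent and all numeric values are illustrative assumptions, not the study's calibrated parameters.

```python
def hydrate_saturation(rho_bulk, rho_brine, phi0, a=1.0, m=1.9):
    """Hydrate saturation S_h from bulk resistivity via an Archie-type law.

    Treating hydrate as part of the frame, the effective porosity is
    phi = phi0 * (1 - S_h) and the remaining pore space stays brine-filled,
    so rho_bulk = a * phi**(-m) * rho_brine. Inverting for phi and then S_h:
    """
    phi = (a * rho_brine / rho_bulk) ** (1.0 / m)
    return min(max(1.0 - phi / phi0, 0.0), 1.0)
```

With illustrative values (porosity 0.35, brine resistivity 0.3 Ohm·m), a bulk resistivity equal to the hydrate-free Archie prediction gives S_h near zero, and higher resistivities map to higher saturations, mirroring the contrast that the ERT array exploits.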
Furthermore, hydrate dissociation experiments via depressurization were conducted in order to mimic the 2007/2008 Mallik field trial. Some patterns in gas and water flow could be reproduced, even though some setup-related limitations arose.
In two additional long-term experiments, the feasibility and performance of CO2-CH4 hydrate exchange reactions were studied in LARS. The tomographic system was used to monitor the spatial hydrate distribution during the hydrate formation stage. During the subsequent CO2 injection, the tomographic array made it possible to follow the CO2 migration front inside the sediment sample and helped to identify the CO2 breakthrough.
The plasmon resonance of metal nanoparticles determines their optical response in the visible spectral range. Many details such as the electronic properties of gold near the particle surface and the local environment of the particles influence the spectra. We show how the cheap but highly precise fabrication of composite nanolayers by spin-assisted layer-by-layer deposition of polyelectrolytes can be used to investigate the spectral response of gold nanospheres (GNS) and gold nanorods (GNR) in a self-consistent way, using the established Maxwell–Garnett effective medium (MGEM) theory beyond the limit of homogeneous media. We show that the dielectric function of gold nanoparticles differs from the bulk value and experimentally characterize the shape and the surrounding of the particles thoroughly by SEM, AFM and ellipsometry. Averaging the dielectric functions of the layered surrounding by an appropriate weighting with the electric field intensity yields excellent agreement for the spectra of several nanoparticles and nanorods with various cover-layer thicknesses.
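The Maxwell–Garnett effective-medium relation used in such analyses can be written down compactly. The sketch below is for spherical inclusions only (rods require depolarization factors), and the permittivity values in the usage note are illustrative, not those measured in the study.

```python
def maxwell_garnett(eps_p, eps_m, f):
    """Effective permittivity of a volume fraction f of spherical particles
    (permittivity eps_p, possibly complex) embedded in a host medium eps_m:

        eps_eff = eps_m * (eps_p + 2*eps_m + 2*f*(eps_p - eps_m))
                        / (eps_p + 2*eps_m -   f*(eps_p - eps_m))
    """
    num = eps_p + 2.0 * eps_m + 2.0 * f * (eps_p - eps_m)
    den = eps_p + 2.0 * eps_m - f * (eps_p - eps_m)
    return eps_m * num / den
```

The limits behave as expected: f = 0 returns the host permittivity and f = 1 the inclusion permittivity; for gold nanospheres, eps_p would be the (complex, wavelength-dependent) dielectric function of gold.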
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. We investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on working memory performance. Young adults were exposed to breaks involving (i) eyes-open resting, (ii) listening to music and (iii) playing the video game “Angry Birds” before performing the n-back working memory task. Based on linear mixed-effects modeling, we found that playing the “Angry Birds” video game during a short learning break led to a decline in task performance over the course of the task as compared to eyes-open resting and listening to music, although overall task performance was not impaired. This effect was associated with high levels of daily mind wandering and low self-reported ability to concentrate. These findings indicate that video games can negatively affect working memory performance over time when played in between learning tasks. We suggest further investigation of these effects because of their relevance to everyday activity.
We discuss our most recent findings on the diffuse X-ray emission within Wolf-Rayet (WR) nebulae. The best-quality X-ray observations of these objects are those performed by XMM-Newton and Chandra towards S 308, NGC 2359, and NGC 6888. Even though these three WR nebulae might have different formation scenarios, they all share similar characteristics: i) the main plasma temperature of the X-ray-emitting gas is T = [1–2]×10^6 K; ii) the diffuse X-ray emission is confined inside the [O iii] shell; and iii) their X-ray luminosities and electron densities in the 0.3–2.0 keV energy range are LX ≈ 10^33–10^34 erg s^-1 and ne ≈ 0.1–1 cm^-3. These properties and the nebular-like abundances of the hot gas suggest that mixing and/or thermal conduction plays an important role in reducing the temperature of the hot bubble.
Efficacy, Safety & Modification of Albuminuria in Type 2 Diabetes Subjects with Renal Disease with LINAgliptin (MARLINA-T2D), a multicentre, multinational, randomized, double-blind, placebo-controlled, parallel-group, phase 3b clinical trial, aims to further define the potential renal effects of dipeptidyl peptidase-4 inhibition beyond glycaemic control. A total of 350 eligible individuals with inadequately controlled type 2 diabetes and evidence of renal disease are planned to be randomized in a 1:1 ratio to receive either linagliptin 5 mg or placebo in addition to their stable glucose-lowering background therapy for 24 weeks. Two predefined main endpoints will be tested in a hierarchical manner: (1) change from baseline in glycated haemoglobin and (2) time-weighted average of the percentage change from baseline in urinary albumin-to-creatinine ratio. Both endpoints are sufficiently powered to test for superiority versus placebo after 24 weeks with α = 0.05. MARLINA-T2D is the first trial of its class to prospectively explore both the glucose- and albuminuria-lowering potential of a dipeptidyl peptidase-4 inhibitor in patients with type 2 diabetes and evidence of renal disease.
Children’s poor performance on object relative clauses has been explained in terms of intervention locality. This approach predicts that object relatives with a full DP head and an embedded pronominal subject are easier than object relatives in which both the head noun and the embedded subject are full DPs. This prediction is shared by other accounts formulated to explain processing mechanisms. We conducted a visual-world study designed to test the off-line comprehension and on-line processing of object relatives in German-speaking 5-year-olds. Children were tested on three types of object relatives, all having a full DP head noun and differing with respect to the type of nominal phrase that appeared in the embedded subject position: another full DP, a 1st-person pronoun, or a 3rd-person pronoun. Grammatical skills and memory capacity were also assessed in order to see whether and how they affect children’s performance. Most accurately processed were object relatives with a 1st-person pronoun, independently of children’s language and memory skills. Performance on object relatives with two full DPs was overall more accurate than on object relatives with a 3rd-person pronoun. In the former condition, children with stronger grammatical skills accurately processed the structure, and their memory abilities determined how fast they were; in the latter condition, children only processed the structure accurately if they were strong both in their grammatical skills and in their memory capacity. The results are discussed in the light of accounts that predict different pronoun effects like the ones we find, depending on the referential properties of the pronouns. We then discuss which roles language and memory abilities might play in processing object relatives with various embedded nominal phrases.
In this paper we propose an algorithm to distinguish between light- and heavy-tailed probability laws underlying random datasets. The idea of the algorithm, which is visual and easy to implement, is to check whether the underlying law belongs to the domain of attraction of the Gaussian or of a non-Gaussian stable distribution by examining its rate of convergence. The method makes it possible to discriminate between stable and various non-stable distributions, and to differentiate between distributions that appear identical according to the standard Kolmogorov-Smirnov test. In particular, it helps to distinguish between stable and Student's t probability laws, as well as between stable and tempered stable laws, cases which are considered in the literature as very cumbersome. Finally, we illustrate the procedure on plasma data to identify cases with the so-called L-H transition.
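The convergence-rate idea can be sketched as follows: for data in the domain of attraction of an alpha-stable law, sums of n samples scale like n**(1/alpha), so the slope of log(spread of block sums) versus log(n) estimates 1/alpha, roughly 0.5 in the Gaussian domain and larger under heavy tails. This is an illustrative sketch under that scaling assumption, not the authors' published algorithm.

```python
import numpy as np

def block_sum_exponent(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the scaling exponent 1/alpha from the growth of block sums.

    For each block size n, the data are cut into blocks of n samples, the
    blocks are summed, and the spread of the sums is measured robustly via
    the interquartile range. A log-log fit of spread against n then gives
    the exponent: ~0.5 for the Gaussian domain of attraction, ~1 for Cauchy.
    """
    x = np.asarray(x, dtype=float)
    x = x - np.median(x)                 # crude centering
    sizes, spreads = [], []
    for n in block_sizes:
        k = len(x) // n
        if k < 20:                       # too few blocks to be meaningful
            continue
        sums = x[: k * n].reshape(k, n).sum(axis=1)
        q75, q25 = np.percentile(sums, [75, 25])
        sizes.append(n)
        spreads.append(q75 - q25)
    return np.polyfit(np.log(sizes), np.log(spreads), 1)[0]
```

For a large Gaussian sample the estimated exponent comes out near 0.5, while for Cauchy-distributed data it comes out near 1, separating the two domains of attraction even though both distributions are bell-shaped around zero.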
Participants of this workshop will be confronted exemplarily with a considerable inconsistency of global Informatics education at lower secondary level. More importantly, they are invited to contribute actively on this issue in the form of short case studies of their countries. Until now, very few countries have been successful in implementing Informatics or Computing at primary and lower secondary level. The spectrum from digital literacy to informatics, particularly as a discipline in its own right, has not really achieved a breakthrough and seems to be underrepresented for these age groups. The goal of this workshop is not only to discuss the anamnesis and diagnosis of this fragmented field, but also to discuss and suggest viable forms of therapy in the form of setting educational standards. Making visible good practices in some countries and comparing successful approaches are rewarding tasks for this workshop. Discussing and defining common educational standards on a transcontinental level for the age group of 14- to 15-year-old students in a readable, assessable, and acceptable form should keep the participants of this workshop active beyond the limited time at the workshop.
Thermal permafrost degradation and coastal erosion in the Arctic remobilize substantial amounts of organic carbon (OC) and nutrients which have accumulated in late Pleistocene and Holocene unconsolidated deposits. Permafrost vulnerability to thaw subsidence, collapsing coastlines, and irreversible landscape change is largely due to the presence of large amounts of massive ground ice such as ice wedges. However, ground ice has not, until now, been considered to be a source of dissolved organic carbon (DOC), dissolved inorganic carbon (DIC) and other elements which are important for ecosystems and carbon cycling. Here we show, using biogeochemical data from a large number of different ice bodies throughout the Arctic, that ice wedges have the greatest potential for DOC storage, with a maximum of 28.6 mg L-1 (mean: 9.6 mg L-1). Variation in DOC concentration is positively correlated with and explained by the concentrations and relative amounts of typically terrestrial cations such as Mg2+ and K+. DOC sequestration into ground ice was more effective during the late Pleistocene than during the Holocene, which can be explained by rapid sediment and OC accumulation, the prevalence of more easily degradable vegetation, and immediate incorporation into permafrost. We assume that pristine snowmelt is able to leach considerable amounts of well-preserved and highly bioavailable DOC as well as other elements from surface sediments, which are rapidly frozen and stored in ground ice, especially in ice wedges, even before further degradation. We found that ice wedges in the Yedoma region represent a significant DOC (45.2 Tg) and DIC (33.6 Tg) pool in permafrost areas and a freshwater reservoir of 4200 km^3. This study underlines the need to discriminate between particulate OC and DOC to assess the availability and vulnerability of the permafrost carbon pool for ecosystems and climate feedback upon mobilization.
In the Bateson–Dobzhansky–Muller model of genetic incompatibilities post-zygotic gene-flow barriers arise by fixation of novel alleles at interacting loci in separated populations. Many such incompatibilities are polymorphic in plants, implying an important role for genetic drift or balancing selection in their origin and evolution. Here we show that NPR1 and RPP5 loci cause a genetic incompatibility between the incipient species Capsella grandiflora and C. rubella, and the more distantly related C. rubella and C. orientalis. The incompatible RPP5 allele results from a mutation in C. rubella, while the incompatible NPR1 allele is frequent in the ancestral C. grandiflora. Compatible and incompatible NPR1 haplotypes are maintained by balancing selection in C. grandiflora, and were divergently sorted into the derived C. rubella and C. orientalis. Thus, by maintaining differentiated alleles at high frequencies, balancing selection on ancestral polymorphisms can facilitate establishing gene-flow barriers between derived populations through lineage sorting of the alternative alleles.
The evolution of massive stars is strongly influenced by their initial chemical composition. We have computed rapidly-rotating massive star models with low metallicity (∼1/50 Z⊙) that evolve chemically homogeneously and have optically-thin winds during the main sequence evolution. These luminous and hot stars are predicted to emit intense mid- and far-UV radiation, but without the broad emission lines that characterize WR stars with optically-thick winds. We show that such Transparent Wind UV-Intense (TWUIN) stars may be responsible for the high number of He ii ionizing photons observed in metal-poor dwarf galaxies, such as IZw 18. We find that these TWUIN stars are possible long-duration gamma-ray burst progenitors.
The neurophysiological and behavioral correlates of action-related language processing have been debated for a long time. A precursor in this field was the study by Buccino et al. (2005) combining transcranial magnetic stimulation (TMS) and behavioral measures (reaction times, RTs) to study the effect of listening to hand- and foot-related sentences. In the TMS experiment, the authors showed a decrease of motor evoked potentials (MEPs) recorded from hand muscles when processing hand-related verbs as compared to foot-related verbs. Similarly, MEPs recorded from leg muscles decreased when participants processed foot-related as compared to hand-related verbs. In the behavioral experiment, using the same stimuli and a semantic decision task, the authors found slower RTs when the participants used the body effector (hand or foot) involved in the actual execution of the action expressed by the presented verb to give their motor responses. These findings were interpreted as an interference effect due to a simultaneous involvement of the motor system in both a language and a motor task. Our replication aimed to enlarge the sample size and replicate the findings with higher statistical power. The TMS experiment showed a significant modulation of hand MEPs, but in the sense of a motor facilitation when processing hand-related verbs. By contrast, the behavioral experiment did not show significant results. The results are discussed within the general debate on the time-course of the modulation of motor cortex during implicit and explicit language processing and in relation to the studies on action observation/understanding.
Dual-normal logic programs
(2015)
Disjunctive Answer Set Programming is a powerful declarative programming paradigm with complexity beyond NP. Identifying classes of programs for which the consistency problem is in NP is of interest from a theoretical standpoint and can potentially lead to improvements in the design of answer set programming solvers. One such class consists of dual-normal programs, where the number of positive body atoms in proper rules is at most one. Unlike other classes of programs, dual-normal programs have received little attention so far. In this paper we study this class. We relate dual-normal programs to propositional theories and to normal programs by presenting several inter-translations. With the translation from dual-normal to normal programs at hand, we introduce the novel class of body-cycle free programs, which are in many respects dual to head-cycle free programs. We establish the expressive power of dual-normal programs in terms of SE- and UE-models, and compare them to normal programs. We also discuss the complexity of deciding whether dual-normal programs are strongly and uniformly equivalent.
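The syntactic restriction that defines the class is easy to check mechanically. A minimal sketch, assuming a hypothetical rule representation as triples of head atoms, positive body atoms, and negative body atoms (the paper itself gives no code):

```python
# Sketch of the dual-normal syntactic check. A rule is a triple
# (head_atoms, positive_body_atoms, negative_body_atoms); this
# representation is an assumption for illustration only.
def is_dual_normal(program):
    """True iff every proper rule (non-empty head) has at most one
    positive body atom."""
    for head, pos_body, neg_body in program:
        if head and len(pos_body) > 1:  # proper rule with >1 positive atom
            return False
    return True

# a | b :- c, not d.   -- one positive body atom: allowed
# e :- a, b.           -- two positive body atoms: not dual-normal
prog_ok = [({"a", "b"}, {"c"}, {"d"})]
prog_bad = [({"e"}, {"a", "b"}, set())]
print(is_dual_normal(prog_ok))   # True
print(is_dual_normal(prog_bad))  # False
```

Constraints (rules with an empty head) are exempt from the restriction, which is why the check only fires for proper rules.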
Carbon-rich Wolf-Rayet stars are efficient carbon dust makers. Despite the strong evidence for dust formation in these objects provided by infrared thermal emission from dust, the routes to nucleation and condensation and the physical conditions required for dust production are still poorly understood. We discuss here the potential routes to carbon dust and the possible locations conducive to dust formation in the colliding winds of WC binaries.
Dynamic C and N stocks
(2015)
The drainage and cultivation of fen peatlands create complex small-scale mosaics of soils with extremely variable soil organic carbon (SOC) stocks and groundwater levels (GWLs). To date, the significance of such sites as sources or sinks for greenhouse gases such as CO2 and CH4 is still unclear, especially if the sites are used for cropland. As individual control factors such as GWL fail to account for this complexity, holistic approaches combining gas fluxes with the underlying processes are required to understand the carbon (C) gas exchange of drained fens. It can be assumed that the stocks of SOC and N located above the variable GWL - defined as dynamic C and N stocks - play a key role in the regulation of the plant- and microbially mediated CO2 fluxes in these soils and, inversely, for CH4. To test this assumption, the present study analysed the C gas exchange (gross primary production - GPP; ecosystem respiration - R-eco; net ecosystem exchange - NEE; CH4) of maize using manual chambers for 4 years. The study sites were located near Paulinenaue, Germany, where we selected three soil types representing the full gradient of GWL and SOC stocks (0-1 m) of the landscape: (a) Haplic Arenosol (AR; 8 kg C m(-2)); (b) Mollic Gleysol (GL; 38 kg C m(-2)); and (c) Hemic Histosol (HS; 87 kg C m(-2)). Daily GWL data were used to calculate dynamic SOC (SOCdyn) and N (N-dyn) stocks.
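The dynamic-stock idea, counting only the SOC stored above the current groundwater level, can be sketched as follows; the layer stocks and the layer discretisation are hypothetical illustration values, not the study's data:

```python
# Illustrative sketch (not the authors' code): the dynamic SOC stock is
# the portion of the soil profile's SOC lying above the groundwater
# level (GWL), i.e. in the aerobic zone.
def soc_dynamic(layer_stocks, layer_thickness, gwl_depth):
    """Sum the SOC (kg C m^-2) stored above gwl_depth (m below surface).

    layer_stocks: SOC per layer, top-down; layer_thickness: uniform
    layer height in m; partially drained layers count proportionally.
    """
    stock, top = 0.0, 0.0
    for s in layer_stocks:
        bottom = top + layer_thickness
        if gwl_depth >= bottom:          # layer fully above the GWL
            stock += s
        elif gwl_depth > top:            # layer only partly above the GWL
            stock += s * (gwl_depth - top) / layer_thickness
        top = bottom
    return stock

# Hypothetical 0-1 m profile in 10 x 0.1 m layers, water table at 0.45 m.
layers = [1.2, 1.0, 0.9, 0.8, 0.8, 0.7, 0.7, 0.7, 0.6, 0.6]
print(soc_dynamic(layers, 0.1, 0.45))  # SOC above the 0.45 m water table
```

Applying this to daily GWL readings yields a daily SOCdyn series, as used in the study's regressions.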
Average annual NEE differed considerably among sites, ranging from 47 +/- 30 g C m(-2) yr(-1) in AR to -305 +/- 123 g C m(-2) yr(-1) in GL and -127 +/- 212 g C m(-2) yr(-1) in HS. While static SOC and N stocks showed no significant effect on C fluxes, SOCdyn and N-dyn and their interaction with GWL strongly influenced the C gas exchange, particularly NEE and the GPP : R-eco ratio. Moreover, based on nonlinear regression analysis, 86% of NEE variability was explained by GWL and SOCdyn. The observed high relevance of dynamic SOC and N stocks in the aerobic zone for plant and soil gas exchange likely originates from the effects of GWL-dependent N availability on C formation and transformation processes in the plant-soil system, which promote CO2 input via GPP more than CO2 emission via R-eco.
The process-oriented approach of dynamic C and N stocks is a promising, potentially generalisable method for system-oriented investigations of the C gas exchange of groundwater-influenced soils and could be expanded to other nutrients and soil characteristics. However, in order to assess the climate impact of arable sites on drained peatlands, it is always necessary to consider the entire range of groundwater-influenced mineral and organic soils and their respective areal extent within the soil landscape.
Welfare states and policies have changed greatly over the past decades, mostly characterized by retrenchments in terms of government spending or in terms of restricted access to certain benefits. In the area of family policies, however, a lot of countries have simultaneously expanded provisions and transfers for families. Bringing together the macro analysis of policy variation and household income changes on the micro-level, the main research question of the dissertation is to what extent economic consequences following separation and divorce in families with children have changed between the 1980s and the 2000s in Germany and the United States. The second research question of the dissertation regards the differences in dissolution outcomes between married and cohabiting parents in Germany.
The dissertation thus aims to link institutional regulations of welfare states with the actual income situation of families. To achieve this, a research design was developed that had not previously been used to analyse the economic consequences of family dissolution. It draws on the two longest-running panel datasets, the German Socio-Economic Panel (GSOEP) and the US American Panel Study of Income Dynamics (PSID). The analytic strategy applied to estimate the effects of family dissolution on household income is a difference-in-differences design combined with coarsened exact matching (CEM).
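The core difference-in-differences comparison behind this design can be illustrated with a minimal sketch; all income figures are invented for illustration and stand in for matched treated (separating) and control (intact) households:

```python
# Minimal difference-in-differences sketch (illustration only, not the
# dissertation's estimation code, which combines DiD with CEM matching).
def did(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group minus change in the control group."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre))

# Hypothetical equivalised household incomes before/after the event year.
effect = did([30000, 28000], [21000, 20000],   # separating households
             [29000, 31000], [30000, 32000])   # matched intact households
print(effect)  # -9500.0: income change attributable to dissolution
```

Subtracting the control group's change nets out income trends that affect all households, isolating the dissolution effect.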
To begin with, the dissertation confirmed many findings of previous research, for example regarding the gender differences in family dissolution outcomes. Mothers experience clearly higher relative income losses and consequently higher risks of poverty than fathers. This finding is universal, that is, it holds for both countries, for all time periods observed, and for all measures of economic outcome that were employed. Another confirmed finding is the higher level of welfare state intervention in Germany compared to the United States.
The dissertation also revealed a number of novel findings. The results show that the expansion of family policies in Germany over time has not been accompanied by substantially decreasing income losses for mothers. Though income losses have slightly decreased over time, they have become more persistent during the years following family dissolution. The impact of the German welfare state has meanwhile been quite stable.
American mothers’ income losses were slightly smaller than those of German mothers, although only during the 1980s were their relative losses clearly lower. Moreover, American mothers did not recover from their income losses during the 2000s as well as they did during the 1980s. For them, the 1996 welfare reform brought a considerable decrease in welfare state support. Accordingly, the results for American mothers can certainly be described as a shift from public to private provision.
The general finding of previous studies that fathers do not suffer income losses, or at most rather moderate ones compared to mothers, can be confirmed. Nevertheless, both German and US American fathers have faced a deterioration in the economic consequences of family dissolution over time. German fathers’ relative income changes are still positive, though they have decreased over time. One reason for this decrease is the increasing loss of partner earnings following union dissolution. Among American fathers, too, income gains still prevail in the year of family dissolution. Two years after dissolution, however, they have faced income losses since the 1980s already, and these losses have increased considerably over time.
Zooming in on Germany, family dissolution outcomes by marital status show negligible differences between cohabiting and married mothers in disposable income, but considerable differences in losses of income before taxes and transfers. It is the impact of the welfare state that equalizes the differences in income losses between these two groups of mothers. For married mothers, losses are not as high in the year of the event, but they have difficulty recovering from them. Without the income buffering of the welfare state, married mothers would, three years after family dissolution, be left with relative income losses twice as high as those of cohabiting mothers.
Compared to mothers, differences between married and cohabiting fathers are visible in changes of income before as well as after taxes and transfers. The welfare state does not alter the difference between the two groups of fathers. With regard to both income concepts, cohabiting fathers fare worse than married fathers. Cohabiting fathers suffer moderate income losses of disposable income while married fathers experience moderate income gains. Accounting for support payments is decisive for fathers’ income changes. If these payments are not deducted from disposable income, both married and cohabiting fathers experience gains in disposable income following family dissolution.
Effect of mass wasting on soil organic carbon storage and coastal erosion in permafrost environments
(2015)
Accelerated permafrost thaw under the warming Arctic climate can have a significant impact on Arctic landscapes. Areas underlain by permafrost store high amounts of soil organic carbon (SOC). Permafrost disturbances may contribute to increased release of carbon dioxide and methane to the atmosphere. Coastal erosion, amplified through a decrease in Arctic sea-ice extent, may also mobilise SOC from permafrost. Large expanses of permafrost-affected land are characterised by intense mass-wasting processes such as solifluction, active-layer detachments and retrogressive thaw slumping. Our aim is to assess the influence of mass wasting on SOC storage and coastal erosion.
We studied SOC storage on Herschel Island by analysing active-layer and permafrost samples, and compared non-disturbed sites to those characterised by mass wasting. Mass-wasting sites showed decreased SOC storage and material compaction, whereas sites characterised by material accumulation showed increased storage. The SOC storage on Herschel Island is also significantly correlated to catenary position and other slope characteristics. We estimated SOC storage on Herschel Island to be 34.8 kg C m-2. This is comparable to similar environments in northwest Canada and Alaska.
Coastal erosion was analysed using high-resolution digital elevation models (DEMs). Two lidar surveys of the Yukon Coast were carried out in 2012 and 2013. Two DEMs with 1 m horizontal resolution were generated and used to analyse elevation changes along the coast. The results indicate considerable spatial variability in short-term coastline erosion and progradation. This high variability was related to the presence of mass-wasting processes. Erosion and deposition extremes were recorded where retrogressive thaw slump (RTS) activity was most pronounced. Released sediment can be transported by longshore drift and affects not only the coastal processes in situ but also those along adjacent coasts.
We also calculated volumetric coastal erosion for Herschel Island by comparing a stereo-photogrammetrically derived DEM from 2004 with the lidar DEMs. We compared this volumetric erosion to planimetric erosion, which was based on coastlines digitised from satellite imagery. We found a complex relationship between planimetric and volumetric coastal erosion, which we attribute to the frequent occurrence of mass-wasting processes along the coasts. Our results suggest that volumetric erosion corresponds better with environmental forcing and is more suitable for the estimation of organic carbon fluxes than planimetric erosion.
Mass wasting can decrease SOC storage by several mechanisms. Increased aeration following disturbance may increase microbial activity, which accelerates organic matter decomposition. New hydrological conditions that follow the mass-wasting event can cause leaching of freshly exposed material. Organic-rich material can also be removed directly into the sea or into a lake. On the other hand, the accumulation of mobilised material can result in increased SOC storage. Mass-wasting related accumulations of mobilised material can significantly impact coastal erosion in situ or along the adjacent coast by longshore drift. Therefore, coastline movement observations cannot completely resolve the actual sediment loss due to these temporary accumulations. The predicted increase of mass-wasting activity in the course of Arctic warming may increase SOC mobilisation and coastal-erosion-induced carbon fluxes.
40Ar/39Ar in situ UV laser ablation of white mica, Rb–Sr mineral isochrons and zircon fission track dating were applied to determine ages of very low- to low-grade metamorphic processes at 3.5±0.4 kbar, 280±30°C in the Avalonian Mira terrane of SE Cape Breton Island (Nova Scotia). The Mira terrane comprises Neoproterozoic volcanic-arc rocks overlain by Cambrian sedimentary rocks. Crystallization of metamorphic white mica was dated in six metavolcanic samples by 40Ar/39Ar spot age peaks between 396±3 and 363±14 Ma. Rb–Sr systematics of minerals and mineral aggregates yielded two isochrons at 389±7 Ma and 365±8 Ma, corroborating equilibrium conditions during very low- to low-grade metamorphism. The dated white mica is oriented parallel to foliations produced by sinistral strike-slip faulting and/or folding related to the Middle–Late Devonian transpressive assembly of Avalonian terranes during convergence and emplacement of the neighbouring Meguma terrane. Exhumation occurred earlier in the NW Mira terrane than in the SE. Transpression was related to the closure of the Rheic Ocean between Gondwana and Laurussia by NW-directed convergence. The 40Ar/39Ar spot age spectra also display relict age peaks at 477–465 Ma, 439 Ma and 420–428 Ma attributed to deformation and fluid access, possibly related to the collision of Avalonia with composite Laurentia or to earlier Ordovician–Silurian rifting. Fission track ages of zircon from Mira terrane samples range between 242±18 and 225±21 Ma and reflect late Palaeozoic reburial and reheating close to previous peak metamorphic temperatures under fluid-absent conditions during rifting prior to opening of the Central Atlantic Ocean.
The term “bilateral deficit” (BLD) has been used to describe a reduction in performance during bilateral contractions when compared to the sum of identical unilateral contractions. In old age, maximal isometric force production (MIF) decreases and BLD increases, indicating the need for training interventions to mitigate this impact in seniors. In a cross-sectional approach, we examined age-related differences in MIF and BLD in young (age: 20–30 years) and old adults (age: >65 years). In addition, a randomized-controlled trial was conducted to investigate training-specific effects of resistance vs. balance training on MIF and BLD of the leg extensors in old adults. Subjects were randomly assigned to resistance training (n = 19), balance training (n = 14), or a control group (n = 20). Bilateral heavy-resistance training for the lower extremities was performed for 13 weeks (3 × / week) at 80% of the one repetition maximum. Balance training was conducted using predominantly unilateral exercises on wobble boards, soft mats, and uneven surfaces for the same duration. Pre- and post-tests included uni- and bilateral measurements of maximal isometric leg extension force. At baseline, young subjects outperformed older adults in uni- and bilateral MIF (all p < .001; d = 2.61–3.37) and in measures of BLD (p < .001; d = 2.04). We also found significant increases in uni- and bilateral MIF after resistance training (all p < .001, d = 1.8-5.7) and balance training (all p < .05, d = 1.3-3.2). In addition, BLD decreased following resistance (p < .001, d = 3.4) and balance training (p < .001, d = 2.6). It can be concluded that both training regimens resulted in increased MIF and decreased BLD of the leg extensors (HRT-group more than BAL-group), almost reaching the levels of young adults.
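The BLD is typically quantified with a bilateral index relating the bilateral force to the sum of the two unilateral forces. A minimal sketch following the common convention (the abstract does not state its exact formula; the force values in newtons are hypothetical):

```python
# Bilateral index sketch: negative values indicate a bilateral deficit
# (bilateral force below the sum of the unilateral forces). This is the
# widely used convention, assumed here for illustration.
def bilateral_index(f_bilateral, f_left, f_right):
    """100 * bilateral / (left + right) - 100, in percent."""
    return 100.0 * f_bilateral / (f_left + f_right) - 100.0

# Hypothetical maximal isometric leg-extension forces in newtons.
print(bilateral_index(1800.0, 1000.0, 950.0))  # negative -> deficit
```

By this convention, a value of 0 means bilateral force exactly equals the unilateral sum, so a shrinking magnitude after training corresponds to the decreased BLD reported above.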
Introduction
We investigated blood glucose (BG) and hormone response to aerobic high-intensity interval exercise (HIIE) and moderate continuous exercise (CON) matched for mean load and duration in type 1 diabetes mellitus (T1DM).
Material and Methods
Seven trained male subjects with T1DM performed a maximal incremental exercise test and HIIE and CON at 3 different mean intensities below (A) and above (B) the first lactate turn point and below the second lactate turn point (C) on a cycle ergometer. Subjects were adjusted to the ultra-long-acting insulin Degludec (Tresiba/Novo Nordisk, Denmark). Before exercise, standardized meals were administered, and the short-acting insulin dose was reduced by 25% (A), 50% (B), and 75% (C), depending on mean exercise intensity. During exercise, BG, adrenaline, noradrenaline, dopamine, cortisol, glucagon, insulin-like growth factor-1, blood lactate, heart rate, and gas exchange variables were measured. For 24 h after exercise, interstitial glucose was measured by a continuous glucose monitoring system.
Results
BG decrease during HIIE was significantly smaller for B (p = 0.024) and tended to be smaller for A and C compared to CON. No differences were found for post-exercise interstitial glucose, acute hormone response, and carbohydrate utilization between HIIE and CON for A, B, and C. In HIIE, blood lactate for A (p = 0.006) and B (p = 0.004) and respiratory exchange ratio for A (p = 0.003) and B (p = 0.003) were significantly higher compared to CON but not for C.
Conclusion
Hypoglycemia did not occur during or after HIIE and CON when ultra-long-acting insulin was used and our methodological approach for exercise prescription was applied. HIIE led to a smaller BG decrease than CON, although both exercise modes were matched for mean load and duration, and despite the markedly higher peak workloads applied in HIIE. Therefore, HIIE and CON can be safely performed in T1DM.
Bioturbation contributes to soil formation and ecosystem functioning. With respect to the active transport of matter by voles, bioturbation may be considered as a very dynamic process among those shaping soil formation and biogeochemistry. The present study aimed at characterizing and quantifying the effects of bioturbation by voles on soil water relations and carbon and nitrogen stocks. Bioturbation effects were examined based on a field setup in a Luvic Arenosol comprising eight 50 x 50 m enclosures with greatly different numbers of common voles (Microtus arvalis L., ca. 35-150 individuals ha(-1) mth(-1)). Eleven key soil variables were analyzed: bulk density, infiltration rate, saturated hydraulic conductivity, water holding capacity, contents of soil organic carbon (SOC) and total nitrogen (N), CO2 emission potential, C/N ratio, the stable isotopic signatures of C-13 and N-15, and pH. The highest vole densities were hypothesized to cause significant changes in some variables within 21 months. Results showed that land history still had a major influence, as eight key variables displayed an additional or sole influence of topography. However, the delta N-15 at depths of 10-20 and 20-30 cm decreased and increased with increasing vole numbers, respectively. Also, the CO2 emission potential from soil collected at a depth of 15-30 cm decreased, and the C/N ratio at 5-10 cm depth narrowed with increasing vole numbers. These variables indicated a first influence of voles on the respective mineralization processes in some soil layers. Tendencies of vole activity homogenizing SOC and N contents across layers were not significant. The results of the other seven key variables did not confirm significant effects of voles. Thus overall, we found mainly a first response of variables that are indicative of changes in biogeochemical dynamics but not yet of those representing changes in pools.
Background: Habitual walking speed predicts many clinical conditions later in life, but it declines with age. However, which particular exercise intervention can minimize the age-related gait speed loss is unclear.
Purpose: Our objective was to determine the effects of strength, power, coordination, and multimodal exercise training on healthy old adults' habitual and fast gait speed.
Methods: We performed a computerized systematic literature search in PubMed and Web of Knowledge from January 1984 up to December 2014. Search terms included 'resistance training', 'power training', 'coordination training', 'multimodal training', and 'gait speed' (outcome term). Inclusion criteria were articles available in full text, publication within the past 30 years, human species, journal articles, clinical trials, randomized controlled trials, English as publication language, and subject age ≥65 years. The methodological quality of all eligible intervention studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. We computed weighted average standardized mean differences of the intervention-induced adaptations in gait speed using a random-effects model and tested for overall and individual intervention effects relative to no-exercise controls.
Results: A total of 42 studies (mean PEDro score of 5.0 +/- 1.2) were included in the analyses (2495 healthy old adults; age 74.2 years [64.4-82.7]; body mass 69.9 +/- 4.9 kg, height 1.64 +/- 0.05 m, body mass index 26.4 +/- 1.9 kg/m(2), and gait speed 1.22 +/- 0.18 m/s). The search identified only one power training study; therefore, the subsequent analyses focused only on the effects of resistance, coordination, and multimodal training on gait speed. The three types of intervention improved gait speed in the three experimental groups combined (n = 1297) by 0.10 m/s (+/- 0.12) or 8.4 % (+/- 9.7), with a large effect size (ES) of 0.84. Resistance (24 studies; n = 613; 0.11 m/s; 9.3 %; ES: 0.84), coordination (eight studies, n = 198; 0.09 m/s; 7.6 %; ES: 0.76), and multimodal training (19 studies; n = 486; 0.09 m/s; 8.4 %, ES: 0.86) increased gait speed statistically and similarly.
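The pooling step behind such figures is an inverse-variance weighted mean of the study effect sizes; a minimal sketch with invented effect sizes and variances (under a random-effects model, a between-study variance tau^2 is added to each study's sampling variance, as sketched here):

```python
# Inverse-variance pooling sketch (illustration only, not the review's
# analysis code): each study is weighted by 1 / (variance + tau^2).
# With tau2 = 0 this reduces to a fixed-effect pooled estimate.
def weighted_mean_es(effects, variances, tau2=0.0):
    """Pooled effect size as an inverse-variance weighted mean."""
    weights = [1.0 / (v + tau2) for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical standardized mean differences and variances of 3 studies.
print(weighted_mean_es([0.8, 0.9, 0.7], [0.04, 0.09, 0.02], tau2=0.01))
```

Precise studies (small variance) thus pull the pooled estimate toward their own effect sizes, which is why study weighting matters for the overall ES reported above.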
Conclusions: Commonly used exercise interventions can functionally and clinically increase habitual and fast gait speed and help slow the loss of gait speed or delay its onset.
Graph databases provide a natural way of storing and querying graph data. In contrast to relational databases, queries over graph databases can refer directly to the graph structure of the data. For example, graph pattern matching can be employed to formulate queries over graph data.
However, as in relational databases, running complex queries can be very time-consuming and can ruin the interactivity with the database. One possible approach to this performance issue is to employ database views, which consist of pre-computed answers to common and frequently stated queries. But to ensure that database views yield results consistent with the data from which they are derived, these views must be updated before queries make use of them. Such view maintenance must be performed efficiently; otherwise the effort to create and maintain views may not pay off compared to processing the queries directly on the underlying data.
At the time of writing, graph databases do not support database views and are limited to graph indexes, which index the nodes and edges of the graph data for fast query evaluation but cannot maintain pre-computed answers to complex queries over graph data. Moreover, the maintenance of database views in graph databases becomes even more challenging when negation and recursion have to be supported, as in deductive relational databases.
In this technical report, we present an approach for efficient and scalable incremental graph view maintenance for deductive graph databases. The main concept of our approach is a generalized discrimination network that can model nested graph conditions, including negative application conditions and recursion, which specify the content of graph views derived from the graph data stored by graph databases. The discrimination network allows generic maintenance rules, expressed as graph transformations, to be derived automatically for maintaining graph views when the underlying graph data change. We evaluate our approach in a case study using multiple data sets derived from open source projects.
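The benefit of incremental maintenance over recomputation can be illustrated with a toy view that is far simpler than the report's discrimination networks: materializing all two-hop paths of a graph and updating the materialized set on each edge insertion (all names here are hypothetical, not the report's API):

```python
# Toy incremental view maintenance sketch: the "view" materializes all
# pairs (a, c) connected by a two-hop path a->b->c. Inserting an edge
# updates the view in place instead of recomputing it from scratch.
class TwoHopView:
    def __init__(self):
        self.out = {}     # node -> set of successors
        self.inc = {}     # node -> set of predecessors
        self.view = set() # materialized two-hop pairs (a, c)

    def add_edge(self, a, b):
        self.out.setdefault(a, set()).add(b)
        self.inc.setdefault(b, set()).add(a)
        # New edge a->b extends paths ending at a and starting at b.
        for p in self.inc.get(a, set()):
            self.view.add((p, b))
        for s in self.out.get(b, set()):
            self.view.add((a, s))

v = TwoHopView()
v.add_edge("x", "y")
v.add_edge("y", "z")
print(v.view)  # {('x', 'z')}
```

Each insertion touches only the neighbourhood of the new edge, which is the essential property the report generalizes to nested conditions with negation and recursion. Deletions, which require tracking how often each pair is derived, are omitted here.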