In low-accumulation regions, the reliability of δ¹⁸O-derived temperature signals from ice cores within the Holocene is unclear, primarily due to the small climate changes relative to the intrinsic noise of the isotopic signal. In order to learn about the representativity of single ice cores and to optimise future ice-core-based climate reconstructions, we studied the stable-water-isotope composition of firn at Kohnen Station, Dronning Maud Land, Antarctica. Analysing δ¹⁸O in two 50 m long snow trenches allowed us to create an unprecedented, two-dimensional image characterising the isotopic variations from the centimetre to the 100-metre scale. Our results show seasonal layering of the isotopic composition but also high horizontal isotopic variability caused by local stratigraphic noise. Based on the horizontal and vertical structure of the isotopic variations, we derive a statistical noise model which successfully explains the trench data. The model further allows one to determine an upper bound for the reliability of climate reconstructions conducted in our study region at seasonal to annual resolution, depending on the number and the spacing of the cores taken.
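The core message, that reconstruction reliability depends on the number of cores taken, rests on noise averaging, which can be sketched in a few lines. This toy simulation assumes Gaussian stratigraphic noise that is uncorrelated between cores, which is a simplification and not the paper's actual noise model:

```python
import random
import statistics

def stacked_noise_std(n_cores, signal=0.0, noise_std=1.0, n_trials=2000, seed=1):
    """Monte-Carlo sketch of how stratigraphic noise shrinks when
    several cores are averaged ('stacked'). Assumes the noise is
    Gaussian and uncorrelated between cores, a simplification of the
    statistical noise model described in the abstract."""
    rng = random.Random(seed)
    residuals = []
    for _ in range(n_trials):
        # each core sees the same signal plus its own stratigraphic noise
        stack = statistics.mean(
            signal + rng.gauss(0.0, noise_std) for _ in range(n_cores)
        )
        residuals.append(stack - signal)
    return statistics.stdev(residuals)

# Averaging four cores roughly halves the residual noise (1/sqrt(4)):
single = stacked_noise_std(1)
stacked4 = stacked_noise_std(4)
```

Under the uncorrelated-noise assumption, the residual noise falls as 1/√N, which is why both the number and the spacing (which controls correlation) of the cores matter.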
Background
Overweight and obesity are increasing health problems that are not restricted to adults only. Childhood obesity is associated with metabolic, psychological and musculoskeletal comorbidities. However, knowledge about the effect of obesity on the foot function across maturation is lacking. Decreased foot function with disproportional loading characteristics is expected for obese children. The aim of this study was to examine foot loading characteristics during gait of normal-weight, overweight and obese children aged 1-12 years.
Methods
A total of 10382 children aged one to twelve years were enrolled in the study. Finally, 7575 children (m/f: n = 3630/3945; 7.0 ± 2.9 yr; 1.23 ± 0.19 m; 26.6 ± 10.6 kg; BMI: 17.1 ± 2.4 kg/m²) were included for (complete case) data analysis. Children were categorized as normal-weight (≥3rd and <90th percentile; n = 6458), overweight (≥90th and <97th percentile; n = 746) or obese (>97th percentile; n = 371) according to the German reference system, which is based on age- and gender-specific body mass indices (BMI). Plantar pressure measurements were assessed during gait on an instrumented walkway. Contact area, arch index (AI), peak pressure (PP) and force-time integral (FTI) were calculated for the total, fore-, mid- and hindfoot. Data were analyzed descriptively (mean ± SD) followed by ANOVA/Welch test (according to homogeneity of variances: yes/no) for group differences according to BMI categorization (normal-weight, overweight, obese) and for each age group 1 to 12 yrs (post hoc Tukey-Kramer/Dunnett's C; alpha = 0.05).
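The categorization rule can be expressed as a small helper; this is a sketch only, and the percentile lookup against the German age- and gender-specific reference tables is assumed to happen elsewhere:

```python
def bmi_category(percentile):
    """Classify a child's BMI percentile using the study's cut-offs.
    The percentile itself is assumed to come from the German reference
    tables. The source gives overweight as <97th and obese as >97th;
    the 97th percentile itself is grouped with overweight here."""
    if percentile < 3:
        return "below 3rd percentile (not categorized)"
    if percentile < 90:
        return "normal-weight"
    if percentile <= 97:
        return "overweight"
    return "obese"
```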
Results
Mean walking velocity was 0.95 ± 0.25 m/s with no differences between normal-weight, overweight or obese children (p = 0.0841). Results show higher foot contact area, arch index, peak pressure and force-time integral in overweight and obese children (p < 0.001). Obese children showed 1.48-fold (1-year-olds) to 3.49-fold (10-year-olds) higher midfoot loading (FTI) compared with normal-weight children.
Conclusion
Additional body mass leads to higher overall load, with a disproportional impact on the midfoot area and longitudinal foot arch, showing characteristic foot loading patterns. The feet of one- and two-year-old children are already significantly affected. Childhood overweight and obesity are not compensated for by the musculoskeletal system. To avoid excessive foot loading, with its potential risk of discomfort or pain in childhood, prevention strategies should be developed and validated for children with a high body mass index and functional changes in the midfoot area. The presented plantar pressure values could additionally serve as reference data to identify suspicious foot loading patterns in children.
We present a summary of the current status of two inversion algorithms that are used in EARLINET (European Aerosol Research Lidar Network) for the inversion of data collected with EARLINET multiwavelength Raman lidars. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. Development of these two algorithms started in 2000 when EARLINET was founded. The algorithms are based on a manually controlled inversion of optical data which allows for detailed sensitivity studies. The algorithms allow us to derive particle effective radius as well as volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. It is an extreme challenge to retrieve the real part with an accuracy better than 0.05 and the imaginary part with an accuracy better than 0.005-0.1 or ±50%. Single-scattering albedo can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols.
On the basis of a few exemplary simulations with synthetic optical data we discuss the current status of these manually operated algorithms, the potentially achievable accuracy of data products, and the goals for future work. One algorithm was used to test how well microphysical parameters can be derived if the real part of the complex refractive index is known to within 0.05 or 0.1. The other algorithm was used to find out how well microphysical parameters can be derived if this constraint for the real part is not applied.
The optical data used in our study cover a range of Angstrom exponents and extinction-to-backscatter (lidar) ratios that are found from lidar measurements of various aerosol types. We also tested aerosol scenarios that are considered highly unlikely, e.g. cases in which the lidar ratios fall outside the commonly accepted range of values measured with Raman lidar even though the underlying microphysical particle properties are not uncommon. The goal of this part of the study is to test the robustness of the algorithms, i.e. their ability to identify aerosol types that have not been measured so far but cannot be ruled out based on our current knowledge of aerosol physics.
We computed the optical data from monomodal logarithmic particle size distributions, i.e. we explicitly excluded the more complicated case of bimodal particle size distributions, which is a topic of ongoing research work. Another constraint is that we only considered particles of spherical shape in our simulations. We considered particle radii as large as 7-10 μm (the Potsdam algorithm is limited to the lower value) and optical-data errors of 15% in the simulation studies. We target 50% uncertainty as a reasonable threshold for our data products, though we aim to obtain data products with less uncertainty in future work.
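For concreteness, a monomodal log-normal number size distribution (a common parameterization in aerosol inversion work; the abstract does not state the exact functional form the authors used, so this is an assumption) can be written as:

```python
import math

def lognormal_n(r, n_total, r_median, sigma_g):
    """Monomodal log-normal number size distribution dN/dr, a common
    parameterization in aerosol inversion studies (assumed here; the
    abstract does not spell out the exact form used).
    r, r_median: radii in micrometres; sigma_g: geometric standard
    deviation; n_total: total number concentration."""
    ln_s = math.log(sigma_g)
    return (n_total / (math.sqrt(2.0 * math.pi) * ln_s * r)
            * math.exp(-(math.log(r / r_median) ** 2) / (2.0 * ln_s ** 2)))

# r * dN/dr is symmetric about the median radius on a log axis:
above = 0.4 * lognormal_n(0.4, 1.0, 0.2, 1.6)  # a factor of 2 above the median
below = 0.1 * lognormal_n(0.1, 1.0, 0.2, 1.6)  # a factor of 2 below
```

The symmetry on a logarithmic radius axis is what "monomodal" buys the inversion: a single mode is fully described by three parameters, in contrast to the bimodal case the authors defer to future work.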
We analyzed the population genetic pattern of 12 fragmented Geropogon hybridus populations at the species' ecological range edge in Israel along a steep precipitation gradient. In the investigation area (45 × 20 km²), the annual mean precipitation changes rapidly from 450 mm in the north (Mediterranean-influenced climate zone) to 300 mm in the south (semiarid climate zone) without significant temperature changes. Our analysis (91 individuals, 12 populations, 123 polymorphic loci) revealed strongly structured populations (AMOVA Φ(ST) = 0.35; P < 0.001); however, differentiation did not change gradually toward the range edge. Isolation by distance (IBD) was significant (Mantel test r = 0.81; P = 0.001) and derived from sharply divided groups between the northernmost populations and the others further south, due to dispersal or environmental limitations. This was corroborated by the PCA and STRUCTURE analyses. IBD and isolation by environment (IBE) were significant despite the micro-geographic scale of the study area, which indicates that reduced precipitation toward the range edge leads to population genetic divergence. However, this pattern diminished when the hypothesized gene flow barrier was taken into account. Applying the spatial analysis method revealed 11 outlier loci that were correlated with annual precipitation and, moreover, were indicative of putative precipitation-related adaptation (BAYESCAN, MCHEZA). The results suggest that even on micro-geographic scales, environmental factors play prominent roles in population divergence, genetic drift, and directional selection. The pattern is typical for strong environmental gradients, e.g., at species range edges and ecological limits, and where gene flow barriers and mosaic-like structures of fragmented habitats hamper dispersal.
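The Mantel statistic quoted above (r = 0.81) measures the correlation between genetic and geographic distance matrices. A bare-bones version can be sketched as follows; this is a didactic sketch, not the software or permutation settings used in the study:

```python
import random

def mantel_r(dist_a, dist_b, n_perm=999, seed=7):
    """Bare-bones Mantel test: Pearson correlation between the upper
    triangles of two symmetric distance matrices, plus a one-sided
    permutation p-value (rows/columns of one matrix are permuted
    jointly to preserve its internal structure)."""
    n = len(dist_a)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(order):
        xs = [dist_a[i][j] for i, j in pairs]
        ys = [dist_b[order[i]][order[j]] for i, j in pairs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den

    r_obs = corr(list(range(n)))
    rng = random.Random(seed)
    hits = sum(corr(rng.sample(range(n), n)) >= r_obs for _ in range(n_perm))
    return r_obs, (hits + 1) / (n_perm + 1)

# Five sites on a line; genetic distance mirrors geographic distance:
geo = [[abs(i - j) for j in range(5)] for i in range(5)]
gen = [[2 * abs(i - j) for j in range(5)] for i in range(5)]
r, p = mantel_r(geo, gen)
```

A strong r with a small permutation p-value, as in the study, indicates that genetically similar populations are also geographically close, i.e. isolation by distance.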
Brief communication
(2016)
In March 2015, a new international blueprint for disaster risk reduction (DRR) was adopted in Sendai, Japan, at the end of the Third UN World Conference on Disaster Risk Reduction (WCDRR, 14-18 March 2015). We review and discuss the agreed commitments and targets, as well as the negotiations leading to the Sendai Framework for DRR (SF-DRR), and briefly discuss its implications for the subsequent UN-led negotiations on sustainable development goals and climate change.
We examined the spontaneous association between numbers and space by documenting attention deployment and the time course of associated spatial-numerical mapping with and without overt oculomotor responses. In Experiment 1, participants maintained central fixation while listening to number names. In Experiment 2, they made horizontal target-directed saccades following auditory number presentation. In both experiments, we continuously measured spontaneous ocular drift in horizontal space during and after number presentation. Experiment 2 also measured visual-probe-directed saccades following number presentation. Reliable ocular drift congruent with a horizontal mental number line emerged during and after number presentation in both experiments. Our results provide new evidence for the implicit and automatic nature of the oculomotor resonance effect associated with the horizontal spatial-numerical mapping mechanism.
To understand past flood changes in the Rhine catchment, and in particular the role of anthropogenic climate change in extreme flows, an attribution study relying on a proper GCM (general circulation model) downscaling is needed. A downscaling based on conditioning a stochastic weather generator on weather patterns is a promising approach. This approach assumes a strong link between weather patterns and local climate, and sufficient GCM skill in reproducing weather pattern climatology. These presuppositions are evaluated here in unprecedented depth, using 111 years of daily climate data from 490 stations in the Rhine basin and comprehensive testing of classification parameters and GCM weather pattern characteristics. A classification based on a combination of mean sea level pressure, temperature, and humidity from the ERA20C reanalysis of atmospheric fields over central Europe with 40 weather types was found to be the most appropriate for stratifying six local climate variables. The corresponding skill is quite diverse though, ranging from good for radiation to poor for precipitation. Especially for the latter it was apparent that pressure fields alone cannot sufficiently stratify local variability. To test the skill of the latest generation of GCMs from the CMIP5 ensemble in reproducing the frequency, seasonality, and persistence of the derived weather patterns, output from 15 GCMs was evaluated. Most GCMs are able to capture these characteristics well, but some models showed consistent deviations in all three evaluation criteria and should be excluded from further attribution analysis.
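The "skill" of stratifying a local variable by weather type (good for radiation, poor for precipitation) can be made concrete as the fraction of the variable's variance explained by the pattern means. This is one simple metric, assumed here because the abstract does not name the exact score used:

```python
from collections import defaultdict

def explained_variance(values, patterns):
    """Stratification skill of a weather-pattern classification for one
    local variable: 1 - (pooled within-pattern variance / total
    variance). 1.0 means the patterns fully determine the variable,
    0.0 means they explain nothing."""
    groups = defaultdict(list)
    for v, p in zip(values, patterns):
        groups[p].append(v)
    mean = sum(values) / len(values)
    ss_tot = sum((v - mean) ** 2 for v in values)
    ss_within = 0.0
    for vs in groups.values():
        gm = sum(vs) / len(vs)
        ss_within += sum((v - gm) ** 2 for v in vs)
    return 1.0 - ss_within / ss_tot

# A classification that perfectly separates two regimes explains all variance:
perfect = explained_variance([1, 1, 5, 5], ["A", "A", "B", "B"])
# One that ignores the regimes explains none of it:
useless = explained_variance([1, 5, 1, 5], ["A", "A", "B", "B"])
```

Precipitation's poor skill in the study corresponds to a low value of this kind of score: knowing the weather pattern constrains local rainfall only weakly.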
Much research on language control in bilinguals has relied on the interpretation of the costs of switching between two languages. Of the two types of costs that are linked to language control, switching costs are assumed to be transient in nature and modulated by trial-specific manipulations (e.g., by preparation time), while mixing costs are supposed to be more stable and less affected by trial-specific manipulations. The present study investigated the effect of preparation time on switching and mixing costs, revealing that both types of costs can be influenced by trial-specific manipulations.
Antibodies against spike proteins of influenza are used as a tool for the characterization of viruses and in therapeutic approaches. However, the development, production and quality control of antibodies are expensive and time-consuming. To circumvent these difficulties, three peptides were derived from the complementarity-determining regions of an antibody heavy chain against the influenza A spike glycoprotein. Their binding properties were studied experimentally and by molecular dynamics simulations. Two peptide candidates showed binding to influenza A/Aichi/2/68 H3N2. One of them, termed PeB, with the highest affinity, prevented binding to and infection of target cells in the micromolar range without any cytotoxic effect. PeB matches best the conserved receptor binding site of hemagglutinin. PeB also bound to other medically relevant influenza strains, such as human-pathogenic A/California/7/2009 H1N1 and avian-pathogenic A/MuteSwan/Rostock/R901/2006 H7N1. Strategies to improve the affinity and to adapt specificity are discussed and exemplified by a double amino acid substituted peptide obtained by substitutional analysis. The peptides and their derivatives hold great potential for drug development as well as biosensing.
In this study, a new reliable, economic, and environmentally-friendly one-step synthesis is established to obtain carbon nanodots (CNDs) with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and Tris-acetate-EDTA (TAE) buffer as carbon sources. Three kinds of CNDs are prepared using different sets of the above-mentioned starting materials. The as-synthesized CNDs, C-CND (starch only), N-CND 1 (starch in TAE) and N-CND 2 (TAE only), exhibit highly homogeneous PL and are ready to use without the need for further purification. The CNDs are stable over a long period of time (>1 year) either in solution or as freeze-dried powder. Depending on the starting material, CNDs with a PL quantum yield (PLQY) ranging from less than 1% up to 28% are obtained. The influence of the precursor concentration, reaction time and type of additives on the optical properties (UV-Vis absorption, PL emission spectrum and PLQY) is carefully investigated, providing insight into the chemical processes that occur during CND formation. Remarkably, upon freeze-drying the initially brown CND solution turns into a non-fluorescent white/slightly brown powder which recovers its PL in aqueous solution and can potentially be applied as a fluorescent marker in bio-imaging, as a reduction agent or as a photocatalyst.
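PLQY values like the quoted 1-28% are typically obtained by the comparative method against a reference fluorophore of known quantum yield. The sketch below shows that standard formula; it is an assumption here, since the abstract does not state the measurement protocol actually used:

```python
def relative_plqy(i_sample, i_ref, a_sample, a_ref, n_sample, n_ref, qy_ref):
    """Relative PL quantum yield via the standard comparative method
    (assumed; the paper's exact protocol is not given in the abstract).
    i: integrated emission intensity, a: absorbance at the excitation
    wavelength, n: solvent refractive index, qy_ref: known quantum
    yield of the reference dye."""
    return qy_ref * (i_sample / i_ref) * (a_ref / a_sample) * (n_sample / n_ref) ** 2

# Twice the emission at equal absorbance in the same solvent
# doubles the estimated quantum yield:
qy = relative_plqy(2.0, 1.0, 0.1, 0.1, 1.33, 1.33, 0.5)
```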
The outermost cell layer of plants, the epidermis, and its outer (lateral) membrane domain facing the environment are continuously challenged by biotic and abiotic stresses. Therefore, the epidermis and the outer membrane domain provide important selective and protective barriers. However, only a small number of specifically outer membrane-localized proteins are known. Similarly, molecular mechanisms underlying the trafficking and the polar placement of outer membrane domain proteins require further exploration. Here, we demonstrate that ACTIN7 (ACT7) mediates trafficking of the PENETRATION3 (PEN3) outer membrane protein from the trans-Golgi network (TGN) to the plasma membrane in the root epidermis of Arabidopsis (Arabidopsis thaliana) and that actin function contributes to PEN3 endocytic recycling. In contrast to such generic ACT7-dependent trafficking from the TGN, the EXOCYST84b (EXO84b) tethering factor mediates PEN3 outer-membrane polarity. Moreover, precise EXO84b placement at the outer membrane domain itself requires ACT7 function. Hence, our results uncover spatially and mechanistically distinct requirements for ACT7 function during outer lateral membrane cargo trafficking and polarity establishment. They further identify an exocyst tethering complex mediator of outer lateral membrane cargo polarity.
Relatedness strongly influences social behaviors in a wide variety of species. For most species, the highest typical degree of relatedness is between full siblings, who share 50% of their genes. Kin recognition is poorly understood, however, in species with unusually high relatedness between individuals: clonal organisms. Although there has been some investigation into clonal invertebrates and yeast, nothing is known about kin selection in clonal vertebrates. We show that a clonal fish, the Amazon molly (Poecilia formosa), can distinguish between different clonal lineages, associating with genetically identical sister clones, and uses multiple sensory modalities to do so. The fish also scale their aggressive behaviors according to their relatedness to other females: they are more aggressive toward non-related clones. Our results demonstrate that even in species with very small genetic differences between individuals, kin recognition can be adaptive. The discriminatory abilities and regulation of costly behaviors in this species provide a powerful example of natural selection acting in species with limited genetic diversity.
Background
Environmental stress puts organisms at risk and requires specific stress-tailored responses to maximize survival. Long-term exposure to stress necessitates a global reprogramming of cellular activities at different levels of gene expression.
Results
Here, we use ribosome profiling and RNA sequencing to globally profile the adaptive response of Arabidopsis thaliana to prolonged heat stress. To adapt to long heat exposure, the expression of many genes is modulated in a coordinated manner at the transcriptional and translational level. However, a significant group of genes opposes this trend and shows mainly translational regulation. Different secondary structure elements are likely candidates to play a role in regulating translation of those genes.
Conclusions
Our data also uncover how the subunit stoichiometry of multimeric protein complexes in plastids is maintained upon heat exposure.
Swets et al. (2008. Underspecification of syntactic ambiguities: Evidence from self-paced reading. Memory and Cognition, 36(1), 201–216) presented evidence that the so-called ambiguity advantage [Traxler et al. (1998). Adjunct attachment is not a form of lexical ambiguity resolution. Journal of Memory and Language, 39(4), 558–592], which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment; when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behaviour may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fitted three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM model provides a better fit than the PSM. We interpret the results as a lack of unambiguous evidence in favour of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for NSM over PSM.
More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables. We believe that sentence processing research will greatly benefit from a wider use of such methods.
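The model-selection logic described above, where a better fit must be weighed against model flexibility, can be illustrated with a generic information criterion. The AIC below is only an illustration: the paper's own fit measure may differ, and the numbers used here are made up, not taken from the study:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: lower is better. It trades
    goodness of fit (log-likelihood) against the number k of free
    parameters, penalizing flexibility."""
    return 2 * k - 2 * log_likelihood

# A richer model must improve the likelihood enough to justify its
# extra parameters (illustrative values, not from the study):
simple = aic(log_likelihood=-105.0, k=3)    # e.g. a no-underspecification model
complex_ = aic(log_likelihood=-104.0, k=6)  # e.g. a partial-specification model
```

Here the small likelihood gain of the richer model does not offset its three extra parameters, so the simpler model is preferred, mirroring the kind of comparison reported above.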
Different systems for habitual versus goal-directed control are thought to underlie human decision-making. Working memory is known to shape these decision-making systems and their interplay, and to support goal-directed decision making even under stress. Here, we investigated whether and how decision systems are differentially influenced by breaks filled with diverse everyday life activities known to modulate working memory performance. We used a within-subject design in which young adults listened to music and played a video game during breaks interleaved with trials of a sequential two-step Markov decision task designed to assess habitual as well as goal-directed decision making. Based on a neurocomputational model of task performance, we observed that for individuals with rather limited working memory capacity, video gaming as compared to music reduced reliance on the goal-directed decision-making system, while a rather large working memory capacity prevented such a decline. Our findings suggest differential effects of everyday activities on key decision-making processes.
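Neurocomputational analyses of the two-step task commonly model choice as a weighted mixture of model-based (goal-directed) and model-free (habitual) action values; the study's exact model is not given in the abstract, so the standard form below is an assumption:

```python
def hybrid_value(q_model_based, q_model_free, w):
    """Weighted mixture of model-based (goal-directed) and model-free
    (habitual) action values, the standard hybrid form used in
    two-step-task analyses (assumed; the study's exact model is not
    given in the abstract). w = 1 means fully goal-directed control."""
    return w * q_model_based + (1.0 - w) * q_model_free

# A drop in the weight w (as reported after video gaming in
# low-capacity individuals) shifts choice toward the habitual system:
goal_directed = hybrid_value(1.0, 0.0, w=0.8)
habit_leaning = hybrid_value(1.0, 0.0, w=0.3)
```

"Reduced reliance on the goal-directed system" then corresponds to fitting a lower w for those trials.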
The aim of this study was to develop a one-step synthesis of gold nanotriangles (NTs) in the presence of mixed phospholipid vesicles followed by a separation process to isolate purified NTs. Negatively charged vesicles containing AOT and phospholipids, in the absence and presence of additional reducing agents (polyampholytes, polyanions or low molecular weight compounds), were used as a template phase to form anisotropic gold nanoparticles. Upon addition of the gold chloride solution, the nucleation process is initiated and both types of particles, i.e., isotropic spherical and anisotropic gold nanotriangles, are formed simultaneously. As it was not possible to produce monodisperse nanotriangles with such a one-step procedure, the anisotropic nanoparticles needed to be separated from the spherical ones. Therefore, a new type of separation procedure using combined polyelectrolyte/micelle depletion flocculation was successfully applied. As a result of the different purification steps, a green colored aqueous dispersion was obtained containing highly purified, well-defined negatively charged flat nanocrystals with a platelet thickness of 10 nm and an edge length of about 175 nm. The NTs produce promising results in surface-enhanced Raman scattering.
The onset of the modern central Asian atmospheric circulation is traditionally linked to the interplay of surface uplift of the Mongolian and Tibetan-Himalayan orogens, retreat of the Paratethys sea from central Asia and Cenozoic global cooling. Although the respective roles of these players have not yet been unravelled, the vast dust deposits of central China support the presence of arid conditions and modern atmospheric pathways for the last 25 million years (Myr). Here, we present provenance data from older (42-33 Myr) dust deposits, from a time when the Tibetan Plateau was less developed, the Paratethys sea was still present in central Asia and atmospheric pCO₂ was much higher. Our results show that dust sources and near-surface atmospheric circulation have changed little since at least 42 Myr. Our findings indicate that the locus of central Asian high pressures and concurrent aridity is a resilient feature only modulated by mountain building, global cooling and sea retreat.
The liverwort Blasia pusilla L. recruits soil nitrogen-fixing cyanobacteria of the genus Nostoc as symbiotic partners. In this work we compared Nostoc community composition inside the plants and in the soil around them at two distant locations in Northern Norway. STRR fingerprinting and 16S rDNA phylogeny reconstruction showed a remarkable local diversity among isolates assigned to several Nostoc clades. An extensive web of negative allelopathic interactions was recorded at an agricultural site, but not at an undisturbed natural site. The cell extracts of the cyanobacteria did not show antimicrobial activities, but four isolates were shown to be cytotoxic to human cells. The secondary metabolite profiles of the isolates were mapped by MALDI-TOF MS, and the most prominent ions were further analyzed by Q-TOF for MS/MS-aided identification. Symbiotic isolates produced a great variety of small peptide-like substances, most of which lack any record in the databases. Among the identified compounds we found microcystin and nodularin variants toxic to eukaryotic cells. Microcystin-producing chemotypes dominated among symbiotic recruits but not in the free-living community. In addition, we were able to identify several novel aeruginosins and banyaside-like compounds, as well as nostocyclopeptides and nosperin.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate so far relies on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at gaining a conceptual understanding by deriving a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming using three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature: it omits important processes such as ice dynamics that would be needed for applications to sea-level rise on centennial timescales. But if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 degrees C above the threshold and 500 years for 5 degrees C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
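The two quoted melt times illustrate the logarithmic dependence on the warming level. A log-interpolation through exactly those two numbers makes the critical slow-down near the threshold explicit; the constants below are fitted to the quoted values, not taken from the paper's actual equation:

```python
import math

def melt_time_years(delta_t):
    """Illustrative log-interpolation through the two median melt times
    quoted in the abstract (3500 yr at 0.5 degC above the threshold,
    500 yr at 5 degC) for losing 10% of present-day ice volume.
    The constants are fitted here, not derived from the paper."""
    b = 3000.0 / math.log(10.0)   # slope fixed by the two quoted points
    a = 500.0 + b * math.log(5.0) # anchored at the 5 degC value
    return a - b * math.log(delta_t)
```

The logarithm captures the qualitative behaviour described above: halving the warming level above the threshold adds a fixed number of years to the melt time, so melt times grow without bound as the threshold is approached from above.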
Editorial
(2016)
This two-wave longitudinal study examined how developmental changes in students' mastery goal orientation, academic effort, and intrinsic motivation were predicted by student-perceived motivational support (support for autonomy, competence, and relatedness) in secondary classrooms. The study extends previous knowledge showing that motivational support in class is related to students' intrinsic motivation, as it focused on the developmental changes of a set of different motivational variables and the relations of these changes to student-perceived motivational support in class. Thus, differential classroom effects on students' motivational development were investigated. A sample of 1088 German students was assessed at the beginning of the school year when students were in grade 8 (mean age = 13.70, SD = 0.53, 54% girls) and again at the end of the next school year when students were in grade 9. Results of latent change models showed a tendency toward decline in mastery goal orientation and a significant decrease in academic effort from grade 8 to 9. Intrinsic motivation did not decrease significantly across time. Student-perceived support of competence in class predicted the level of and change in students' academic effort. The findings emphasize that it is beneficial to create classroom learning environments that enhance students' perceptions of competence in class when aiming to enhance students' academic effort in secondary school classrooms.
Fluxes of organic and inorganic carbon within the Amazon basin are considerably controlled by annual flooding, which triggers the export of terrigenous organic material to the river and ultimately to the Atlantic Ocean. The amount of carbon imported to the river, as well as its further conversion, transport and export, depends on temperature, atmospheric CO2, terrestrial productivity and carbon storage, as well as discharge. Both terrestrial productivity and discharge are influenced by climate and land use change. The coupled LPJmL and RivCM model system (Langerwisch et al., 2016) has been applied to assess the combined impacts of climate and land use change on Amazon riverine carbon dynamics. Vegetation dynamics (in LPJmL) as well as export and conversion of terrigenous carbon to and within the river (RivCM) are included. The model system has been applied for the years 1901 to 2099 under two deforestation scenarios and with climate forcing of three SRES emission scenarios, each for five climate models. We find that high deforestation (business-as-usual scenario) will strongly decrease (locally by up to 90%) riverine particulate and dissolved organic carbon amounts until the end of the current century. At the same time, increased discharge leaves net carbon transport during the first decades of the century roughly unchanged, but only if a sufficient area is still forested. After 2050 the amount of transported carbon will decrease drastically. In contrast, increased temperature and atmospheric CO2 concentration determine the amount of riverine inorganic carbon stored in the Amazon basin. Higher atmospheric CO2 concentrations increase the riverine inorganic carbon amount by up to 20% (SRES A2). The changes in riverine carbon fluxes have direct effects on carbon export, either to the atmosphere via outgassing or to the Atlantic Ocean via discharge.
The outgassed carbon will increase slightly in the Amazon basin, but can be regionally reduced by up to 60% due to deforestation. The discharge of organic carbon to the ocean will be reduced by about 40% under the most severe deforestation and climate change scenario. These changes would have local and regional consequences on the carbon balance and habitat characteristics in the Amazon basin itself as well as in the adjacent Atlantic Ocean.
Climate change increases riverine carbon outgassing, while export to the ocean remains uncertain
(2016)
Any regular interaction of land and river during flooding affects carbon pools within the terrestrial system, riverine carbon and carbon exported from the system. In the Amazon basin carbon fluxes are considerably influenced by annual flooding, during which terrigenous organic material is imported to the river. The Amazon basin therefore represents an excellent example of a tightly coupled terrestrial-riverine system. The processes of generation, conversion and transport of organic carbon in such a coupled terrigenous-riverine system strongly interact and are climate-sensitive, yet their functioning is rarely considered in Earth system models and their response to climate change is still largely unknown. To quantify regional and global carbon budgets and climate change effects on carbon pools and carbon fluxes, it is important to account for the coupling between the land, the river, the ocean and the atmosphere. We developed the RIVerine Carbon Model (RivCM), which is directly coupled to the well-established dynamic vegetation and hydrology model LPJmL, in order to account for this large-scale coupling. We evaluate RivCM with observational data and show that some of the values are reproduced quite well by the model, while we see large deviations for other variables, mainly caused by simplifying assumptions in the model. Our evaluation shows that it is possible to reproduce large-scale carbon transport across a river system, but that this involves large uncertainties. Acknowledging these uncertainties, we estimate the potential changes in riverine carbon by applying RivCM for climate forcing from five climate models and three CO2 emission scenarios (Special Report on Emissions Scenarios, SRES). We find that climate change causes a doubling of riverine organic carbon in the southern and western basin while reducing it by 20% in the eastern and northern parts.
In contrast, the amount of riverine inorganic carbon shows a 2- to 3-fold increase across the entire basin, independent of the SRES scenario. The export of carbon to the atmosphere increases as well, by about 30% on average. Changes in the future export of organic carbon to the Atlantic Ocean, however, depend on the SRES scenario and are projected to either decrease by about 8.9% (SRES A1B) or increase by about 9.1% (SRES A2). Such changes in the terrigenous-riverine system could have local and regional impacts on the carbon budget of the whole Amazon basin and parts of the Atlantic Ocean. Changes in riverine carbon could shift the riverine nutrient supply and pH, while changes in the carbon exported to the ocean would alter the supply of organic material that serves as a food source in the Atlantic. On larger scales, the increased outgassing of CO2 could turn the Amazon basin from a carbon sink into a considerable source. We therefore propose that the coupling of terrestrial and riverine carbon budgets be included in subsequent analyses of the future regional carbon budget.
The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA), and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis.
The aim of the present study was to test the functional relevance of the spatial concepts UP and DOWN for words that use these concepts either literally (space) or metaphorically (time, valence). Functional relevance would imply a symmetrical relationship between the spatial concepts and words related to them: processing a word activates the related spatial concept, and, conversely, activating the concept eases the retrieval of a related word. To test the latter, participants' body position was manipulated, either upright or tilted head-down, in order to activate the related spatial concept. Afterwards, in a within-subject design, participants produced previously memorized words from the concepts space, time and valence in time with a metronome. All words were related either to the spatial concept UP or DOWN. The results, including Bayesian analyses, show (1) a significant interaction between body position and words using the concepts UP and DOWN literally, (2) a marginally significant interaction between body position and temporal words and (3) no effect between body position and valence words. However, post-hoc analyses suggest no difference between experiments. The authors therefore concluded that integrating sensorimotor experiences is indeed of functional relevance for all three concepts of space, time and valence, but that the strength of this functional relevance depends on how closely words are linked to mental concepts representing vertical space.
Background: First metabolomics studies have indicated that metabolic fingerprints from accessible tissues might be useful to better understand the etiological links between metabolism and cancer. However, there is still a lack of prospective metabolomics studies on pre-diagnostic metabolic alterations and cancer risk.
Methods: Associations between pre-diagnostic levels of 120 circulating metabolites (acylcarnitines, amino acids, biogenic amines, phosphatidylcholines, sphingolipids, and hexoses) and the risks of breast, prostate, and colorectal cancer were evaluated by Cox regression analyses using data from a prospective case-cohort study including 835 incident cancer cases.
Results: The median follow-up duration was 8.3 years among non-cases and 6.5 years among incident cases of cancer. Higher levels of lysophosphatidylcholines (lysoPCs), and especially of lysoPC a C18:0, were consistently related to lower risks of breast, prostate, and colorectal cancer, independent of background factors. In contrast, higher levels of the phosphatidylcholine PC ae C30:0 were associated with increased cancer risk. There was no heterogeneity in the observed associations by lag time between blood draw and cancer diagnosis.
Conclusion: Changes in blood lipid composition precede the diagnosis of common malignancies by several years. Considering the consistency of the present results across three cancer types, the observed alterations point to a global metabolic shift in phosphatidylcholine metabolism that may drive tumorigenesis.
Recombination of free charge is a key process limiting the performance of solar cells. For low mobility materials, such as organic semiconductors, the kinetics of non-geminate recombination (NGR) is strongly linked to the motion of charges. As these materials possess significant disorder, thermalization of photogenerated carriers in the inhomogeneously broadened density of state distribution is an unavoidable process. Despite its general importance, knowledge about the kinetics of NGR in complete organic solar cells is rather limited. We employ time delayed collection field (TDCF) experiments to study the recombination of photogenerated charge in the high-performance polymer:fullerene blend PCDTBT:PCBM. NGR in the bulk of this amorphous blend is shown to be highly dispersive, with a continuous reduction of the recombination coefficient throughout the entire time scale, until all charge carriers have either been extracted or recombined. Rapid, contact-mediated recombination is identified as an additional loss channel, which, if not properly taken into account, would erroneously suggest a pronounced field dependence of charge generation. These findings are in stark contrast to the results of TDCF experiments on photovoltaic devices made from ordered blends, such as P3HT:PCBM, where non-dispersive recombination was proven to dominate the charge carrier dynamics under application relevant conditions.
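The dispersive recombination reported here can be illustrated with a toy rate equation in which the bimolecular recombination coefficient itself decays over time. The following sketch assumes a power-law decay k(t) = k0·(t/t0)^(-alpha); the functional form and all parameter values are illustrative assumptions, not the fit used in the study.

```python
# Hypothetical sketch of dispersive non-geminate recombination:
# dn/dt = -k(t) * n^2, with a recombination coefficient that
# decreases as a power law, k(t) = k0 * (t/t0)**(-alpha).
# All parameter values are illustrative, not from the paper.

def simulate_ngr(n0=1e23, k0=1e-17, t0=1e-9, alpha=0.5,
                 dt=1e-9, steps=100000):
    """Explicit Euler integration of the carrier density n(t)."""
    n = n0
    densities = [n]
    for i in range(1, steps + 1):
        t = i * dt
        k = k0 * (t / t0) ** (-alpha)   # dispersive: k decays over time
        n = n - k * n * n * dt
        n = max(n, 0.0)                 # guard against overshoot
        densities.append(n)
    return densities

densities = simulate_ngr()
# The decay of n(t) slows at long times because k(t) keeps shrinking,
# i.e. the effective recombination coefficient is reduced throughout.
```

With alpha > 0 the effective recombination slows down continuously, mimicking the dispersive kinetics seen in TDCF; setting alpha = 0 recovers the time-independent, non-dispersive case reported for ordered blends such as P3HT:PCBM.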
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted: updating the input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option), and downscaling the original emissions to a resolution of ca. 1 km x 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and the nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved by specifying the input parameters to the urban model, and to a lesser extent by using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results of previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols.
NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
Comparative literature on institutional reforms in multi-level systems proceeds from a global trend towards the decentralization of state functions. However, there is only scarce knowledge about the impact that decentralization has had, in particular, upon the sub-central governments involved. How does it affect regional and local governments? Do these reforms also have unintended outcomes on the sub-central level, and how can this be explained? This article aims to develop a conceptual framework to assess the impacts of decentralization on the sub-central level from a comparative and policy-oriented perspective. This framework is intended to outline the major patterns and models of decentralization and the theoretical assumptions regarding de-/re-centralization impacts, as well as pertinent cross-country approaches meant to evaluate and compare institutional reforms. It will also serve as an analytical guideline and a structural basis for all the country-related articles in this Special Issue.
Kommunikative Vernunft
(2016)
Jürgen Habermas explicates the concept of communicative reason. He explains the key assumptions of the philosophy of language and social theory associated with this concept. Also discussed are the category of the life-world and the role of the body-mind difference for the consciousness of exclusivity in our access to subjective experience, as well as the role of emotions and perceptions in the context of a theory of communicative action. The question of the redemption of the various validity claims associated with the performance of speech acts is related to processes of social learning and to the role of negative experiences. Finally, the interview deals with the relationship between religion and reason and the importance of religion in modern, post-secular societies. Questions about the philosophical culture of our present times are discussed at the end of the conversation.
Kritische Anthropologie?
(2016)
This article compares Max Horkheimer's and Theodor W. Adorno's foundation of the Frankfurt Critical Theory with Helmuth Plessner's foundation of Philosophical Anthropology. While Horkheimer's and Plessner's paradigms are mutually incompatible, Adorno's "negative dialectics" and Plessner's "negative anthropology" (G. Gamm) can be seen as complementing one another. Jürgen Habermas at one point sketched a complementary relationship between his own publicly communicative theory of modern society and Plessner's philosophy of nature and human expressivity; though he then came to doubt this, he later reaffirmed it. Faced with the "life power" in "high capitalism" (Plessner), the ambitions for a public democracy in a pluralistic society have to be broadened from an argumentative focus (Habermas) to include the human condition and the expressive modes of our experience as essentially embodied persons. The article discusses some possible aspects of this complementarity under the title of a "critical anthropology" (H. Schnädelbach).
A model analysis of mechanisms for radial microtubular patterns at root hair initiation sites
(2016)
Plant cells have two main modes of growth for generating anisotropic structures: diffuse growth, in which whole cell walls extend in specific directions, guided by anisotropically positioned cellulose fibers, and tip growth, in which new cell wall material is added inhomogeneously at the tip of the structure. Cells are known to regulate these processes via molecular signals and the cytoskeleton. Mechanical stress has been proposed to provide an input to the positioning of the cellulose fibers via cortical microtubules in diffuse growth. In particular, a stress feedback model predicts a circumferential pattern of fibers surrounding apical tissues and growing primordia, guided by the anisotropic curvature in such tissues. In contrast, during the initiation of tip-growing root hairs, a star-like radial pattern has recently been observed. Here, we use detailed finite element models to analyze how a change in mechanical properties at the root hair initiation site can lead to star-like stress patterns, in order to understand whether a stress-based feedback model can also explain the microtubule patterns seen during root hair initiation. We show that two independent mechanisms, individually or combined, can be sufficient to generate radial patterns. In the first, new material is added locally at the position of the root hair. In the second, increased tension in the initiation area provides the mechanism. Finally, we describe how a molecular model of Rho-of-plant (ROP) GTPase activation driven by auxin can position a patch of activated ROP protein basally along a 2D root epidermal cell plasma membrane, paving the way for models where mechanical and molecular mechanisms cooperate in the initial placement and outgrowth of root hairs.
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
Turkey has been severely affected by many natural hazards, in particular earthquakes and floods. Although there is a large body of literature on earthquake hazards and risks in Turkey, comparatively little is known about flood hazards and risks. This study therefore investigates flood patterns and the societal and economic impacts of flood hazards in Turkey, and provides a comparative overview of the temporal and spatial distribution of flood losses by analysing the EM-DAT (Emergency Events Database) and TABB (Turkey Disaster Data Base) databases on disaster losses throughout Turkey for the years 1960-2014. The comparison of these two databases reveals large mismatches in the flood data: the reported number of events, the number of affected people and the economic losses differ dramatically. This paper explores the reasons for these mismatches and discusses biases and fallacies in the loss data of the two databases. Since loss data collection is gaining more and more attention, e.g. in the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), the study offers groundwork for developing guidelines and procedures on how to standardize loss databases and extend them across other hazard events, substantial insights for flood risk mitigation and adaptation studies in Turkey, and valuable insights for other (European) countries.
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer (R) Database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria. An equal number of 66,497 controls were chosen and matched (1:1) to HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
Species can adjust their traits in response to selection which may strongly influence species coexistence. Nevertheless, current theory mainly assumes distinct and time-invariant trait values. We examined the combined effects of the range and the speed of trait adaptation on species coexistence using an innovative multispecies predator–prey model. It allows for temporal trait changes of all predator and prey species and thus simultaneous coadaptation within and among trophic levels. We show that very small or slow trait adaptation did not facilitate coexistence because the stabilizing niche differences were not sufficient to offset the fitness differences. In contrast, sufficiently large and fast trait adaptation jointly promoted stable or neutrally stable species coexistence. Continuous trait adjustments in response to selection enabled a temporally variable convergence and divergence of species traits; that is, species became temporally more similar (neutral theory) or dissimilar (niche theory) depending on the selection pressure, resulting over time in a balance between niche differences stabilizing coexistence and fitness differences promoting competitive exclusion. Furthermore, coadaptation allowed prey and predator species to cluster into different functional groups. This equalized the fitness of similar species while maintaining sufficient niche differences among functionally different species delaying or preventing competitive exclusion. In contrast to previous studies, the emergent feedback between biomass and trait dynamics enabled supersaturated coexistence for a broad range of potential trait adaptation and parameters. We conclude that accounting for trait adaptation may explain stable and supersaturated species coexistence for a broad range of environmental conditions in natural systems when the absence of such adaptive changes would preclude it. 
Small trait changes, comparable to those that may occur within many natural populations, greatly enlarged the number of coexisting species.
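The interplay of biomass and trait dynamics described above can be caricatured with a one-prey, one-predator system in which the attack rate depends on trait matching and each trait follows its local fitness gradient. This is a minimal illustrative sketch under assumed equations and parameter values, not the multispecies model of the study.

```python
import math

# Minimal illustrative sketch (not the study's multispecies model):
# one prey (biomass N, trait u) and one predator (biomass P, trait v).
# The attack rate depends on trait matching; each trait follows its
# local per-capita fitness gradient, scaled by adaptation speeds gu, gv.

def attack(u, v, a0=1.0, sigma=0.5):
    """Attack rate peaks when predator trait v matches prey trait u."""
    return a0 * math.exp(-((u - v) ** 2) / (2 * sigma ** 2))

def simulate(steps=20000, dt=0.001, gu=0.1, gv=0.1):
    N, P, u, v = 0.5, 0.2, 0.0, 0.5
    r, K, e, m = 1.0, 1.0, 0.5, 0.2   # illustrative parameters
    eps = 1e-4                         # finite difference step
    for _ in range(steps):
        a = attack(u, v)
        dN = r * N * (1 - N / K) - a * N * P
        dP = e * a * N * P - m * P
        # per-capita fitness gradients with respect to the own trait
        dWu = (-(attack(u + eps, v) - attack(u - eps, v)) / (2 * eps)) * P
        dWv = (e * (attack(u, v + eps) - attack(u, v - eps)) / (2 * eps)) * N
        N += dN * dt
        P += dP * dt
        u += gu * dWu * dt   # prey trait evolves away from the predator's
        v += gv * dWv * dt   # predator trait chases the prey's
    return N, P, u, v

N, P, u, v = simulate()
# With nonzero adaptation speeds the predator trait tracks the prey
# trait, and both populations remain positive over the simulated span.
```

Setting the adaptation speeds gu and gv to zero freezes the traits, corresponding to the classical fixed-trait case in which small niche differences cannot offset fitness differences.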
Referential Choice
(2016)
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
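As a caricature of the feature-based prediction described here, the toy function below scores a mention on a handful of accessibility features and chooses between a pronoun and a full noun phrase. The features are loosely inspired by those listed above (referential distance, protagonism, animacy, the antecedent's syntactic role), but the rules, weights and threshold are invented for illustration and are not the study's 25-factor machine-learning model.

```python
# Toy sketch of feature-based referential-choice prediction.
# Hypothetical hand-written rules; the study used machine learning
# over 25 factors, not this heuristic.

def predict_referential_device(referent):
    """Return 'pronoun' or 'full NP' from a feature dict."""
    score = 0.0
    # a short referential distance (in clauses) favours a pronoun
    if referent["distance_to_antecedent"] <= 1:
        score += 2.0
    # protagonists and animate referents are more accessible
    if referent["is_protagonist"]:
        score += 1.0
    if referent["animate"]:
        score += 0.5
    # an antecedent in subject role is more accessible
    if referent["antecedent_role"] == "subject":
        score += 1.0
    return "pronoun" if score >= 2.5 else "full NP"

mention = {"distance_to_antecedent": 1, "is_protagonist": True,
           "animate": True, "antecedent_role": "subject"}
print(predict_referential_device(mention))  # -> pronoun
```

The near-threshold cases in such a scorer are exactly the situations the experiment points to: where the evidence does not clearly favour one device, either referential option is acceptable and perfect prediction is impossible in principle.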
Well-developed phonological awareness skills are a core prerequisite for early literacy development. Although effective phonological awareness training programs exist, children at risk often do not reach similar levels of phonological awareness after the intervention as children with normally developed skills. Based on theoretical considerations and initial promising results, the present study explores the effects of an early musical training in combination with a conventional phonological training in children with weak phonological awareness skills. Using a quasi-experimental pretest-posttest control group design and measurements across a period of 2 years, we tested the effects of two interventions: a consecutive combination of a musical and a phonological training, and a phonological training alone. The design made it possible to disentangle the effects of the musical training alone as well as the effects of its combination with the phonological training. The outcome measures of these groups were compared with the control group using multivariate analyses, controlling for a number of background variables. The sample included N = 424 German-speaking children aged 4–5 years at the beginning of the study. We found a positive relationship between musical abilities and phonological awareness. Yet, whereas the well-established phonological training produced the expected effects, adding a musical training did not contribute significantly to phonological awareness development. Training effects were partly dependent on the initial level of phonological awareness. Possible reasons for the lack of training effects in the musical part of the combination condition as well as practical implications for early literacy education are discussed.
Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
Strategic sexual signals
(2016)
The color red has special meaning in mating-relevant contexts. Wearing red can enhance perceptions of women's attractiveness and desirability as a potential romantic partner. Building on recent findings, the present study examined whether women's (N = 74) choice to display the color red is influenced by the attractiveness of an expected opposite-sex interaction partner. Results indicated that female participants who expected to interact with an attractive man displayed red (on clothing, accessories, and/or makeup) more often than a baseline consisting of women in a natural environment with no induced expectation. In contrast, when women expected to interact with an unattractive man, they eschewed red, displaying it less often than in the baseline condition. Findings are discussed with respect to evolutionary and cultural perspectives on mate evaluation and selection.
Children’s interpretations of sentences containing focus particles do not seem adult-like until school age. This study investigates how German 4-year-old children comprehend sentences with the focus particle ‘nur’ (only), using different tasks and controlling for the impact of general cognitive abilities on performance measures. Two sentence types with ‘only’ in either pre-subject or pre-object position were presented. Eye gaze data and verbal responses were collected via the visual world paradigm combined with a sentence-picture verification task. While the eye tracking data revealed an adult-like pattern of focus particle processing, the sentence-picture verification replicated previous findings of poor comprehension, especially for ‘only’ in pre-subject position. A second study focused on the impact of general cognitive abilities on the outcomes of the verification task. Working memory was related to children’s performance for both sentence types, whereas inhibitory control was selectively related to the number of errors for sentences with ‘only’ in pre-subject position. These results suggest that children at the age of 4 years have the linguistic competence to correctly interpret sentences with focus particles, which, depending on specific task demands, may be masked by immature general cognitive abilities.
Background: Given the well-established association between perceived stress and quality of life (QoL) in dementia patients and their partners, our goal was to identify whether relationship quality and dyadic coping would operate as mediators between perceived stress and QoL.
Methods: 82 dyads of dementia patients and their spousal caregivers were included in a cross-sectional assessment from a prospective study. QoL was assessed with the Quality of Life in Alzheimer's Disease scale (QoL-AD) for dementia patients and the WHO Quality of Life-BREF for spousal caregivers. Perceived stress was measured with the Perceived Stress Scale (PSS-14). Both partners were assessed with the Dyadic Coping Inventory (DCI). Analyses of correlation as well as regression models including mediator analyses were performed.
Results: We found negative correlations between stress and QoL in both partners (QoL-AD: r = -0.62; p < 0.001; WHO-QOL Overall: r = -0.27; p = 0.02). Spousal caregivers had a significantly lower DCI total score than dementia patients (p < 0.001). Dyadic coping was a significant mediator of the relationship between stress and QoL in spousal caregivers (z = 0.28; p = 0.02), but not in dementia patients. Likewise, relationship quality significantly mediated the relationship between stress and QoL in caregivers only (z = -2.41; p = 0.02).
Conclusions: This study identified dyadic coping as a mediator of the relationship between stress and QoL in (caregiving) partners of dementia patients. In patients, however, we found a direct negative effect of stress on QoL. The findings suggest the importance of stress-reducing and dyadic interventions for dementia patients and their partners, respectively.
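The mediation logic reported above (an indirect path from stress through dyadic coping to QoL) is commonly tested with a Sobel test. The sketch below shows that computation on made-up coefficients; the values of a, b and their standard errors are illustrative assumptions, not the study's estimates.

```python
import math

# Sobel test for an indirect effect a*b, where
#   a = effect of stress on the mediator (dyadic coping)
#   b = effect of the mediator on QoL, controlling for stress.
# Coefficients and standard errors below are invented for illustration.

def sobel_z(a, se_a, b, se_b):
    """z statistic for the indirect effect a*b."""
    return (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)

z = sobel_z(a=-0.40, se_a=0.10, b=0.35, se_b=0.12)
# two-sided p value from the standard normal distribution
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p, 3))
```

A significant z indicates that the indirect path carries part of the stress-QoL association, which is the pattern reported for caregivers but not for patients.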
Classification of clouds, cirrus, snow, shadows and clear-sky areas is a crucial step in the pre-processing of optical remote sensing images and a valuable input for their atmospheric correction. The Multi-Spectral Imager on board the Sentinel-2 satellites of the Copernicus program offers optimized bands for this task and delivers unprecedented amounts of data regarding spatial sampling, global coverage, spectral coverage, and repetition rate. Efficient algorithms are needed to process, or possibly reprocess, these large amounts of data. Techniques based on top-of-atmosphere reflectance spectra of single pixels, without exploitation of external data or spatial context, offer the largest potential for parallel data processing and highly optimized processing throughput. Such algorithms can be seen as a baseline for possible trade-offs in processing performance when the application of more sophisticated methods is discussed. We present several ready-to-use classification algorithms, all based on a publicly available database of manually classified Sentinel-2A images. These algorithms are based on commonly used and newly developed machine learning techniques which drastically reduce the amount of time needed to update the algorithms when new images are added to the database. Several ready-to-use decision trees are presented which correctly label about 91% of the spectra within a validation dataset. While decision trees are simple to implement and easy to understand, they offer only limited classification skill. This improves to 98% when the presented algorithm based on the classical Bayesian method is applied. This method has only recently been used for this task and shows excellent performance concerning classification skill and processing performance. A comparison of the presented algorithms with other commonly used techniques such as random forests, stochastic gradient descent, or support vector machines is also given.
Especially random forests and support vector machines show similar classification skill as the classical Bayesian method.
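A minimal sketch of the single-pixel Bayesian idea, assuming a Gaussian class-conditional model: per-class band statistics are fitted on labelled spectra, and each pixel is assigned the class with the maximum posterior. The band means, class set, and noise level below are invented for illustration and are not the paper's Sentinel-2 database or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled top-of-atmosphere reflectance spectra:
# 4 "bands" per pixel, 3 classes (e.g. clear sky, cloud, snow).
# Class means and spread are illustrative, not real Sentinel-2 statistics.
means = np.array([[0.05, 0.08, 0.10, 0.12],   # "clear sky"
                  [0.60, 0.62, 0.60, 0.58],   # "cloud"
                  [0.80, 0.75, 0.20, 0.15]])  # "snow"
sigma = 0.05

def sample(n_per_class):
    X = np.concatenate([m + sigma * rng.standard_normal((n_per_class, 4))
                        for m in means])
    y = np.repeat(np.arange(3), n_per_class)
    return X, y

X_train, y_train = sample(500)
X_test, y_test = sample(200)

# Classical (Gaussian naive) Bayes: fit per-class band means/variances,
# then classify each pixel by the maximum posterior, assuming equal priors.
mu = np.array([X_train[y_train == c].mean(axis=0) for c in range(3)])
var = np.array([X_train[y_train == c].var(axis=0) for c in range(3)])

def predict(X):
    # log-likelihood of each pixel under each class, summed over bands
    ll = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                 + np.log(2 * np.pi * var)).sum(axis=2)
    return ll.argmax(axis=1)

accuracy = (predict(X_test) == y_test).mean()
print(f"validation accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes the validation accuracy is close to 1; on real spectra the classes overlap, which is where the choice of classifier (decision tree vs. Bayesian vs. SVM) starts to matter.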
Linked linear mixed models
(2016)
The complexity of eye-movement control during reading allows measurement of many dependent variables, the most prominent ones being fixation durations and their locations in words. In current practice, either variable may serve as dependent variable or covariate for the other in linear mixed models (LMMs) featuring also psycholinguistic covariates of word recognition and sentence comprehension. Rather than analyzing fixation location and duration with separate LMMs, we propose linking the two according to their sequential dependency. Specifically, we include predicted fixation location (estimated in the first LMM from psycholinguistic covariates) and its associated residual fixation location as covariates in the second, fixation-duration LMM. This linked LMM affords a distinction between direct and indirect effects (mediated through fixation location) of psycholinguistic covariates on fixation durations. Results confirm the robustness of distributed processing in the perceptual span. They also offer a resolution of the paradox of the inverted optimal viewing position (IOVP) effect (i.e., longer fixation durations in the center than at the beginning and end of words) although the opposite (i.e., an OVP effect) is predicted from default assumptions of psycholinguistic processing efficiency: The IOVP effect in fixation durations is due to the residual fixation-location covariate, presumably driven primarily by saccadic error, and the OVP effect (at least the left part of it) is uncovered with the predicted fixation-location covariate, capturing the indirect effects of psycholinguistic covariates. We expect that linked LMMs will be useful for the analysis of other dynamically related multiple outcomes, a conundrum of most psychonomic research.
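The linking step can be sketched with two ordinary least-squares regressions standing in for the LMMs (no random effects here); the covariate names, coefficients, and noise levels are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical covariates (stand-ins for word length and log frequency).
word_len = rng.normal(6.0, 2.0, n)
log_freq = rng.normal(0.0, 1.0, n)

# Ground truth for the toy data: fixation location is driven by both
# covariates plus "saccadic error"; duration has direct covariate
# effects and a location-mediated (indirect) effect. All coefficients
# are illustrative.
location = 0.5 * word_len - 0.3 * log_freq + rng.normal(0, 1.0, n)
duration = (200 + 4.0 * word_len - 6.0 * log_freq
            + 2.0 * location + rng.normal(0, 5.0, n))

def ols(columns, y):
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta

# Model 1: predict fixation location from the covariates.
_, loc_hat = ols([word_len, log_freq], location)
loc_res = location - loc_hat          # "residual fixation location"

# Model 2: duration on a direct covariate plus the predicted and
# residual location terms from model 1.
beta2, _ = ols([log_freq, loc_hat, loc_res], duration)
print("coef(log_freq, loc_hat, loc_res):", np.round(beta2[1:], 2))
```

Because the residual location is orthogonal to the stage-1 covariates by construction, its stage-2 coefficient isolates the oculomotor-error pathway, while the predicted-location term carries the covariate-mediated (indirect) effects.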
A series of new sulfobetaine methacrylates, including nitrogen-containing saturated heterocycles, was synthesised by systematically varying the substituents of the zwitterionic group. Radical polymerisation via the RAFT (reversible addition–fragmentation chain transfer) method in trifluoroethanol proceeded smoothly and was well controlled, yielding polymers with predictable molar masses. Molar mass analysis and control of the end-group fidelity were facilitated by end-group labeling with a fluorescent dye. The polymers showed distinct thermo-responsive behaviour of the UCST (upper critical solution temperature) type in aqueous solution, which could not be simply correlated to their molecular structure via an incremental analysis of the hydrophilic and hydrophobic elements incorporated within them. Increasing the spacer length separating the ammonium and the sulfonate groups of the zwitterion moiety from three to four carbons increased the phase transition temperatures markedly, whereas increasing the length of the spacer separating the ammonium group and the carboxylate ester group on the backbone from two to three carbons provoked the opposite effect. Moreover, the phase transition temperatures of the analogous polyzwitterions decreased in the order dimethylammonio > morpholinio > piperidinio alkanesulfonates. In addition to the basic effect of the polymers’ precise molecular structure, the concentration and the molar mass dependence of the phase transition temperatures were studied. Furthermore, we investigated the influence of added low molar mass salts on the aqueous-phase behaviour for sodium chloride and sodium bromide as well as sodium and ammonium sulfate. The strong effects evolved in a complex way with the salt concentration. The strength of these effects depended on the nature of the anion added, increasing in the order sulfate < chloride < bromide, thus following the empirical Hofmeister series. In contrast, no significant differences were observed when changing the cation, i.e. when adding sodium or ammonium sulfate.
The visceral protein transthyretin (TTR) is frequently affected by oxidative post-translational protein modifications (PTPMs) in various diseases. Thus, better insight into structure-function relationships due to oxidative PTPMs of TTR should contribute to the understanding of pathophysiologic mechanisms. While the in vivo analysis of TTR in mammalian models is complex, time- and resource-consuming, transgenic Caenorhabditis elegans expressing hTTR provide an optimal model for the in vivo identification and characterization of drug-mediated oxidative PTPMs of hTTR by means of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). Herein, we demonstrated that hTTR is expressed in all developmental stages of Caenorhabditis elegans, enabling the analysis of hTTR metabolism during the whole life cycle. The suitability of the applied model was verified by exposing worms to D-penicillamine and menadione. Both drugs induced substantial changes in the oxidative PTPM pattern of hTTR. Additionally, a covalent binding of both drugs to hTTR was identified for the first time and verified by molecular modelling.
The present study approaches the Spanish postposed constructions creo Ø and creo yo ‘[p], [I] think’ from a cognitive-constructionist perspective. It is argued that both constructions are to be distinguished from one another because creo Ø has a subjective function, while in creo yo, it is the intersubjective dimension that is particularly prominent. The present investigation takes both a qualitative and a quantitative perspective. With regard to the latter, the problem of quantitative representativity is addressed. The discussion posed the question of how empirical research can feed back into theory, more precisely, into the framework of Cognitive Construction Grammar. The data to be analyzed here are retrieved from the corpora Corpus de Referencia del Español Actual and Corpus del Español.
The molecular ability to selectively and efficiently convert sunlight into other forms of energy like heat, bond change, or charge separation is truly remarkable. The decisive steps in these transformations often happen on a femtosecond timescale and require transitions among different electronic states that violate the Born-Oppenheimer approximation (BOA). Non-BOA transitions pose challenges to both theory and experiment. From a theoretical point of view, excited-state dynamics and nonadiabatic transitions are both difficult problems (see Figure 1(a)). However, the theory of non-BOA dynamics has advanced significantly over the last two decades. Full dynamical simulations for molecules of the size of nucleobases have been possible for a couple of years and allow predictions of experimental observables like photoelectron energy or ion yield. The availability of these calculations for isolated molecules has spurred new experimental efforts to develop methods that go beyond all-optical techniques. For the determination of transient molecular structures, femtosecond X-ray diffraction and electron diffraction have been implemented on optically excited molecules.
Hantaviruses are zoonotic viruses transmitted to humans by persistently infected rodents, giving rise to serious outbreaks of hemorrhagic fever with renal syndrome (HFRS) or of hantavirus pulmonary syndrome (HPS), depending on the virus, which are associated with high case fatality rates. There is only limited knowledge about the organization of the viral particles and, in particular, about the hantavirus membrane fusion glycoprotein Gc, the function of which is essential for virus entry. We describe here the X-ray structures of Gc from Hantaan virus, the type species of the hantaviruses and a cause of HFRS, both in its neutral-pH, monomeric pre-fusion conformation and in its acidic-pH, trimeric post-fusion form. The structures confirm the prediction that Gc is a class II fusion protein, containing the characteristic beta-sheet-rich domains termed I, II and III as initially identified in the fusion proteins of arboviruses such as alpha- and flaviviruses. The structures also show a number of features of Gc that are distinct from arbovirus class II proteins. In particular, instead of having a single "fusion loop", hantavirus Gc inserts residues from three different loops into the target membrane to drive fusion, as confirmed functionally by structure-guided mutagenesis on the HPS-inducing Andes virus. We further show that the membrane-interacting region of Gc becomes structured only at acidic pH, via a set of polar and electrostatic interactions. Furthermore, the structure reveals that hantavirus Gc has an additional N-terminal "tail" that is crucial in stabilizing the post-fusion trimer, accompanying the swapping of domain III in the quaternary arrangement of the trimer as compared to the standard class II fusion proteins. The mechanistic understanding derived from these data is likely to provide a unique handle for devising treatments against these human pathogens.
Skipping a grade, one specific form of acceleration, is an intervention used for gifted students. Quantitative research has shown acceleration to be a highly successful intervention regarding academic achievement, but less is known about the social-emotional outcomes of grade-skipping. In the present study, the authors used the grounded theory approach to examine the experiences of seven gifted students aged 8 to 16 years who had skipped a grade. The interviewees perceived their feeling of being in the wrong place before the grade-skipping as strongly influenced by their teachers, who generally did not respond adequately to their needs. We observed a close interrelationship between the gifted students' intellectual fit and their social situation in class. Findings showed that in most cases the grade-skipping improved the situation in school intellectually as well as socially, but further interventions, for instance a specialized and demanding class or subject-specific acceleration, were soon added to provide sufficiently challenging learning opportunities.
Background
It has been demonstrated that core strength training is an effective means to enhance trunk muscle strength (TMS) and proxies of physical fitness in youth. Of note, cross-sectional studies revealed that the inclusion of unstable elements in core strengthening exercises produces increases in trunk muscle activity and thus provides potential extra training stimuli for performance enhancement. Thus, utilizing unstable surfaces during core strength training may produce even larger performance gains. However, the effects of core strength training using unstable surfaces are unresolved in youth. This randomized controlled study specifically investigated the effects of core strength training performed on stable surfaces (CSTS) compared to unstable surfaces (CSTU) on physical fitness in school-aged children.
Methods
Twenty-seven (14 girls, 13 boys) healthy subjects (mean age: 14 ± 1 years, age range: 13–15 years) were randomly assigned to a CSTS (n = 13) or a CSTU (n = 14) group. Both training programs lasted 6 weeks (2 sessions/week) and included frontal, dorsal, and lateral core exercises. During CSTU, these exercises were conducted on unstable surfaces (e.g., TOGU© DYNAIR CUSHIONS, THERA-BAND© STABILITY TRAINER).
Results
Significant main effects of Time (pre vs. post) were observed for the TMS tests (8-22%, f = 0.47-0.76), the jumping sideways test (4-5%, f = 1.07), and the Y balance test (2-3%, f = 0.46-0.49). Trends towards significance were found for the standing long jump test (1-3%, f = 0.39) and the stand-and-reach test (0-2%, f = 0.39). We could not detect any significant main effects of Group. Significant Time x Group interactions were detected for the stand-and-reach test in favour of the CSTU group (2%, f = 0.54).
Conclusions
Core strength training resulted in significant increases in proxies of physical fitness in adolescents. However, CSTU as compared to CSTS had only limited additional effects (i.e., stand-and-reach test). Consequently, if the goal of training is to enhance physical fitness, then CSTU has limited advantages over CSTS.
Effects of resistance training in youth athletes on muscular fitness and athletic performance
(2016)
During the stages of long-term athlete development (LTAD), resistance training (RT) is an important means for (i) stimulating athletic development, (ii) tolerating the demands of long-term training and competition, and (iii) inducing long-term health-promoting effects that are robust over time and track into adulthood. However, there is a gap in the literature with regard to optimal RT methods during LTAD and how RT is linked to biological age. Thus, the aims of this scoping review were (i) to describe and discuss the effects of RT on muscular fitness and athletic performance in youth athletes, (ii) to introduce a conceptual model on how to appropriately implement different types of RT within LTAD stages, and (iii) to identify research gaps in the existing literature by deducing implications for future research. In general, RT produced small-to-moderate effects on muscular fitness and athletic performance in youth athletes, with muscular strength showing the largest improvement. Free-weight, complex, and plyometric training appear to be well-suited to improve muscular fitness and athletic performance. In addition, balance training appears to be an important preparatory (facilitating) training program during all stages of LTAD, but particularly during the early stages. As youth athletes become more mature, the specificity and intensity of RT methods increase. This scoping review identified research gaps that are summarized in the following and that should be addressed in future studies: (i) to elucidate the influence of gender and biological age on the adaptive potential following RT in youth athletes (especially in females), (ii) to describe RT protocols in more detail (i.e., always report stress- and strain-based parameters), and (iii) to examine neuromuscular and tendomuscular adaptations following RT in youth athletes.
Can the statistical properties of single-electron transfer events be correctly predicted within a common equilibrium ensemble description? This question of ergodic behavior, fundamental in the nanoworld, is scrutinized within a very basic semi-classical curve-crossing problem. It is shown that in the limit of non-adiabatic electron transfer (weak tunneling), well described by the Marcus–Levich–Dogonadze (MLD) rate, the answer is yes. However, in the limit of so-called solvent-controlled adiabatic electron transfer, a profound breaking of ergodicity occurs. Namely, a common description based on the ensemble reduced density matrix with an initial equilibrium distribution of the reaction coordinate is not able to reproduce the statistics of single-trajectory events in this seemingly classical regime. For sufficiently large activation barriers, the ensemble survival probability in a state remains nearly exponential, with the inverse rate given by the sum of the adiabatic curve-crossing (Kramers) time and the inverse MLD rate. In contrast, near the adiabatic regime, the single-electron survival probability is clearly non-exponential, even though it possesses an exponential tail which agrees well with the ensemble description. Initially, it is well described by a Mittag-Leffler distribution with a fractional rate. Paradoxically, in this regime, which is classical at the ensemble level, the mean transfer time is well described by the inverse of the non-adiabatic quantum tunneling rate at the single-particle level. An analytical theory is developed which agrees perfectly with stochastic simulations and explains our findings.
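To make the non-exponential decay concrete, the sketch below evaluates the one-parameter Mittag-Leffler survival function from its power series and compares it with the exponential ensemble prediction; the timescale and fractional exponent are arbitrary illustrative values, not fitted to the paper's simulations:

```python
import math

def mittag_leffler(z, alpha, n_terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) via its
    power series; adequate for the moderate |z| used below."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))

# Survival probability P(t) = E_alpha(-(t/tau)^alpha): a stretched,
# non-exponential decay for alpha < 1; for alpha = 1 it reduces to
# the plain exponential exp(-t/tau).
tau = 1.0
for t in (0.5, 1.0, 2.0):
    p_frac = mittag_leffler(-(t / tau) ** 0.8, 0.8)
    p_exp = mittag_leffler(-(t / tau), 1.0)  # equals exp(-t/tau)
    print(f"t={t}: ML(alpha=0.8) {p_frac:.3f}  exponential {p_exp:.3f}")
```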
The semantics of focus particles like only requires a set of alternatives (Rooth, 1992). In two experiments, we investigated the impact of such particles on the retrieval of alternatives that are mentioned in the prior context or unmentioned. The first experiment used a probe recognition task and showed that focus particles interfere with the recognition of mentioned alternatives and the rejection of unmentioned alternatives relative to a condition without a particle. A second lexical decision experiment demonstrated priming effects for mentioned and unmentioned alternatives (compared with an unrelated condition) while focus particles caused additional interference effects. Overall, our results indicate that focus particles trigger an active search for alternatives and lead to a competition between mentioned alternatives, unmentioned alternatives, and the focused element.
Decreasing groundwater levels in many parts of Germany and decreasing low flows in Central Europe have created a need for adaptation measures to stabilize the water balance and to increase low flows. The objective of our study was to estimate the impact of ditch water level management on stream-aquifer interactions in small lowland catchments of the mid-latitudes. The water balance of a ditch-irrigated area and fluxes between the subsurface and the adjacent stream were modeled for three runoff recession periods using the Hydrus-2D software package. The results showed that the subsurface flow to the stream was closely related to the difference between the water level in the ditch system and the stream. Evapotranspiration during the growing season additionally reduced base flow. It was crucial to stop irrigation during a recession period to decrease water withdrawal from the stream and enhance the base flow by draining the irrigated area. Mean fluxes to the stream were between 0.04 and 0.64 l s(-1) for the first 20 days of the low-flow periods. This only slightly increased the flow in the stream, whose mean was 57 l s(-1) during the period with the lowest flows. Larger areas would be necessary to effectively increase flows in mesoscale catchments.
What are the physical laws of the mutual interactions of objects bound to cell membranes, such as various membrane proteins or elongated virus particles? To rationalise this, we here investigate, by extensive computer simulations, the mutual interactions of rod-like particles adsorbed on the surface of responsive elastic two-dimensional sheets. Specifically, we quantify sheet deformations as a response to adhesion of such filamentous particles. We demonstrate that tip-to-tip contacts of rods are favoured for relatively soft sheets, while side-by-side contacts are preferred for stiffer elastic substrates. These attractive orientation-dependent substrate-mediated interactions between the rod-like particles on responsive sheets can drive their aggregation and self-assembly. The optimal orientation of the membrane-bound rods is established via responding to the elastic energy profiles created around the particles. We unveil the phase diagram of attractive–repulsive rod–rod interactions in the plane of their separation and mutual orientation. Applications of our results to other systems featuring membrane-associated particles are also discussed.
Due to their multifunctionality, tablets offer tremendous advantages for research on handwriting dynamics or for interactive use of learning apps in schools. Further, the widespread use of tablet computers has had a great impact on handwriting in the current generation. But is it advisable to teach how to write, and to assess handwriting, in pre- and primary schoolchildren on tablets rather than on paper? Since handwriting is not automatized before the age of 10 years, children's handwriting movements require graphomotor and visual feedback as well as permanent control of movement execution during handwriting. Modifications in writing conditions, for instance the smoother writing surface of a tablet, might influence handwriting performance in general, and in particular that of non-automatized beginning writers. In order to investigate how handwriting performance is affected by a difference in friction of the writing surface, we recruited three groups with varying levels of handwriting automaticity: 25 preschoolers, 27 second graders, and 25 adults. We administered three tasks measuring graphomotor abilities, visuomotor abilities, and handwriting performance (only second graders and adults). We evaluated two aspects of handwriting performance: handwriting quality, with a visual score, and handwriting dynamics, using online handwriting measures [e.g., writing duration, writing velocity, strokes, and number of inversions in velocity (NIV)]. In particular, NIVs, which describe the number of velocity peaks during handwriting, are directly related to the level of handwriting automaticity. In general, we found differences between writing on paper and writing on the tablet. These differences were partly task-dependent. The comparison between tablet and paper revealed a faster writing velocity for all groups and all tasks on the tablet, which indicates that all participants—even the experienced writers—were influenced by the lower friction of the tablet surface. Our results for the group comparison show advancing levels of handwriting automaticity from preschoolers to second graders to adults, which confirms that our method depicts handwriting performance in groups with varying degrees of handwriting automaticity. We conclude that the smoother tablet surface requires additional control of handwriting movements and therefore might present an additional challenge for learners of handwriting.
An egalitarian approach to the fair representation of voters specifies three main institutional requirements: proportional representation, legislative majority rule and a parliamentary system of government. This approach faces two challenges: the under-determination of the resulting democratic process and the idea of a trade-off between equal voter representation and government accountability. Linking conceptual with comparative analysis, the article argues that we can distinguish three ideal-typical varieties of the egalitarian vision of democracy, based on the stages at which majorities are formed. These varieties do not put different relative normative weight onto equality and accountability, but have different conceptions of both values and their reconciliation. The view that accountability is necessarily linked to 'clarity of responsibility', widespread in the comparative literature, is questioned - as is the idea of a general trade-off between representation and accountability. Depending on the vision of democracy, the two values need not be in conflict.
The agricultural transition profoundly changed human societies. We sequenced and analysed the first genome (1.39x) of an early Neolithic woman from Ganj Dareh, in the Zagros Mountains of Iran, a site with early evidence for an economy based on goat herding, ca. 10,000 BP. We show that Western Iran was inhabited by a population genetically most similar to hunter-gatherers from the Caucasus, but distinct from the Neolithic Anatolian people who later brought food production into Europe. The inhabitants of Ganj Dareh made little direct genetic contribution to modern European populations, suggesting those of the Central Zagros were somewhat isolated from other populations of the Fertile Crescent. Runs of homozygosity are of a similar length to those from Neolithic farmers, and shorter than those of Caucasus and Western Hunter-Gatherers, suggesting that the inhabitants of Ganj Dareh did not undergo the large population bottleneck suffered by their northern neighbours. While some degree of cultural diffusion between Anatolia, Western Iran and other neighbouring regions is possible, the genetic dissimilarity between early Anatolian farmers and the inhabitants of Ganj Dareh supports a model in which Neolithic societies in these areas were distinct.
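The run-of-homozygosity comparison rests on extracting lengths of consecutive homozygous calls along the genome. A toy sketch, assuming a 0/1/2 genotype encoding (1 = heterozygous) and a simple site-count threshold rather than the genetic-length and density criteria of production ROH callers:

```python
def runs_of_homozygosity(genotypes, min_len=4):
    """Return lengths of runs of consecutive homozygous calls (0 or 2)
    that span at least min_len sites; 1 denotes a heterozygous call.
    A toy site-count criterion, not a production ROH caller."""
    runs, current = [], 0
    for g in genotypes:
        if g in (0, 2):
            current += 1
        else:
            if current >= min_len:
                runs.append(current)
            current = 0
    if current >= min_len:          # close a run that reaches the end
        runs.append(current)
    return runs

calls = [0, 0, 2, 0, 1, 2, 2, 1, 0, 0, 0, 0, 2, 2]
print(runs_of_homozygosity(calls))  # -> [4, 6]
```

Longer and more abundant runs indicate a smaller recent effective population size, which is the logic behind the bottleneck comparison in the abstract.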
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 degrees C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged much faster back into the ocean than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. A millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to raise the potential energy of enough ocean water to mitigate the currently observed 3 mm yr(-1) of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
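The pumping-energy figure can be sanity-checked with rounded constants; the ocean area, lift height, pump efficiency, and global primary-energy total below are our own illustrative assumptions, not values taken from the study:

```python
# Back-of-envelope check of the pumping-energy claim. All constants are
# rounded assumptions for illustration, not the paper's inputs.
OCEAN_AREA = 3.6e14        # m^2, global ocean surface
RHO = 1.0e3                # kg m^-3, water density (rounded)
G = 9.81                   # m s^-2
SLR_RATE = 3e-3            # m yr^-1, observed sea-level rise
LIFT_HEIGHT = 3000.0       # m, assumed interior Antarctic elevation
EFFICIENCY = 0.7           # assumed pump efficiency
GLOBAL_PRIMARY = 5.8e20    # J yr^-1 (~160,000 TWh, rounded)

mass_per_year = OCEAN_AREA * SLR_RATE * RHO                 # kg yr^-1
energy_per_year = mass_per_year * G * LIFT_HEIGHT / EFFICIENCY
share = energy_per_year / GLOBAL_PRIMARY
print(f"pumping energy: {energy_per_year:.2e} J/yr "
      f"({100 * share:.1f}% of global primary energy)")
```

With these round numbers the share lands close to the stated 7%; it scales linearly with the assumed lift height and inversely with the pump efficiency.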
Processes involved in late bilinguals' production of morphologically complex words were studied using an event-related brain potentials (ERP) paradigm in which EEGs were recorded during participants' silent productions of English past- and present-tense forms. Twenty-three advanced second language speakers of English (first language [L1] German) were compared to a control group of 19 L1 English speakers from an earlier study. We found a frontocentral negativity for regular relative to irregular past-tense forms (e.g., asked vs. held) during (silent) production, and no difference for the present-tense condition (e.g., asks vs. holds), replicating the ERP effect obtained for the L1 group. This ERP effect suggests that combinatorial processing is involved in producing regular past-tense forms, in both late bilinguals and L1 speakers. We also suggest that this paradigm is a useful tool for future studies of online language production.
The concept of similitude is commonly employed in the fields of fluid dynamics and engineering but rarely used in cryospheric research. Here we apply this method to the problem of ice flow to examine the dynamic similitude of isothermal ice sheets in shallow-shelf approximation against the scaling of their geometry and physical parameters. Carrying out a dimensional analysis of the stress balance we obtain dimensionless numbers that characterize the flow. Requiring that these numbers remain the same under scaling we obtain conditions that relate the geometric scaling factors, the parameters for the ice softness, surface mass balance and basal friction as well as the ice-sheet intrinsic response time to each other. We demonstrate that these scaling laws are the same for both the (two-dimensional) flow-line case and the three-dimensional case. The theoretically predicted ice-sheet scaling behavior agrees with results from numerical simulations that we conduct in flow-line and three-dimensional conceptual setups. We further investigate analytically the implications of geometric scaling of ice sheets for their response time. With this study we provide a framework which, under several assumptions, allows for a fundamental comparison of the ice-dynamic behavior across different scales. It proves to be useful in the design of conceptual numerical model setups and could also be helpful for designing laboratory glacier experiments. The concept might also be applied to real-world systems, e.g., to examine the response times of glaciers, ice streams or ice sheets to climatic perturbations.
The strong adhesion of sub-micron sized particles to surfaces is a nuisance, both for removing contaminating colloids from surfaces and for conscious manipulation of particles to create and test novel micro/nano-scale assemblies. The obvious idea of using detergents to ease these processes suffers from a lack of control: the action of any conventional surface-modifying agent is immediate and global. With photosensitive azobenzene containing surfactants we overcome these limitations. Such photo-soaps contain optical switches (azobenzene molecules), which upon illumination with light of appropriate wavelength undergo reversible trans-cis photo-isomerization resulting in a subsequent change of the physico-chemical molecular properties. In this work we show that when a spatial gradient in the composition of trans- and cis- isomers is created near a solid-liquid interface, a substantial hydrodynamic flow can be initiated, the spatial extent of which can be set, e.g., by the shape of a laser spot. We propose the concept of light induced diffusioosmosis driving the flow, which can remove, gather or pattern a particle assembly at a solid-liquid interface. In other words, in addition to providing a soap we implement selectivity: particles are mobilized and moved at the time of illumination, and only across the illuminated area.
Do properties of individual languages shape the mechanisms by which they are processed? By virtue of their non-concatenative morphological structure, the recognition of complex words in Semitic languages has been argued to rely strongly on morphological information and on decomposition into root and pattern constituents. Here, we report results from a masked priming experiment in Hebrew in which we contrasted verb forms belonging to two morphological classes, Paal and Piel, which display similar properties, but crucially differ on whether they are extended to novel verbs. Verbs from the open-class Piel elicited familiar root priming effects, but verbs from the closed-class Paal did not. Our findings indicate that, similarly to other (e.g., Indo-European) languages, down-to-the-root decomposition in Hebrew does not apply to stems of non-productive verbal classes. We conclude that the Semitic word processor is less unique than previously thought: Although it operates on morphological units that are combined in a non-linear way, it engages the same universal mechanisms of storage and computation as those seen in other languages.
Magic screens
(2016)
Garcilaso de la Vega el Inca, for several centuries doubtlessly the most discussed and most eminent writer of Andean America in the 16th and 17th centuries, throughout his life set the utmost value on the fact that he descended matrilineally from Atahualpa Yupanqui and from the last Inca emperor, Huayna Capac. Thus, both in his person and in his creative work he combined different cultural worlds in a polylogical way.(1) Two painters boasted that very same Inca descent - they were the last two great masters of the Cuzco school of painting, which over several generations of artists had been an institution of excellent renown and prestige, and whose economic downfall and artistic marginalization was vividly described by the French traveller Paul Marcoy in 1837.(2) While, during the 18th century, Cuzco school paintings were still much cherished and sought after, by the beginning of the following century the elite of Lima regarded them as behind the times and provincial, committed to an 'indigenous' painting style. The artists from up-country - such was the reproach - could not keep up with the modern forms of seeing and creating, as exemplified by European paragons. Yet just how 'provincial', truly, was this art?
The spider mite Tetranychus urticae Koch and the aphid Myzus persicae (Sulzer) both infest a number of economically significant crops, including tomato (Solanum lycopersicum). Although green lacewing larvae Chrysoperla carnea (Stephens) have been used for decades to control pests, their impact on plant biochemistry had not been investigated. Here, we used profiling methods and targeted analyses to explore the impact of the predator and of herbivore(s)-predator interactions on tomato biochemistry. Each pest and pest-predator combination induced a characteristic metabolite signature in the leaf and the fruit; thus, the plant exhibited a systemic response. The treatments had a stronger impact on non-volatile metabolites, including abscisic acid and amino acids, in the leaves in comparison with the fruits. In contrast, the various biotic factors had a greater impact on the carotenoids in the fruits. We identified volatiles such as myrcene and alpha-terpinene which were induced by pest-predator interactions but not by single species, and we demonstrated the involvement of the phytohormone abscisic acid in tritrophic interactions for the first time. More importantly, C. carnea larvae alone impacted the plant metabolome, but the predator did not appear to elicit particular defense pathways on its own. Since the presence of both C. carnea larvae and pest individuals elicited volatiles which were shown to contribute to plant defense, C. carnea larvae could contribute to the reduction of pest infestation not only through their preying activity, but also by priming responses to generalist herbivores such as T. urticae and M. persicae. On the other hand, the use of C. carnea larvae alone did not impact carotenoids and thus was not prejudicial to fruit quality.
The present piece of research highlights the specific impact of predator and tritrophic interactions with green lacewing larvae, spider mites, and aphids on different components of the tomato primary and secondary metabolism for the first time, and provides cues for further in-depth studies aiming to integrate entomological approaches and plant biochemistry.
Linking together the processes of rapid physical erosion and the resultant chemical dissolution of rock is a crucial step in building an overall deterministic understanding of weathering in mountain belts. Landslides, which are the most volumetrically important geomorphic process at these high rates of erosion, can generate extremely high rates of very localised weathering. To elucidate how this process works, we have taken advantage of uniquely intense landsliding, resulting from Typhoon Morakot, in the T'aimali River and surrounds in southern Taiwan. Combining detailed analysis of landslide seepage chemistry with estimates of catchment-by-catchment landslide volumes, we demonstrate that in this setting the primary role of landslides is to introduce fresh, highly labile mineral phases into the surface weathering environment. There, rapid weathering is driven by the oxidation of pyrite and the resultant sulfuric-acid-driven dissolution of primarily carbonate rock. The total dissolved load correlates well with dissolved sulfate - the chief product of this style of weathering - in both landslides and streams draining the area (R² = 0.841 and 0.929, respectively; p < 0.001 in both cases), with solute chemistry in seepage from landslides and catchments affected by significant landsliding governed by the same weathering reactions. The predominance of coupled carbonate-sulfuric-acid-driven weathering is the key difference between these sites and previously studied landslides in New Zealand (Emberson et al., 2016), but in both settings increasing volumes of landslides drive greater overall solute concentrations in streams.
Bedrock landslides, by excavating deep below saprolite-rock interfaces, create conditions for weathering in which all mineral phases in a lithology are initially unweathered within landslide deposits. As a result, the most labile phases dominate the weathering immediately after mobilisation and during a transient period of depletion. This mode of dissolution can strongly alter the overall output of solutes from catchments and their contribution to global chemical cycles if landslide-derived material is retained in catchments for extended periods after mass wasting.
Fruits exhibit a vast array of different 3D shapes, from simple spheres and cylinders to more complex curved forms; however, the mechanism by which growth is oriented and coordinated to generate this diversity of forms is unclear. Here, we compare the growth patterns and orientations for two very different fruit shapes in the Brassicaceae: the heart-shaped Capsella rubella silicle and the near-cylindrical Arabidopsis thaliana silique. We show, through a combination of clonal and morphological analyses, that the different shapes involve different patterns of anisotropic growth during three phases. These experimental data can be accounted for by a tissue level model in which specified growth rates vary in space and time and are oriented by a proximodistal polarity field. The resulting tissue conflicts lead to deformation of the tissue as it grows. The model allows us to identify tissue-specific and temporally specific activities required to obtain the individual shapes. One such activity may be provided by the valve-identity gene FRUITFULL, which we show through comparative mutant analysis to modulate fruit shape during post-fertilisation growth of both species. Simple modulations of the model presented here can also broadly account for the variety of shapes in other Brassicaceae species, thus providing a simplified framework for fruit development and shape diversity.
In Near Edge X-ray Absorption Fine Structure (NEXAFS) spectroscopy, X-ray photons are used to excite tightly bound core electrons to low-lying unoccupied orbitals of the system. This technique offers insight into the electronic structure of the system as well as useful structural information. In this work, we apply NEXAFS to two kinds of imidazolium-based ionic liquids ([CnC1im]+[NTf2]- and [C4C1im]+[I]-). A combination of measurements and quantum chemical calculations of C K and N K NEXAFS resonances is presented. The simulations, based on the transition potential density functional theory method (TP-DFT), reproduce all characteristic features observed in the experiment. Furthermore, a detailed assignment of resonance features to excitation centers (carbon or nitrogen atoms) leads to a consistent interpretation of the spectra.
Sound matters
(2016)
This essay proposes a reorientation in postcolonial studies that takes account of the transcultural realities of the viral twenty-first century. This reorientation entails close attention to actual performances, their specific medial embeddedness, and their entanglement in concrete formal or informal material conditions. It suggests that rather than a focus on print and writing favoured by theories in the wake of the linguistic turn, performed lyrics and sounds may be better suited to guide the conceptual work. Accordingly, the essay chooses a classic of early twenty-first-century digital music – M.I.A.’s 2003/2005 single “Galang” – as its guiding example. It ultimately leads up to a reflection on what Ravi Sundaram coined as “pirate modernity,” which challenges us to rethink notions of artistic authorship and authority, hegemony and subversion, culture and theory in the postcolonial world of today.
The speciation of 2-mercaptopyridine in aqueous solution has been investigated with nitrogen 1s Near Edge X-ray Absorption Fine Structure spectroscopy and time-dependent density functional theory. The prevalence of distinct species as a function of the solvent basicity is established. No indications of dimerization towards high concentrations are found. The determination of different molecular structures of 2-mercaptopyridine in aqueous solution is put into the context of proton transfer in keto-enol and thione-thiol tautomerisms.
„Was ist Migration?“ [“What is migration?”]
(2016)
Even though the appeals to take more serious note of the significance of migration for adult education are impossible to ignore, they have prompted remarkably little categorial work. Foundational theoretical work on the concept of “migration” is still far from exhausted in adult education. Even though individual studies engage with the concept, the impression remains that attempts at categorial clarification stay isolated. The far from easy task of preserving the concept of migration from categorial shutdown remains a serious challenge for adult-education research on migration, insofar as that research is interested in seriously targeting the risks of a hitherto essentialist course.
The extent of gene flow during the range expansion of non-native species influences the amount of genetic diversity retained in expanding populations. Here, we analyse the population genetic structure of the raccoon dog (Nyctereutes procyonoides) in north-eastern and central Europe. This invasive species is of management concern because it is highly susceptible to fox rabies and an important secondary host of the virus. We hypothesized that the large number of introduced animals and the species' dispersal capabilities led to high population connectivity and maintenance of genetic diversity throughout the invaded range. We genotyped 332 tissue samples from seven European countries using 16 microsatellite loci. Different algorithms identified three genetic clusters corresponding to Finland, Denmark and a large 'central' population that reached from introduction areas in western Russia to northern Germany. Cluster assignments provided evidence of long-distance dispersal. The results of an Approximate Bayesian Computation analysis supported a scenario of equal effective population sizes among different pre-defined populations in the large central cluster. Our results are in line with strong gene flow and secondary admixture between neighbouring demes leading to reduced genetic structuring, probably a result of its fairly rapid population expansion after introduction. The results presented here are remarkable in the sense that we identified a homogenous genetic cluster inhabiting an area stretching over more than 1500 km. They are also relevant for disease management, as in the event of a significant rabies outbreak, there is a great risk of a rapid virus spread among raccoon dog populations.
Polysarcosine (Mn = 3650–20 000 g mol−1, Đ ∼ 1.1) was synthesized from the air- and moisture-stable N-phenoxycarbonyl-N-methylglycine. Polymerization was achieved by in situ transformation of the urethane precursor into the corresponding N-methylglycine-N-carboxyanhydride in the presence of a non-nucleophilic tertiary amine base and a primary amine initiator.
Effects of data and model simplification on the results of a wetland water resource management model
(2016)
This paper presents the development of a wetland water balance model for use in a large river basin with many different wetlands. The basic model was primarily developed for a single wetland with a complex water management system involving large amounts of specialized input data and water management details. The aim was to simplify the model structure and to use only commonly available data as input for the model, with the least possible loss of accuracy. Results from different variants of the model and data adaptation were tested against results from a detailed model. This shows that using commonly available data and unifying and simplifying the input data is tolerable up to a certain level. The simplification of the model has greater effects on the evaluated water balance components than the data adaptation. Because this simplification was necessary for large-scale use, we suggest that, for reasons of comparability, simpler models should always be applied with uniform data bases for large regions, though these should only be moderately simplified. Further, we recommend using these simplified models only for large-scale comparisons and using more specific, detailed models for investigations on smaller scales.
Background:
Exercising at intensities where fat oxidation rates are high has been shown to induce metabolic benefits in recreational and health-oriented sportsmen. The exercise intensity (Fatpeak) eliciting peak fat oxidation rates is therefore of particular interest when aiming to prescribe exercise for the purpose of fat oxidation and related metabolic effects. Although running and walking are feasible and popular among the target population, no reliable protocols are available to assess Fatpeak as well as its actual velocity (VPFO) during treadmill ergometry. Our purpose was therefore to assess the reliability and day-to-day variability of VPFO and Fatpeak during treadmill ergometry running.
Methods:
Sixteen recreational athletes (f = 7, m = 9; 25 ± 3 y; 1.76 ± 0.09 m; 68.3 ± 13.7 kg; 23.1 ± 2.9 kg/m²) performed 2 different running protocols on 3 different days with standardized nutrition the day before testing. At day 1, peak oxygen uptake (VO2peak) and the velocities at the aerobic threshold (VLT) and at a respiratory exchange ratio (RER) of 1.00 (VRER) were assessed. At days 2 and 3, subjects ran an identical submaximal incremental test (Fat-peak test) composed of a 10 min warm-up (70% VLT) followed by 5 stages of 6 min with equal increments (stage 1 = VLT, stage 5 = VRER). Breath-by-breath gas exchange data were measured continuously and used to determine fat oxidation rates. A third-order polynomial function was used to identify VPFO and subsequently Fatpeak. The reproducibility and variability of variables were verified with an intraclass correlation coefficient (ICC), Pearson's correlation coefficient, the coefficient of variation (CV) and the mean differences (bias) ± 95% limits of agreement (LoA).
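The peak-detection step of this protocol can be sketched numerically: fit a third-order polynomial to fat oxidation rate versus running velocity and read off the velocity at the fitted maximum. The sketch below is illustrative only, not the authors' analysis code; the stage velocities and oxidation rates are hypothetical values with a peak placed near 9 km/h.

```python
import numpy as np

def v_pfo_from_stages(velocities, fat_ox_rates):
    """Fit a third-order polynomial to fat oxidation rate vs. velocity
    and return the velocity at the fitted maximum (VPFO)."""
    poly = np.poly1d(np.polyfit(velocities, fat_ox_rates, 3))
    # evaluate the fit on a fine grid spanning the tested range
    grid = np.linspace(min(velocities), max(velocities), 1000)
    return grid[np.argmax(poly(grid))]

# hypothetical stage data (km/h, g/min): oxidation peaks near 9 km/h
v = [7.0, 8.0, 9.0, 10.0, 11.0]
fat = [0.38, 0.45, 0.48, 0.42, 0.25]
v_pfo = v_pfo_from_stages(v, fat)
```

Fatpeak would then be obtained by expressing the oxygen uptake measured at `v_pfo` as a percentage of VO2peak.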
Results:
ICC, Pearson's correlation and CV for VPFO and Fatpeak were 0.98, 0.97, 5.0%; and 0.90, 0.81, 7.0%, respectively. Bias ± 95% LoA was −0.3 ± 0.9 km/h for VPFO and −2 ± 8% of VO2peak for Fatpeak.
Conclusion:
In summary, relative and absolute reliability indicators for VPFO and Fatpeak were found to be excellent. The observed LoA may now serve as a basis for future training prescriptions, although fat oxidation rates at prolonged exercise bouts at this intensity still need to be investigated.
We study the adsorption–desorption transition of polyelectrolyte chains onto planar, cylindrical and spherical surfaces with arbitrarily high surface charge densities by massive Monte Carlo computer simulations. We examine in detail how the well-known scaling relations for the threshold transition—demarcating the adsorbed and desorbed domains of a polyelectrolyte near weakly charged surfaces—are altered for highly charged interfaces. By virtue of high surface potentials and large surface charge densities, the Debye–Hückel approximation is often not feasible and the nonlinear Poisson–Boltzmann approach should be implemented. At low salt conditions, for instance, the electrostatic potential from the nonlinear Poisson–Boltzmann equation is smaller than the Debye–Hückel result, such that the required critical surface charge density for polyelectrolyte adsorption σc increases. The nonlinear relation between the surface charge density and electrostatic potential leads to a sharply increasing critical surface charge density with growing ionic strength, imposing an additional limit to the critical salt concentration above which no polyelectrolyte adsorption occurs at all. We contrast our simulation findings with the known scaling results for weak critical polyelectrolyte adsorption onto oppositely charged surfaces for the three standard geometries. Finally, we discuss applications of our results to some physical–chemical and biophysical systems.
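The statement that the nonlinear Poisson–Boltzmann potential lies below the Debye–Hückel result can be illustrated for the planar case, where the nonlinear equation for a 1:1 electrolyte has the closed-form Gouy–Chapman solution. This is a minimal sketch in reduced units (Ψ = eψ/kBT, distances in units of the Debye length; the parameter values are illustrative, not those of the study):

```python
import numpy as np

def psi_dh(psi0, kx):
    """Linearized Debye-Hueckel potential, reduced units (e*psi/kT)."""
    return psi0 * np.exp(-kx)

def psi_pb(psi0, kx):
    """Nonlinear Poisson-Boltzmann (Gouy-Chapman) potential for a planar
    surface in a 1:1 electrolyte, same reduced units."""
    gamma = np.tanh(psi0 / 4.0)  # far-field amplitude saturates at 4*gamma
    return 4.0 * np.arctanh(gamma * np.exp(-kx))

kx = np.linspace(0.1, 5.0, 50)              # distance in Debye lengths
weak = psi_pb(0.1, kx) / psi_dh(0.1, kx)    # ~1: DH is accurate
strong = psi_pb(6.0, kx) / psi_dh(6.0, kx)  # <1: DH overestimates psi
```

For weak surface potentials the two curves coincide, while for Ψ0 ≫ 1 the nonlinear potential lies below the Debye–Hückel one at all distances, consistent with the larger critical surface charge density for adsorption described above.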
Neisseria gonorrhoeae causes one of the most prevalent sexually transmitted diseases worldwide, with more than 100 million new infections per year. A lack of intense research over the last decades and increasing resistance to the recommended antibiotics call for a better understanding of gonococcal infection, fast diagnostics and therapeutic measures against N. gonorrhoeae. Therefore, the aim of this work was to identify novel immunogenic proteins as a first step to advance these unresolved problems. For the identification of immunogenic proteins, pHORF oligopeptide phage display libraries of the entire N. gonorrhoeae genome were constructed. Several immunogenic oligopeptides were identified using polyclonal rabbit antibodies against N. gonorrhoeae. Corresponding full-length proteins of the identified oligopeptides were expressed and their immunogenic character was verified by ELISA. The immunogenic character of six proteins was identified for the first time. A further 13 proteins were verified as immunogenic proteins of N. gonorrhoeae.
Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide high-temporal-resolution data to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, which is a potential source of error and uncertainty. This study compared analyses of crowdsourced land cover data that were contributed by different groups, based on nationality (labelled Gondor and Non-Gondor) and on domain experience (labelled Expert and Non-Expert). The analyses used a geographically weighted model to generate maps of land cover and compared the maps generated by the different groups. The results highlight the differences between the maps and how specific land cover classes were under- and over-estimated. As crowdsourced data and citizen science are increasingly used to replace data collected under designed experiments, this paper highlights the importance of considering between-group variations and their impacts on the results of analyses. Critically, differences in the way that landscape features are conceptualised by different groups of contributors need to be considered when using crowdsourced data in formal scientific analyses. The discussion considers the potential for variation in crowdsourced data and the relativist nature of land cover, and suggests a number of areas for future research. The key finding is that the veracity of citizen science data is not the critical issue per se. Rather, it is important to consider the impacts of differences in the semantics, affordances and functions associated with landscape features held by different groups of crowdsourced data contributors.
The role that climate and environmental history may have played in influencing human evolution has been the focus of considerable interest and controversy among paleoanthropologists for decades. Prior attempts to understand the environmental history side of this equation have centered around the study of outcrop sediments and fossils adjacent to where fossil hominins (ancestors or close relatives of modern humans) are found, or from the study of deep sea drill cores. However, outcrop sediments are often highly weathered and thus are unsuitable for some types of paleoclimatic records, and deep sea core records come from long distances away from the actual fossil and stone tool remains. The Hominin Sites and Paleolakes Drilling Project (HSPDP) was developed to address these issues. The project has focused its efforts on the eastern African Rift Valley, where much of the evidence for early hominins has been recovered. We have collected about 2 km of sediment drill core from six basins in Kenya and Ethiopia, in lake deposits immediately adjacent to important fossil hominin and archaeological sites. Collectively these cores cover in time many of the key transitions and critical intervals in human evolutionary history over the last 4 Ma, such as the earliest stone tools, the origin of our own genus Homo, and the earliest anatomically modern Homo sapiens. Here we document the initial field, physical property, and core description results of the 2012-2014 HSPDP coring campaign.
We show that self-consistent partial synchrony in globally coupled oscillatory ensembles is a general phenomenon. We analyze in detail the appearance and stability properties of this state in possibly the simplest setup of a biharmonic Kuramoto-Daido phase model, as well as demonstrate the effect in limit-cycle relaxational Rayleigh oscillators. Such a regime extends the notion of a splay state from a uniform distribution of phases to an oscillating one. Suitable collective observables such as the Kuramoto order parameter allow detecting the presence of an inhomogeneous distribution. The characteristic and most peculiar property of self-consistent partial synchrony is the difference between the frequency of single units and that of the macroscopic field.
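The biharmonic phase model admits a compact mean-field implementation via the first two Kuramoto-Daido order parameters Z1 = ⟨exp(iφ)⟩ and Z2 = ⟨exp(2iφ)⟩. The sketch below integrates such a model and returns the Kuramoto order parameter |Z1|, the collective observable mentioned above; the coupling strengths and phase lags are illustrative placeholders, not the parameter values used in the study.

```python
import numpy as np

def order_parameter(N=500, steps=4000, dt=0.01, eps1=1.0, eps2=0.4,
                    beta1=1.3, beta2=0.0, omega=1.0, seed=0):
    """Euler integration of a biharmonic Kuramoto-Daido phase model
    dphi_k/dt = omega + eps1*Im[Z1*exp(i*beta1 - i*phi_k)]
                      + eps2*Im[Z2*exp(i*beta2 - 2i*phi_k)],
    where Z1 = <exp(i*phi)> and Z2 = <exp(2i*phi)>.  Returns |Z1|."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z1 = np.mean(np.exp(1j * phi))
        z2 = np.mean(np.exp(2j * phi))
        drive = (eps1 * np.imag(z1 * np.exp(1j * beta1 - 1j * phi))
                 + eps2 * np.imag(z2 * np.exp(1j * beta2 - 2j * phi)))
        phi = (phi + dt * (omega + drive)) % (2.0 * np.pi)
    return np.abs(np.mean(np.exp(1j * phi)))

R = order_parameter()
```

A uniform (splay) phase distribution gives R ≈ 0, full synchrony gives R = 1, and the partially synchronous regimes discussed in the abstract sit in between; tracking R over time, rather than only its final value, reveals the oscillating distributions described above.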
This article first outlines different ways of how psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and morphological variability across languages for psycholinguistic research. The specific phenomena to be examined are to do with stem-formation morphology and inflectional classes; they illustrate how experimental research that is informed by linguistic typology can lead to new insights.
We investigate the ensemble and time averaged mean squared displacements for particle diffusion in a simple model for disordered media by assuming that the local diffusivity is both fluctuating in time and has a deterministic average growth or decay in time. In this study we compare computer simulations of the stochastic Langevin equation for this random diffusion process with analytical results. We explore the regimes of normal Brownian motion as well as anomalous diffusion in the sub- and superdiffusive regimes. We also consider effects of the inertial term on the particle motion. The investigation of the resulting diffusion is performed for unconfined and confined motion.
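A stripped-down version of such a Langevin simulation, keeping only the deterministic power-law growth or decay of the diffusivity and dropping the fluctuating part and the inertial term, is sketched below; it is an assumption-laden illustration of the general approach (scaled Brownian motion), not the authors' exact model.

```python
import numpy as np

def ensemble_msd(alpha=0.5, D0=1.0, T=10.0, dt=0.01, n_traj=2000, seed=1):
    """Euler-Maruyama integration of dx = sqrt(2*D(t)) dW with the
    deterministic power-law diffusivity D(t) = D0*alpha*t**(alpha-1),
    for which the ensemble MSD should scale as <x^2> = 2*D0*t**alpha."""
    rng = np.random.default_rng(seed)
    t = np.arange(dt, T + dt / 2, dt)
    x = np.zeros(n_traj)
    msd = np.empty(t.size)
    for i, ti in enumerate(t):
        D = D0 * alpha * ti ** (alpha - 1.0)
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
        msd[i] = np.mean(x * x)  # ensemble average over trajectories
    return t, msd

t, msd = ensemble_msd()
# anomalous exponent from a log-log fit over the second half of the run
half = t.size // 2
slope = np.polyfit(np.log(t[half:]), np.log(msd[half:]), 1)[0]
```

Here alpha < 1 produces subdiffusion and alpha > 1 superdiffusion; the fitted log-log slope should recover the chosen alpha up to statistical error. Time-averaged MSDs would be computed per trajectory from stored paths, which this minimal sketch omits.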
Background
Vitamin-D-binding protein (VDBP) is a low molecular weight protein that is filtered through the glomerulus as a 25-(OH) vitamin D3/VDBP complex. In the normal kidney, VDBP is reabsorbed and catabolized by proximal tubule epithelial cells, reducing the urinary excretion to trace amounts. Acute tubular injury is expected to result in urinary VDBP loss. The purpose of our study was to explore the potential role of urinary VDBP as a biomarker of acute renal damage.
Method
We included 314 patients with diabetes mellitus or mild renal impairment undergoing coronary angiography and collected blood and urine before and 24 hours after contrast medium (CM) application. Patients were followed for 90 days for the composite endpoint of major adverse renal events (MARE: need for dialysis, doubling of serum creatinine after 90 days, unplanned emergency rehospitalization, or death).
Results
Increased urine VDBP concentration 24 hours after contrast media exposure was predictive for dialysis need (no dialysis: 113.06 ± 299.61 ng/ml, n = 303; need for dialysis: 613.07 ± 700.45 ng/ml, n = 11; mean ± SD, p < 0.001), death (no death during follow-up: 121.41 ± 324.45 ng/ml, n = 306; death during follow-up: 522.01 ± 521.86 ng/ml, n = 8; mean ± SD, p < 0.003) and MARE (no MARE: 112.08 ± 302.00 ng/ml, n = 298; MARE: 506.16 ± 624.61 ng/ml, n = 16; mean ± SD, p < 0.001) during the follow-up of 90 days after contrast media exposure. Correction of urine VDBP concentrations for creatinine excretion confirmed its predictive value and was consistent with increased levels of urinary Kidney Injury Molecule-1 (KIM-1) and baseline plasma creatinine in patients with the above-mentioned complications. The impact of urinary VDBP and KIM-1 on MARE was independent of known CIN risk factors such as anemia, preexisting renal failure, preexisting heart failure, and diabetes.
Conclusions
Urinary VDBP is a promising novel biomarker of major contrast-induced nephropathy-associated events 90 days after contrast media exposure.
Since the economic crisis in 2008, European youth unemployment rates have been persistently high, at around 20% on average. The majority of European countries spend significant resources each year on active labor market programs (ALMP) with the aim of improving the integration prospects of struggling youths. Among the most common programs used are training courses, job search assistance and monitoring, subsidized employment, and public work programs. For policy makers, it is of utmost importance to know which of these programs work and which are able to achieve the intended goals, be it integration into the first labor market or further education. Based on a detailed assessment of the particularities of the youth labor market situation, we discuss the pros and cons of different ALMP types. We then provide a comprehensive survey of the recent evidence on the effectiveness of these ALMP for youth in Europe, highlighting factors that seem to promote or impede their effectiveness in practice. Overall, the findings with respect to employment outcomes are only partly promising. While job search assistance (with and without monitoring) results in overwhelmingly positive effects, we find more mixed effects for training and wage subsidies, whereas the effects for public work programs are clearly negative. The evidence on the impact of ALMP on furthering education participation as well as employment quality is scarce, requiring additional research and allowing only limited conclusions so far.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
Over the past ~40 years, several attempts were made to reintroduce Eurasian lynx to suitable habitat within their former distribution range in Western Europe. In general, limited numbers of individuals have been released to establish new populations. To evaluate the effects of reintroductions on the genetic status of lynx populations we used 12 microsatellite loci to study lynx populations in the Bohemian–Bavarian and Vosges–Palatinian forests. Compared with autochthonous lynx populations, these two reintroduced populations displayed reduced genetic diversity, particularly the Vosges–Palatinian population. Our genetic data provide further evidence to support the status of ‘endangered’ and ‘critically endangered’ for the Bohemian–Bavarian and Vosges–Palatinian populations, respectively. Regarding conservation management, we highlight the need to limit poaching, and advocate additional translocations to bolster genetic variability.
Preface
(2016)
The present study aimed to integrate findings from technology acceptance research with research on applicant reactions to new technology for the emerging selection procedure of asynchronous video interviewing. One hundred and six volunteers experienced asynchronous video interviewing and filled out several questionnaires, including one on the applicants' personalities. In line with previous technology acceptance research, the data revealed that perceived usefulness and perceived ease of use predicted attitudes toward asynchronous video interviewing. Furthermore, openness was found to moderate the relation between perceived usefulness and attitudes toward this particular selection technology. No significant effects emerged for computer self-efficacy, job interview self-efficacy, extraversion, neuroticism, and conscientiousness. Theoretical and practical implications are discussed.
Drugs as instruments
(2016)
Neuroenhancement (NE) is the non-medical use of psychoactive substances to produce a subjective enhancement in psychological functioning and experience. So far, however, empirical investigation of individuals' motivation for NE has been hampered by the lack of a theoretical foundation. This study aimed to apply drug instrumentalization theory to user motivation for NE. We argue that NE should be defined and analyzed from a behavioral perspective rather than in terms of the characteristics of substances used for NE. In the empirical study we explored user behavior by analyzing relationships between drug options (use of over-the-counter products, prescription drugs, illicit drugs) and postulated drug instrumentalization goals (e.g., improved cognitive performance, counteracting fatigue, improved social interaction). Questionnaire data from 1438 university students were subjected to exploratory and confirmatory factor analysis to address the question of whether analysis of drug instrumentalization should be based on the assumption that users are aiming to achieve a certain goal and choose their drug accordingly, or whether NE behavior is more strongly rooted in a decision to try or use a certain drug option. We used factor mixture modeling to explore whether users could be separated into qualitatively different groups defined by a shared "goal X drug option" configuration. Our results indicate, first, that individuals' decisions about NE are ultimately based on their personal attitude to drug options (e.g., willingness to use an over-the-counter product but not to abuse prescription drugs) rather than motivated by the desire to achieve a specific goal (e.g., fighting tiredness) for which different drug options might be tried. Second, data analyses suggested two qualitatively different classes of users.
Both predominantly used over-the-counter products, but "neuroenhancers" might be characterized by a higher propensity to instrumentalize over-the-counter products for virtually all investigated goals whereas "fatigue-fighters" might be inclined to use over-the-counter products exclusively to fight fatigue. We believe that psychological investigations like these are essential, especially for designing programs to prevent risky behavior.
Recent research has indicated that university students sometimes use caffeine pills for neuroenhancement (NE; non-medical use of psychoactive substances or technology to produce a subjective enhancement in psychological functioning and experience), especially during exam preparation. In our factorial survey experiment, we manipulated the evidence participants were given about the prevalence of NE amongst peers and measured the resulting effects on the psychological predictors included in the Prototype-Willingness Model of risk behavior. Two hundred and thirty-one university students were randomized to a high prevalence condition (read faked research results overstating usage of caffeine pills amongst peers by a factor of 5; 50%), a low prevalence condition (half the estimated prevalence; 5%) or a control condition (no information about peer prevalence). Structural equation modeling confirmed that our participants' willingness and intention to use caffeine pills in the next exam period could be explained by their past use of neuroenhancers, their attitude to NE and the subjective norm about use of caffeine pills, whilst the image of the typical user was a much less important factor. Provision of inaccurate information about prevalence reduced the predictive power of attitude with respect to willingness by 40-45%. This may be because receiving information about peer prevalence which does not fit with their perception of the social norm causes people to question their attitude. Prevalence information might exert a deterrent effect on NE via the attitude-willingness association. We argue that research into NE and deterrence of associated risk behaviors should be informed by psychological theory.
In 2002 Germany adopted an ambitious national sustainability strategy, covering all three sustainability spheres and centred on 21 key indicators. The strategy stands out because of its relative stability over five consecutive government constellations, its high status, and its increasingly coercive nature. This article analyses the strategy's role in the policy process, focusing on the use and influence of indicators as a central steering tool. Contrasting rationalist and constructivist perspectives on the role of knowledge in policy, two factors, namely the level of consensus about policy goals and the institutional setting of the indicators, are found to explain differences in use and influence both across indicators and over time. Moreover, the study argues that the indicators have been part of a continuous process of 'structuring' in which conceptual and instrumental use together help structure the sustainability challenge in such a way that it becomes more manageable for government policy.
Background: Dementia is a psychiatric condition whose development is associated with numerous aspects of life. Our aim was to estimate dementia risk factors in German primary care patients.
Methods: The case-control study included primary care patients (70-90 years) with a first diagnosis of dementia (all-cause) during the index period (01/2010-12/2014) (Disease Analyzer, Germany), and controls without dementia matched (1:1) to cases on the basis of age, sex, type of health insurance, and physician. Practice visit records were used to verify that there had been 10 years of continuous follow-up prior to the index date. Multivariate logistic regression models were fitted with dementia as the dependent variable and the potential risk factors as predictors.
Results: The mean age for the 11,956 cases and the 11,956 controls was 80.4 (SD: 5.3) years; 39.0% of them were male and 1.9% had private health insurance. In the multivariate regression model, the following variables were linked to a significant extent with an increased risk of dementia: diabetes (OR: 1.17; 95% CI: 1.10-1.24), lipid metabolism disorders (1.07; 1.00-1.14), stroke incl. TIA (1.68; 1.57-1.80), Parkinson's disease (PD) (1.89; 1.64-2.19), intracranial injury (1.30; 1.00-1.70), coronary heart disease (1.06; 1.00-1.13), mild cognitive impairment (MCI) (2.12; 1.82-2.48), and mental and behavioral disorders due to alcohol use (1.96; 1.50-2.57). The use of statins (OR: 0.94; 0.90-0.99), proton-pump inhibitors (PPI) (0.93; 0.90-0.97), and antihypertensive drugs (0.96; 0.94-0.99) was associated with a decreased risk of developing dementia.
Conclusions: The risk factors for dementia found in this study are consistent with the literature. Nevertheless, the associations between statin, PPI, and antihypertensive drug use and a decreased risk of dementia need further investigation.
School shooters are often described as narcissistic, but empirical evidence is scant. To provide more reliable and detailed information, we conducted an exploratory study, analyzing police investigation files on seven school shootings in Germany and looking for symptoms of narcissistic personality disorder as defined by the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) in witnesses' and offenders' reports and expert psychological evaluations. Three out of four offenders who had been treated for mental disorders prior to the offenses displayed isolated symptoms of narcissism, but none was diagnosed with narcissistic personality disorder. Of the other three, two displayed narcissistic traits; in one case, the number of symptoms would have justified a diagnosis of narcissistic personality disorder. Offenders showed both low and high self-esteem and a range of other mental disorders. Thus, narcissism is not a common characteristic of school shooters, but it is possibly more frequent than in the general population. This should be considered in developing adequate preventive and intervention measures.
Several personality dispositions with common features capturing sensitivities to negative social cues have recently been introduced into psychological research. To date, however, little is known about their interrelations, their conjoint effects on behavior, or their interplay with other risk factors. We asked N = 349 adults from Germany to rate their justice, rejection, moral disgust, and provocation sensitivity, hostile attribution bias, trait anger, and forms and functions of aggression. The sensitivity measures were mostly positively correlated: those with an egoistic focus (victim justice, rejection, and provocation sensitivity) correlated with hostile attributions and trait anger, as did those with an altruistic focus (observer justice, perpetrator justice, and moral disgust sensitivity) with one another. The sensitivity measures had independent and differential effects on forms and functions of aggression when considered simultaneously and when controlling for hostile attributions and anger. They could not be integrated into a single factor of interpersonal sensitivity or reduced to other well-known risk factors for aggression. The sensitivity measures, therefore, require consideration in predicting and preventing aggression.
Although there is ample evidence linking insecure attachment styles and intimate partner violence (IPV), little is known about the psychological processes underlying this association, especially from the victim’s perspective. The present study examined how attachment styles relate to the experience of sexual and psychological abuse, directly or indirectly through destructive conflict resolution strategies, both self-reported and attributed to their opposite-sex romantic partner. In an online survey, 216 Spanish undergraduates completed measures of adult attachment style, engagement and withdrawal conflict resolution styles shown by self and partner, and victimization by an intimate partner in the form of sexual coercion and psychological abuse. As predicted, anxious and avoidant attachment styles were directly related to both forms of victimization. Also, an indirect path from anxious attachment to IPV victimization was detected via destructive conflict resolution strategies. Specifically, anxiously attached participants reported a higher use of conflict engagement by themselves and by their partners. In addition, engagement reported by the self and perceived in the partner was linked to an increased probability of experiencing sexual coercion and psychological abuse. Avoidant attachment was linked to higher withdrawal in conflict situations, but the paths from withdrawal to perceived partner engagement, sexual coercion, and psychological abuse were non-significant. No gender differences in the associations were found. The discussion highlights the role of anxious attachment in understanding escalating patterns of destructive conflict resolution strategies, which may increase the vulnerability to IPV victimization.