This article investigates a public debate in Germany that put a special spotlight on the interaction of standard language ideologies with social dichotomies, centering on the question of whether Kiezdeutsch, a new way of speaking in multilingual urban neighbourhoods, is a legitimate German dialect. Based on a corpus of emails and postings to media websites, I analyse central topoi in this debate and an underlying narrative on language and identity. Central elements of this narrative are claims of cultural elevation and cultural unity for an idealised standard language 'High German', a view of German dialects as part of a national folk culture, and the construction of an exclusive in-group of 'German' speakers who own this language and its dialects. The narrative provides a potent conceptual frame for the Othering of Kiezdeutsch and its speakers, and for the projection of social and sometimes racist delimitations onto the linguistic plane.
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models in recent years. The concept is relatively straightforward, but its usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
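As a minimal sketch of the quantities involved (not code from the study; the function name and the category-movement counts are hypothetical), the event and non-event NRIs can be computed from reclassification counts:

```python
def nri_components(up_events, down_events, n_events,
                   up_nonevents, down_nonevents, n_nonevents):
    """Event and non-event Net Reclassification Improvement.

    'up' = moved to a higher risk category under the new model,
    'down' = moved to a lower risk category.
    """
    nri_event = (up_events - down_events) / n_events
    nri_nonevent = (down_nonevents - up_nonevents) / n_nonevents
    return nri_event, nri_nonevent
```

Confidence ellipses as proposed in the paper would then be constructed around the joint point estimate (nri_event, nri_nonevent) rather than testing their sum against zero.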
A flexible approach to assess fluorescence decay functions in complex energy transfer systems
(2015)
Background: Time-correlated Förster resonance energy transfer (FRET) probes molecular distances with greater accuracy than intensity-based calculation of FRET efficiency and provides a powerful tool to study biomolecular structure and dynamics. Moreover, time-correlated photon count measurements bear additional information on the variety of donor surroundings, allowing more detailed differentiation between distinct structural geometries which are typically inaccessible to general fitting solutions.
Results: Here we develop a new approach based on Monte Carlo simulations of time-correlated FRET events to estimate the time-correlated single photon counts (TCSPC) histograms in complex systems. This simulation solution assesses the full statistics of time-correlated photon counts and distance distributions of fluorescently labeled biomolecules. The simulations are consistent with the theoretical predictions of the dye behavior in FRET systems with defined dye distances and measurements of randomly distributed dye solutions. We validate the simulation results using a highly heterogeneous aggregation system and explore the conditions to use this tool in complex systems.
Conclusion: This approach is powerful in distinguishing distance distributions in a wide variety of experimental setups, thus providing a versatile tool to accurately distinguish between different structural assemblies in highly complex systems.
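To illustrate the kind of simulation described above (a toy sketch, not the authors' implementation; all parameter values and names are assumed), one can Monte Carlo sample donor photon arrival times for a Gaussian donor-acceptor distance distribution:

```python
import random

def simulate_tcspc(n_photons=10000, tau_d=4.0, r0=5.0,
                   mean_r=5.0, sigma_r=0.5, seed=1):
    """Toy Monte Carlo of donor photon arrival times in a FRET system.

    Each emission event draws a donor-acceptor distance r (nm) from a
    Gaussian, converts it into a FRET-quenched total decay rate via the
    Foerster relation k = (1 + (R0/r)^6) / tau_D, and samples an
    exponential arrival time (ns). Histogramming the returned times
    approximates a TCSPC histogram.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_photons):
        r = max(rng.gauss(mean_r, sigma_r), 0.1)   # avoid r <= 0
        k_total = (1.0 + (r0 / r) ** 6) / tau_d    # donor decay + FRET rate
        times.append(rng.expovariate(k_total))
    return times
```

Because each photon carries its own distance draw, the resulting histogram is a rate-averaged (multi-exponential) decay, which is the feature the paper exploits to distinguish distance distributions.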
Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making.
Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation increases as one possible measure of agricultural intensification that could limit the expansion of cropland required in response to climate change and growing food demand. This example shows that current impact model uncertainties pose an important challenge to long-term mitigation planning and must not be ignored in long-term strategic decision making.
Background: Medical training is very demanding and associated with a high prevalence of psychological distress. Compared to the general population, medical students are at a greater risk of developing a psychological disorder. Various attempts at stress management training in medical school have achieved positive results in minimizing psychological distress; however, these studies often have methodological limitations. A rigorous scientific approach is therefore needed. The present study protocol describes a randomized controlled trial to examine the effectiveness of a specifically developed mindfulness-based stress prevention training for medical students that includes selected elements of cognitive behavioral strategies (MediMind).
Methods/Design: This study protocol presents a prospective randomized controlled trial involving four assessment time points: baseline, post-intervention, one-year follow-up and five-year follow-up. The aims include evaluating the effect on stress, coping, psychological morbidity and personality traits with validated measures. Participants are allocated randomly to one of three conditions: MediMind, Autogenic Training or control group. Eligible participants are medical or dental students in the second or eighth semester of a German university. They form a population of approximately 420 students in each academic term. A final total sample size of 126 (at five-year follow-up) is targeted. The trainings (MediMind and Autogenic Training) comprise five weekly sessions lasting 90 minutes each. MediMind will be offered to participants of the control group once the five-year follow-up is completed. Allocation is randomized and stratified by course of study, semester, and gender. After descriptive statistics have been evaluated, inferential statistical analysis will be carried out with a repeated measures ANOVA design with interactions between time and group. Effect sizes will be calculated using partial η² values.
Discussion: Potential limitations of this study are voluntary participation and the risk of attrition, especially among participants allocated to the control group. Strengths are the study design, namely random allocation, follow-up assessment, the use of control groups, and the inclusion of participants at different stages of medical training, which allows for differential analyses.
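The planned effect size is a standard quantity; as a one-line illustration (the function name is ours), partial η² for an ANOVA effect is the effect sum of squares divided by the sum of effect and error sums of squares:

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared effect size for a single ANOVA effect:
    SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)
```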
A multi-reference study of the byproduct formation for a ring-closed dithienylethene photoswitch
(2015)
Photodriven molecular switches are sometimes hindered in their performance by forming byproducts which act as dead ends in sequences of switching cycles, leading to rapid fatigue effects. Understanding the reaction pathways to unwanted byproducts is a prerequisite for preventing them. This article presents a study of the photochemical reaction pathways for byproduct formation in the photochromic switch 1,2-bis-(3-thienyl)-ethene. Specifically, using single- and multi-reference methods, the post-deexcitation reaction towards the byproduct in the electronic ground state S0, starting from the S1–S0 conical intersection (CoIn), is considered in detail. We find an unusual low-energy pathway, which offers the possibility for the formation of a dyotropic byproduct. Several high-energy pathways can be excluded with high probability.
Fluid force microscopy combines the positional accuracy and force sensitivity of an atomic force microscope (AFM) with nanofluidics via a microchanneled cantilever. However, adequate loading and cleaning procedures for such AFM micropipettes are required for various applications. Here, a new frontloading procedure is described for an AFM micropipette functioning as a force- and pressure-controlled microscale liquid dispenser. This frontloading procedure is especially attractive for target substances with high costs or low available amounts. The AFM micropipette could be filled from the tip side with liquid from a previously applied droplet of only a few μL using a short low-pressure pulse. The liquid-loaded AFM micropipettes could then be applied in experiments in air or liquid environments. AFM micropipette frontloading was evaluated with the well-known organic fluorescent dye rhodamine 6G and the AlexaFluor647-labeled antibody goat anti-rat IgG as an example of a larger biological compound. After micropipette usage, specific cleaning procedures were tested. Furthermore, a storage method is described in which the AFM micropipettes could be stored for a few hours up to several days without drying out or clogging of the microchannel. In summary, the rapid, versatile and cost-efficient frontloading and cleaning procedure for the repeated use of a single AFM micropipette is beneficial for various applications, from specific surface modifications through to local manipulation of living cells, and provides simplified and faster handling for established fluid force microscopy experiments.
Abstract gringo
(2015)
This paper defines the syntax and semantics of the input language of the ASP grounder gringo. The definition covers several constructs that were not discussed in earlier work on the semantics of that language, including intervals, pools, division of integers, aggregates with non-numeric values, and lparse-style aggregate expressions. The definition is abstract in the sense that it disregards some details related to representing programs by strings of ASCII characters. It serves as a specification for gringo from Version 4.5 on.
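As an illustration (the snippet is ours, not taken from the paper), several of the constructs the definition covers look as follows in gringo's input language:

```
% intervals expand to sets of facts: p(1). p(2). p(3).
p(1..3).
% pooling: q(a). q(b).
q(a;b).
% integer division inside a term
half(X/2) :- p(X).
% aggregate with an assignment
num(N) :- N = #count{ X : p(X) }.
```

Grounding such a program replaces all variables, intervals, pools, and aggregates by their concrete instances before an ASP solver sees it; the paper's contribution is a precise semantics for exactly these constructs.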
Inventories of individually delineated landslides are a key to understanding landslide physics and mitigating their impact. They permit assessment of area–frequency distributions and landslide volumes, and testing of statistical correlations between landslides and physical parameters such as topographic gradient or seismic strong motion. Amalgamation, i.e. the mapping of several adjacent landslides as a single polygon, can lead to potentially severe distortion of the statistics of these inventories. This problem can be especially severe in data sets produced by automated mapping. We present five inventories of earthquake-induced landslides mapped with different materials and techniques and affected by varying degrees of amalgamation. Errors in the total landslide volume and the power-law exponent of the area–frequency distribution resulting from amalgamation may be up to 200 and 50%, respectively. We present an algorithm based on image and digital elevation model (DEM) analysis for automatic identification of amalgamated polygons. On a set of about 2000 polygons larger than 1000 m2, tracing landslides triggered by the 1994 Northridge earthquake, the algorithm performs well, missing only 2.7–3.6% of incorrectly amalgamated landslides and flagging 3.9–4.8% of correct polygons as amalgams. This algorithm can be used broadly to check landslide inventories and allows faster correction by automating the identification of amalgamation.
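The power-law exponent mentioned above is commonly estimated by maximum likelihood; a minimal sketch under the standard continuous-tail assumption (not the paper's own code; names are ours) is:

```python
import math

def powerlaw_exponent_mle(areas, a_min):
    """Maximum-likelihood estimate of the exponent alpha for a
    continuous power-law tail p(a) ~ a^(-alpha), a >= a_min:
        alpha_hat = 1 + n / sum(ln(a_i / a_min))
    Amalgamated polygons inflate large areas and therefore bias
    this estimate, which is why amalgamation matters statistically."""
    tail = [a for a in areas if a >= a_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(a / a_min) for a in tail)
```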
Sentences with doubly center-embedded relative clauses in which a verb phrase (VP) is missing are sometimes perceived as grammatical, thus giving rise to an illusion of grammaticality. In this paper, we provide a new account of why missing-VP sentences, which are both complex and ungrammatical, lead to an illusion of grammaticality, the so-called missing-VP effect. We propose that the missing-VP effect in particular, and processing difficulties with multiply center-embedded clauses more generally, are best understood as resulting from interference during cue-based retrieval. When processing a sentence with double center-embedding, a retrieval error due to interference can cause the verb of an embedded clause to be erroneously attached into a higher clause. This can lead to an illusion of grammaticality in the case of missing-VP sentences and to processing complexity in the case of complete sentences with double center-embedding. Evidence for an interference account of the missing-VP effect comes from experiments that have investigated the missing-VP effect in German using a speeded grammaticality judgments procedure. We review this evidence and then present two new experiments that show that the missing-VP effect can also be found in German with less restrictive procedures. One experiment was a questionnaire study which required grammaticality judgments from participants without imposing any time constraints. The second experiment used a self-paced reading procedure and did not require any judgments. Both experiments confirm the prior findings of missing-VP effects in German and also show that the missing-VP effect is subject to a primacy effect as known from the memory literature. Based on this evidence, we argue that an account of missing-VP effects in terms of interference during cue-based retrieval is superior to accounts in terms of limited memory resources or in terms of experience with embedded structures.
New V-shaped non-centrosymmetric dyes, possessing a strongly electron-deficient azacyanine core, have been synthesized based on a straightforward two-step approach. The key step in this synthesis involves palladium-catalysed cross-coupling of dibromo-N,N′-methylene-2,2′-azapyridinocyanines with arylacetylenes. The resulting strongly polarized π-expanded heterocycles exhibit green to orange fluorescence and strongly respond to changes in solvent polarity. We demonstrate that differently electron-donating peripheral groups have a significant influence on the internal charge transfer, hence on the solvent effect and fluorescence quantum yield. TD-DFT calculations confirm that, in contrast to the previously studied bis(styryl)azacyanines, the proximity of the S1 and T2 states calculated for compounds bearing two 4-N,N-dimethylaminophenylethynyl moieties establishes good conditions for efficient intersystem crossing and is responsible for their low fluorescence quantum yield. Non-linear properties have also been determined for the new azacyanines, and the results show that, depending on the peripheral groups, the synthesized dyes exhibit small to large two-photon absorption cross sections reaching 4000 GM.
In a recent BAMS article, it is argued that community-based Open Source Software (OSS) could foster scientific progress in weather radar research, and make weather radar software more affordable, flexible, transparent, sustainable, and interoperable.
Nevertheless, it can be challenging for potential developers and users to realize these benefits: tools are often cumbersome to install; different operating systems may have particular issues, or may not be supported at all; and many tools have steep learning curves.
To overcome some of these barriers, we present an open, community-based virtual machine (VM). This VM can be run on any operating system, and guarantees reproducibility of results across platforms. It contains a suite of independent OSS weather radar tools (BALTRAD, Py-ART, wradlib, RSL, and Radx), and a scientific Python stack. Furthermore, it features a suite of recipes that work out of the box and provide guidance on how to use the different OSS tools alone and together. The code to build the VM from source is hosted on GitHub, which allows the VM to grow with its community.
We argue that the VM presents another step toward Open (Weather Radar) Science. It can be used as a quick way to get started, for teaching, or for benchmarking and combining different tools. It can foster the idea of reproducible research in scientific publishing. Being scalable and extendable, it might even allow for real-time data processing.
We expect the VM to catalyze progress toward interoperability, and to lower the barrier for new users and developers, thus extending the weather radar community and user base.
NaYF4:Yb:Er nanoparticles (UCNP) were synthesized under mild experimental conditions to obtain a pure cubic lattice. Upon annealing at different temperatures up to Tan = 700 °C, phase transitions to the hexagonal phase and back to the cubic phase were induced. The UCNP materials obtained for different Tan were characterized with respect to the lattice phase using standard XRD and Raman spectroscopy as well as steady state and time resolved upconversion luminescence. The standard techniques showed that for the annealing temperature range 300 °C < Tan < 600 °C the hexagonal lattice phase was dominant. For Tan < 300 °C hardly any change in the lattice phase could be deduced, whereas for Tan > 600 °C a back transfer to the α-phase was observed. Complementarily, the luminescence upconversion properties of the annealed UCNP materials were characterized in steady state and time resolved luminescence measurements. Distinct differences in the upconversion luminescence intensity, the spectral intensity distribution and the luminescence decay kinetics were found for the cubic and hexagonal lattice phases, respectively, corroborating the results of the standard analytical techniques used. In laser power dependent measurements of the upconversion luminescence intensity it was found that the green (G1, G2) and red (R) emission of Er3+ showed different effects of Tan on the number of required photons, reflecting the differences in the population routes of the different energy levels involved. Furthermore, the intensity ratio Gfull/R is highly affected by the laser power only when the β-phase is present, whereas the G1/G2 intensity ratio is only slightly affected regardless of the crystal phase. Moreover, based on the different upconversion luminescence kinetics characteristic of the cubic and hexagonal phases, time-resolved area-normalized emission spectra (TRANES) proved to be a very sensitive tool to monitor the phase transition between the cubic and hexagonal phases.
Based on the TRANES analysis it was possible to resolve the lattice phase transition in more detail for 200 °C < Tan < 300 °C, which was not possible with the standard techniques.
Background
Body image distortion is highly prevalent among overweight individuals. Whilst there is evidence that body-dissatisfied women and those suffering from disordered eating show a negative attentional bias towards their own unattractive body parts and others’ attractive body parts, little is known about visual attention patterns in the area of obesity and with respect to males. Since eating disorders and obesity share common features in terms of distorted body image and body dissatisfaction, the aim of this study was to examine whether overweight men and women show a similar attentional bias.
Methods/Design
We analyzed eye movements in 30 overweight individuals (18 females) and 28 normal-weight individuals (16 females) with respect to the participants’ own pictures as well as gender- and BMI-matched control pictures (front and back view). Additionally, we assessed body image and disordered eating using validated questionnaires.
Discussion
The overweight sample rated their own body as less attractive and showed a more disturbed body image. Contrary to our assumptions, they focused significantly longer on attractive compared to unattractive regions of both their own and the control body. For one’s own body, this was more pronounced for women. A higher weight status and more frequent body checking predicted attentional bias towards attractive body parts. We found that overweight adults exhibit an unexpected and stable pattern of selective attention, with a distinctive focus on their own attractive body regions despite higher levels of body dissatisfaction. This positive attentional bias may either be an indicator of a more pronounced pattern of attentional avoidance or a self-enhancing strategy. Further research is warranted to clarify these results.
Are individual differences in reading speed related to extrafoveal visual acuity and crowding?
(2015)
Readers differ considerably in their speed of self-paced reading. One factor known to influence fixation durations in reading is the preprocessing of words in parafoveal vision. Here we investigated whether individual differences in reading speed or the amount of information extracted from upcoming words (the preview benefit) can be explained by basic differences in extrafoveal vision, i.e., the ability to recognize peripheral letters with or without the presence of flanking letters. Forty participants were given an adaptive test to determine their eccentricity thresholds for the identification of letters presented either in isolation (extrafoveal acuity) or flanked by other letters (crowded letter recognition). In a separate eye-tracking experiment, the same participants read lists of words from left to right, while the preview of the upcoming words was manipulated with the gaze-contingent moving window technique. Relationships between dependent measures were analyzed on the observational level and with linear mixed models. We obtained highly reliable estimates both for extrafoveal letter identification (acuity and crowding) and measures of reading speed (overall reading speed, size of preview benefit). Reading speed was higher in participants with larger uncrowded windows. However, the strength of this relationship was moderate and it was only observed if other sources of variance in reading speed (e.g., the occurrence of regressive saccades) were eliminated. Moreover, the size of the preview benefit, an important factor in normal reading, was larger in participants with better extrafoveal acuity. Together, these results indicate a significant albeit moderate contribution of extrafoveal vision to individual differences in reading speed.
aspeed
(2015)
Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances and to depend strongly on algorithm parameters. This problem was addressed by means of a simple, yet effective approach using handmade, uniform, and unordered schedules of multiple solvers in ppfolio, which showed very impressive performance in the 2011 Satisfiability Testing (SAT) Competition. Inspired by this, we take advantage of the modeling and solving capacities of Answer Set Programming (ASP) to automatically determine more refined, that is, nonuniform and ordered solver schedules from the existing benchmarking data. We begin by formulating the determination of such schedules as multi-criteria optimization problems and provide corresponding ASP encodings. The resulting encodings are easily customizable for different settings, and the computation of optimum schedules can mostly be done in the blink of an eye, even when dealing with large runtime data sets stemming from many solvers on hundreds to thousands of instances. Also, the fact that our approach can be customized easily enabled us to swiftly adapt it to generate parallel schedules for multi-processor machines.
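A toy stand-in for the schedule evaluation behind this approach (names and data invented; the real system solves the optimization with ASP encodings rather than brute force) can be sketched as:

```python
from itertools import permutations

def solved(schedule, runtimes):
    """Count instances solved by a schedule of (solver, time_slice) pairs.

    runtimes maps each solver name to its per-instance runtimes;
    an instance counts as solved if any scheduled solver finishes
    within its allotted slice.
    """
    n = len(next(iter(runtimes.values())))
    return sum(
        any(runtimes[s][i] <= t for s, t in schedule)
        for i in range(n)
    )

def best_schedule(runtimes, slices):
    """Brute-force the assignment of fixed time slices to solvers that
    maximizes the number of solved instances on the benchmark data."""
    solvers = list(runtimes)
    return max(
        (list(zip(order, slices)) for order in permutations(solvers)),
        key=lambda sched: solved(sched, runtimes),
    )
```

The paper's primary criterion (minimizing timeouts) corresponds to maximizing `solved`; secondary criteria such as total runtime then break ties between equally good schedules.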
An observational measure of anger regulation in middle childhood was developed that facilitated the in situ assessment of five maladaptive regulation strategies in response to an anger-eliciting task. 599 children aged 6-10 years (M = 8.12, SD = 0.92) participated in the study. Construct validity of the measure was examined through correlations with parent- and self-reports of anger regulation and anger reactivity. Criterion validity was established through links with teacher-rated aggression and social rejection measured by parent-, teacher-, and self-reports. The observational measure correlated significantly with parent- and self-reports of anger reactivity, whereas it was unrelated to parent- and self-reports of anger regulation. It also made a unique contribution to predicting aggression and social rejection.
Background
Previous literature mainly introduced cognitive functions to explain performance decrements in dual-task walking, i.e., changes in dual-task locomotion are attributed to limited cognitive information processing capacities. In this study, we extend the existing literature and investigate whether leg muscular capacity plays an additional role in children’s dual-task walking performance.
Methods
To this end, we had prepubescent children (mean age: 8.7 ± 0.5 years, age range: 7–9 years) walk in single task (ST) and while concurrently conducting an arithmetic subtraction task (DT). Additionally, leg lean tissue mass was assessed.
Results
Findings show that both boys and girls significantly decrease their gait velocity (f = 0.73), stride length (f = 0.62) and cadence (f = 0.68) and increase the variability thereof (f = 0.20-0.63) during DT compared to ST. Furthermore, stepwise regressions indicate that leg lean tissue mass is closely associated with step time and the variability thereof during DT (R2 = 0.44, p = 0.009). These associations between gait measures and leg lean tissue mass could not be observed for ST (R2 = 0.17, p = 0.19).
Conclusion
We were able to show a potential link between leg muscular capacities and DT walking performance in children. We interpret these findings as evidence that higher leg muscle mass in children may mitigate the impact of a cognitive interference task on DT walking performance by inducing enhanced gait stability.
The results of streamflow trend studies are often characterized by mostly insignificant trends and inexplicable spatial patterns. In our study region, Western Austria, this applies especially for trends of annually averaged runoff. However, analysing the altitudinal aspect, we found that there is a trend gradient from higher-altitude to lower-altitude stations, i.e. a pattern of mostly positive annual trends at higher stations and negative ones at lower stations. At mid-altitudes, the trends are mostly insignificant. Here we hypothesize that the streamflow trends are caused by the following two main processes: on the one hand, melting glaciers produce excess runoff at higher-altitude watersheds. On the other hand, rising temperatures potentially alter hydrological conditions in terms of less snowfall, higher infiltration, enhanced evapotranspiration, etc., which in turn results in decreasing streamflow trends at lower-altitude watersheds. However, these patterns are masked at mid-altitudes because the resulting positive and negative trends balance each other. To support these hypotheses, we attempted to attribute the detected trends to specific causes. For this purpose, we analysed trends of filtered daily streamflow data, as the causes for these changes might be restricted to a smaller temporal scale than the annual one. This allowed for the explicit determination of the exact days of year (DOYs) when certain streamflow trends emerge, which were then linked with the corresponding DOYs of the trends and characteristic dates of other observed variables, e.g. the average DOY when temperature crosses the freezing point in spring. Based on these analyses, an empirical statistical model was derived that was able to simulate daily streamflow trends sufficiently well. Analyses of subdaily streamflow changes provided additional insights.
Finally, the present study supports many modelling approaches in the literature which found that the main drivers of alpine streamflow changes are increased glacial melt, earlier snowmelt and lower snow accumulation in wintertime.
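As a hedged sketch of the day-of-year trend idea (not the authors' empirical model; the data layout and function names are assumed), a robust trend can be estimated per DOY with the Theil-Sen slope:

```python
import statistics

def theil_sen_slope(years, values):
    """Median of all pairwise slopes: a robust trend estimate that is
    insensitive to outliers in individual years."""
    slopes = [
        (values[j] - values[i]) / (years[j] - years[i])
        for i in range(len(years))
        for j in range(i + 1, len(years))
        if years[j] != years[i]
    ]
    return statistics.median(slopes)

def doy_trends(records):
    """records: dict mapping day-of-year -> list of (year, smoothed
    streamflow) observations. Returns one trend estimate per DOY,
    mirroring the per-DOY trend analysis described above."""
    return {doy: theil_sen_slope(*zip(*obs)) for doy, obs in records.items()}
```

Scanning the resulting per-DOY slopes for sign changes would identify the days of year on which streamflow trends emerge, which can then be matched against dates such as the spring freezing-point crossing.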