Aims:
Although risk scores to predict type 2 diabetes exist, the cost-effectiveness of risk thresholds for targeting preventive interventions is unknown. We applied cost-effectiveness analysis to identify optimal thresholds of predicted risk for targeting a low-cost community-based intervention in the USA.
Methods:
We used a validated Markov-based type 2 diabetes simulation model to evaluate the lifetime cost-effectiveness of alternative thresholds of diabetes risk. Population characteristics for the model were obtained from NHANES 2001-2004 and incidence rates and performance of two noninvasive diabetes risk scores (German diabetes risk score, GDRS, and ARIC 2009 score) were determined in the ARIC and Cardiovascular Health Study (CHS). Incremental cost-effectiveness ratios (ICERs) were calculated for increasing risk score thresholds. Two scenarios were assumed: 1-stage (risk score only) and 2-stage (risk score plus fasting plasma glucose (FPG) test (threshold 100 mg/dl) in the high-risk group).
Results:
In ARIC and CHS combined, the areas under the receiver operating characteristic curve for the GDRS and the ARIC 2009 score were 0.691 (0.677-0.704) and 0.720 (0.707-0.732), respectively. The optimal threshold of predicted diabetes risk (ICER < $50,000/QALY gained in case of intervention in those above the threshold) was 7% for the GDRS and 9% for the ARIC 2009 score. In the 2-stage scenario, ICERs for all cutoffs >= 5% were below $50,000/QALY gained.
Conclusions:
Intervening in those with >= 7% diabetes risk based on the GDRS or >= 9% on the ARIC 2009 score would be cost-effective. A risk score threshold >= 5% together with elevated FPG would also allow targeting interventions cost-effectively.
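The decision rule applied in this abstract compares the incremental cost-effectiveness ratio (ICER) of intervening above a risk threshold against a willingness-to-pay limit of $50,000 per QALY gained. A minimal sketch of that arithmetic, with all cost and QALY numbers invented for illustration (none are from the study):

```python
def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    delta_cost = cost_intervention - cost_comparator
    delta_qaly = qaly_intervention - qaly_comparator
    if delta_qaly <= 0:
        raise ValueError("intervention must gain QALYs for an ICER to be meaningful")
    return delta_cost / delta_qaly

def cost_effective(icer_value, threshold=50_000):
    """Decision rule used in the study: ICER below $50,000 per QALY gained."""
    return icer_value < threshold

# Hypothetical numbers for illustration only: $3,000 extra cost, ~0.1 QALY gained
value = icer(12_000, 9_000, 10.15, 10.05)
print(value)                   # ~ 30,000 dollars per extra QALY
print(cost_effective(value))   # True
```

Lowering the risk threshold enlarges the intervention group, so incremental costs grow faster than incremental QALYs; the optimal threshold is the lowest one whose ICER stays under the limit.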
Due to their ability to capture attention, emotional stimuli tend to benefit from enhanced perceptual processing, which can be helpful when such stimuli are task-relevant but hindering when they are task-irrelevant. Altered emotion-attention interactions have been associated with symptoms of affective disturbances, and emerging research focuses on improving emotion-attention interactions to prevent or treat affective disorders. In line with the Human Affectome Project's emphasis on linguistic components, we also analyzed the language used to describe attention-related aspects of emotion, and highlighted terms related to domains such as conscious awareness, motivational effects of attention, social attention, and emotion regulation. These terms were discussed within a broader review of available evidence regarding the neural correlates of (1) Emotion-Attention Interactions in Perception, (2) Emotion-Attention Interactions in Learning and Memory, (3) Individual Differences in Emotion-Attention Interactions, and (4) Training and Interventions to Optimize Emotion-Attention Interactions. This comprehensive approach enabled an integrative overview of the current knowledge regarding the mechanisms of emotion-attention interactions at multiple levels of analysis, and identification of emerging directions for future investigations.
Do stereotypes strike twice?
(2019)
Stereotypes influence teachers' perception of and behaviour towards students, thus shaping students' learning opportunities. The present study investigated how 315 Australian pre-service teachers' stereotypes about giftedness and gender are related to their perception of students' intellectual ability, adjustment, and social-emotional ability, using an experimental vignette approach and controlling for social desirability in pre-service teachers' responses. Repeated-measures ANOVA showed that pre-service teachers associated giftedness with higher intellectual ability, but with less adjustment compared to average-ability students. Furthermore, pre-service teachers perceived male students as less socially and emotionally competent and less adjusted than female students. Additionally, pre-service teachers seemed to perceive female average-ability students' adjustment as most favourable compared to male average-ability students and gifted students. Findings point to discrepancies between actual characteristics of gifted female and male students and stereotypes in teachers' beliefs. Consequences of stereotyping and implications for teacher education are discussed.
This study addresses the question of whether and how growing up with more than one language shapes a child's language impairment. Our focus is on Specific Language Impairment (SLI) in bilingual (Turkish-German) children. We specifically investigated a range of phenomena related to the so-called CP (Complementizer Phrase) in German, the hierarchically highest layer of syntactic clause structure, which has been argued to be particularly affected in children with SLI. Spontaneous speech data were examined from bilingual children with SLI in comparison to two comparison groups: (i) typically-developing bilingual children, (ii) monolingual children with SLI. We found that despite persistent difficulty with subject-verb agreement, the two groups of children with SLI did not show any impairment of the CP-domain. We conclude that while subject-verb agreement is a suitable linguistic marker of SLI in German-speaking children, for both monolingual and bilingual ones, 'vulnerability of the CP-domain' is not.
Achromatium oxaliferum is a large sulfur bacterium easily recognized by its large intracellular calcium carbonate bodies. Although these bodies often fill major parts of the cells' volume, their role and specific intracellular location are unclear. In this study, we used various microscopy and staining techniques to identify the cell compartment harboring the calcium carbonate bodies. We observed that Achromatium cells often lost their calcium carbonate bodies, either naturally or induced by treatment with diluted acids, ethanol, sodium bicarbonate, or UV radiation; except for UV radiation, these treatments did not visibly affect the overall shape and motility of the cells. The water-soluble fluorescent dye fluorescein easily diffused into the empty cavities remaining after calcium carbonate loss. Membranes (stained with Nile Red) formed a network stretching throughout the cell and surrounding empty or filled calcium carbonate cavities. The cytoplasm (stained with FITC and SYBR Green for nucleic acids) appeared highly condensed and showed spots of dissolved Ca2+ (stained with Fura-2). From our observations, we conclude that the calcium carbonate bodies are located in the periplasm, in extra-cytoplasmic pockets of the cytoplasmic membrane, and are thus kept separate from the cell's cytoplasm. This periplasmic localization of the carbonate bodies might explain their dynamic formation and release upon environmental changes.
Lake sediments are increasingly explored as reliable paleoflood archives. In addition to established flood proxies including detrital layer thickness, chemical composition, and grain size, we explore stable oxygen and carbon isotope data as paleoflood proxies for lakes in catchments with carbonate bedrock geology. In a case study from Lake Mondsee (Austria), we integrate high-resolution sediment trapping at a proximal and a distal location with stable isotope analyses of varved lake sediments to investigate flood-triggered detrital sediment flux. First, we demonstrate a relation between runoff, detrital sediment flux, and isotope values in the sediment trap record covering the period 2011-2013 CE, including 22 events with daily (hourly) peak runoff ranging from 10 (24) m³ s⁻¹ to 79 (110) m³ s⁻¹. The three- to ten-fold lower flood-triggered detrital sediment deposition in the distal trap is well reflected by attenuated peaks in the stable isotope values of trapped sediments. Next, we show that all nine flood-triggered detrital layers deposited in a sediment record from 1988 to 2013 have elevated isotope values compared with endogenic calcite. In addition, even two runoff events that did not cause the deposition of visible detrital layers are distinguished by higher isotope values. Empirical thresholds in the isotope data allow estimation of the magnitudes of most floods, although in some cases flood magnitudes are overestimated because local effects can produce excessively high isotope values. Hence, we present a proof of concept for stable isotopes as a reliable tool for reconstructing flood frequency and, albeit with some limitations, even flood magnitudes.
Experimental studies have reported on the anti-inflammatory properties of polyphenols. However, results from epidemiological investigations have been inconsistent and especially studies using biomarkers for assessment of polyphenol intake have been scant. We aimed to characterise the association between plasma concentrations of thirty-five polyphenol compounds and low-grade systemic inflammation state as measured by high-sensitivity C-reactive protein (hsCRP). A cross-sectional data analysis was performed based on 315 participants in the European Prospective Investigation into Cancer and Nutrition cohort with available measurements of plasma polyphenols and hsCRP. In logistic regression analysis, the OR and 95 % CI of elevated serum hsCRP (>3 mg/l) were calculated within quartiles and per standard deviation higher level of plasma polyphenol concentrations. In a multivariable-adjusted model, the sum of plasma concentrations of all polyphenols measured (per standard deviation) was associated with 29 (95 % CI 50, 1) % lower odds of elevated hsCRP. In the class of flavonoids, daidzein was inversely associated with elevated hsCRP (OR 0.66, 95 % CI 0.46, 0.96). Among phenolic acids, statistically significant associations were observed for 3,5-dihydroxyphenylpropionic acid (OR 0.58, 95 % CI 0.39, 0.86), 3,4-dihydroxyphenylpropionic acid (OR 0.63, 95 % CI 0.46, 0.87), ferulic acid (OR 0.65, 95 % CI 0.44, 0.96) and caffeic acid (OR 0.69, 95 % CI 0.51, 0.93). The odds of elevated hsCRP were significantly reduced for hydroxytyrosol (OR 0.67, 95 % CI 0.48, 0.93). The present study showed that polyphenol biomarkers are associated with lower odds of elevated hsCRP.
Whether diet rich in bioactive polyphenol compounds could be an effective strategy to prevent or modulate deleterious health effects of inflammation should be addressed by further well-powered longitudinal studies.
Accelerating knowledge
(2019)
As knowledge-intensive processes are often carried out in teams and require knowledge transfers among various knowledge carriers, any optimization that accelerates knowledge transfers holds great economic potential. Exemplified with product development projects, knowledge transfers focus on knowledge acquired in former situations and product generations. Adjusting how a knowledge transfer is carried out in its concrete situation, here called an intervention, can therefore be directly connected to the speed optimization of knowledge-intensive process steps. This contribution presents the specification of seven concrete interventions following an intervention template. Further, it describes the design and results of an expert workshop conducted as a descriptive study. The workshop was used to assess the practical relevance of the designed interventions and to identify practical success factors and barriers to their implementation.
This study investigates the effect of different anticonsumption constructs on consumer wellbeing. The study assumes that people will only lower their level of consumption if doing so does not also lower personal wellbeing. More precisely, this research investigates how specific subtypes of sustainable anticonsumption (e.g., voluntary simplicity, collaborative consumption, and debt-free living) relate to different states of consumer's wellbeing (e.g., financial, psychosocial, and subjective wellbeing). This work also examines whether consumer empowerment can improve personal wellbeing and strengthen the anticonsumption wellbeing relationship. The results show that voluntarily foregoing consumption does not reduce wellbeing and consumer empowerment plays a significant role in supporting sustainable pathways to consumer wellbeing. This study reasons that empowerment improves consumer sovereignty, but may be detrimental for consumers heavily concerned about debt-free living. The present investigation concludes by proposing implications for public and consumer policymakers wishing to promote appropriate sustainable (anticonsumption) pathways to consumer wellbeing.
Experimental and kinetic modelling studies are presented to investigate the mechanism of 3,3',5,5'-tetramethylbenzidine (TMB) oxidation by hydrogen peroxide (H2O2) catalyzed by peroxidase-like Pt nanoparticles immobilized in spherical polyelectrolyte brushes (SPB-Pt). Due to the high colloidal stability of SPB-Pt, this reaction can be monitored precisely in situ by UV/VIS spectroscopy. The time-dependent concentration of the blue-colored oxidation product of TMB expressed by different kinetic models was used to simulate the experimental data with a genetic fitting algorithm. After testing the models against abundant experimental data, we find that both H2O2 and TMB must adsorb on the surface of the Pt nanoparticles to react, indicating that the reaction follows the Langmuir-Hinshelwood mechanism. A true rate constant k, characterizing the rate-determining step of the reaction and independent of the amount of catalyst used, is obtained for the first time. Furthermore, the product is found to adsorb strongly on the surface of the nanoparticles, thus inhibiting the reaction. The entire analysis provides a new perspective for studying the catalytic mechanism and evaluating the catalytic activity of peroxidase-like nanoparticles.
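The Langmuir-Hinshelwood mechanism identified in this study implies a rate law in which both H2O2 and TMB occupy surface sites and the adsorbed product competes for them. A hedged sketch of such a rate expression (the functional form is the generic textbook one and all constants are invented, not the study's fitted parameters):

```python
def lh_rate(k, K_h2o2, K_tmb, K_prod, c_h2o2, c_tmb, c_prod):
    """Generic Langmuir-Hinshelwood rate for two co-adsorbed reactants with
    competitive product adsorption (illustrative form, not the paper's fit):
    r = k * K_A*cA * K_B*cB / (1 + K_A*cA + K_B*cB + K_P*cP)**2
    """
    denom = (1.0 + K_h2o2 * c_h2o2 + K_tmb * c_tmb + K_prod * c_prod) ** 2
    return k * K_h2o2 * c_h2o2 * K_tmb * c_tmb / denom

# Product inhibition: the rate drops as the oxidized TMB accumulates
r0 = lh_rate(1.0, 2.0, 3.0, 10.0, 0.5, 0.1, 0.0)
r1 = lh_rate(1.0, 2.0, 3.0, 10.0, 0.5, 0.1, 0.2)
print(r0 > r1)  # True: adsorbed product blocks surface sites
```

The squared denominator is what distinguishes this surface mechanism from solution-phase Michaelis-Menten-type kinetics, and the product term reproduces the inhibition reported above.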
Purpose
After therapy of cancer of the esophagus or the esophagogastric junction, patients often suffer from anxiety and depression. Some risk factors for elevated anxiety and depression are reported, but the influence of steatorrhea, the frequency of which has only recently been reported, has not yet been investigated.
Method
Using the Hospital Anxiety and Depression Scale (HADS), we analyzed the correlation of anxiety and depression with steatorrhea, appetite, and weight loss in 72 patients with cancer of the esophagus or of the esophagogastric junction who were treated at our rehabilitation clinic between January 2011 and December 2014. In addition, the effectiveness of psychological interviews was analyzed.
Results
Evaluable anxiety questionnaires from 51 patients showed a median anxiety value of 5 (range 0-13). For depression, evaluable questionnaires from 54 patients likewise showed a median value of 5 (range 0-15). Increased anxiety and depression values (> 7) were observed in 25.4% and 37.0% of the patients, respectively. Patients admitted for rehabilitation with steatorrhea showed a statistically higher anxiety value (median 6.3 vs. 4.7, p < 0.05); reduced appetite and a weight loss above 15 kg also correlated with anxiety and depression. Psychological interviews helped lower depression but had no influence on anxiety.
Conclusions
Impairments after cancer treatment, such as steatorrhea, appetite loss, and weight loss, should be interpreted as an alarm signal and should prompt screening for increased anxiety and depression. Psychological therapy can help reduce the extent of depression.
Polyhydroxyalkanoates (PHAs) have attracted attention as degradable (co)polyesters which can be produced by microorganisms with variations in the side chain. This structural variation influences not only the thermomechanical properties of the material but also its degradation behavior. Here, we used Langmuir monolayers at the air-water (A-W) interface as suitable models for evaluating the abiotic degradation of two PHAs with different side-chain lengths and crystallinity. By controlling the polymer state (semicrystalline, amorphous), the packing density, the pH, and the degradation mechanism, we could draw several significant conclusions. (i) The maximum degree of crystallinity for a PHA film to be efficiently degraded up to pH = 12.3 is 40%. (ii) PHAs made of repeating units with shorter side chains are more easily hydrolyzed under alkaline conditions; the efficiency of alkaline hydrolysis decreased by about 65% when the polymer was 40% crystalline. (iii) In PHA films with a relatively high initial crystallinity, abiotic degradation initiated a chemicrystallization phenomenon, detected as an increase in the storage modulus (E'). This could translate into an increase in brittleness and a reduction in the material's degradability. Finally, we demonstrate the stability of the measurement system in long-term experiments, which allows degradation conditions that closely simulate real-time degradation of polymers.
Geysers are hot springs whose frequency of water eruptions remains poorly understood. We set up a local broadband seismic network for 1 year at Strokkur geyser, Iceland, and developed an unprecedented catalog of 73,466 eruptions. We detected 50,135 single eruptions but find that the geyser is also characterized by sets of up to six eruptions in quick succession. The number of single to sextuple eruptions decreased exponentially, while the mean waiting time after an eruption increased linearly (3.7 to 16.4 min). While secondary eruptions within double to sextuple eruptions have a smaller mean seismic amplitude, the amplitude of the first eruption is comparable for all eruption types. We statistically model the eruption frequency assuming discharges proportional to the eruption multiplicity and a constant probability for subsequent events within a multiple eruption. The waiting time after an eruption is predictable, but not the type or amplitude of the next one.
Plain Language Summary: Geysers are springs that often erupt in hot water fountains. They erupt more often than volcanoes but are otherwise quite similar. Nevertheless, it is poorly understood how often volcanoes and also geysers erupt. We created a list of 73,466 eruption times of Strokkur geyser, Iceland, from 1 year of seismic data. The geyser erupted one to six times in quick succession. We found 50,135 single eruptions but only 1 sextuple eruption, while the mean waiting time increased from 3.7 min after single eruptions to 16.4 min after sextuple eruptions. Mean amplitudes of each eruption type were higher for single eruptions, but all first eruptions in a succession were similar in height. Assuming a constant heat inflow at depth, we can predict the waiting time after an eruption but not the type or amplitude of the next one.
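The statistical model sketched in this abstract — a constant probability that each eruption is followed by another within the same set — implies a geometric distribution of eruption multiplicity, which reproduces the observed exponential decrease from single to sextuple eruptions. A sketch under that assumption (the number of sets and the follow-on probability below are made-up values, not the fitted ones):

```python
def multiplicity_counts(n_sets, p_follow, max_mult=6):
    """Expected number of k-tuple eruption sets if each eruption is followed by
    another with constant probability p_follow (geometric model; illustrative)."""
    return [n_sets * (p_follow ** (k - 1)) * (1.0 - p_follow)
            for k in range(1, max_mult + 1)]

# Hypothetical: 60,000 eruption sets and a 30% follow-on probability
counts = multiplicity_counts(60_000, p_follow=0.3)
print([round(c) for c in counts])  # exponential decrease across multiplicities
```

Under this model each multiplicity class is rarer than the previous one by the constant factor p_follow, which is exactly the exponential fall-off the catalog shows.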
Primary carbohydrate metabolism in plants includes several sugar and sugar-derivative transport processes. Over recent years, evidence has shown that in starch-related transport processes, in addition to glucose 6-phosphate, maltose, glucose and triose-phosphates, glucose 1-phosphate also plays a role and thereby increases the possible fluxes of sugar metabolites in planta. In this study, we report the characterization of two highly similar transporters, At1g34020 and At4g09810, in Arabidopsis thaliana, which allow the import of glucose 1-phosphate across the plasma membrane. Both transporters were expressed in yeast and biochemically analyzed, revealing an antiport of glucose 1-phosphate/phosphate. Furthermore, we showed that the apoplast of Arabidopsis leaves contains glucose 1-phosphate and that the corresponding mutant of these transporters had higher glucose 1-phosphate amounts in the apoplast and alterations in starch and starch-related metabolism.
Poly(lactide-co-glycolide)s are commercially available degradable implant materials, which are typically selected based on specifications given by the manufacturer, one of which is their molecular weight. Here, we address the question of whether variations in chain length and its distribution affect the degradation behavior of poly[(rac-lactide)-co-glycolide]s (PDLLGA). The hydrolysis was studied in ultrathin films at the air-water interface in order to rule out any morphological effects. We found that both for purely hydrolytic degradation and under enzymatic catalysis, the molecular weight has very little effect on the overall degradation kinetics of PDLLGAs. The quantitative analysis suggested a random scission mechanism. The monolayer experiments showed that an acidic micro-pH does not accelerate the degradation of PDLLGAs, in contrast to alkaline conditions. The degradation experiments were combined with interfacial rheology measurements, which showed a drastic decrease of the viscosity at little mass loss. The extrapolated molecular weight behaved similarly to the viscosity, dropping to a value near the solubility limit of PDLLGA oligomers before mass loss set in. This observation suggests a solubility-controlled degradation of PDLLGA. Conclusively, the molecular weight affects the degradation of PDLLGA devices mostly in indirect ways, e.g. by determining their morphology and porosity during fabrication. Our study demonstrates the relevance of the presented Langmuir degradation method for the design of controlled release systems.
Obligate human pathogenic Neisseria gonorrhoeae are the second most frequent bacterial cause of sexually transmitted diseases. These bacteria invade different mucosal tissues and occasionally disseminate into the bloodstream. Invasion into epithelial cells requires the activation of host cell receptors by the formation of ceramide-rich platforms. Here, we investigated the role of sphingosine in the invasion and intracellular survival of gonococci. Sphingosine exhibited an anti-gonococcal activity in vitro. We used specific sphingosine analogs and click chemistry to visualize sphingosine in infected cells. Sphingosine localized to the membrane of intracellular gonococci. Inhibitor studies and the application of a sphingosine derivative indicated that increased sphingosine levels reduced the intracellular survival of gonococci. We demonstrate here that sphingosine can target intracellular bacteria and may therefore exert a direct bactericidal effect inside cells.
The role of perceived need support from exercise professionals in improving mental health was examined in a sample of older adults, thereby validating the short Health Care Climate Questionnaire. A total of 491 older people (M = 72.68 years; SD = 5.47) attending a health exercise program participated in this study. Cronbach's alpha was found to be high (alpha = .90). Satisfaction with the exercise professional correlated moderately with the short Health Care Climate Questionnaire mean value (r = .38; p < .01). The mediator analyses yielded support for the self-determination theory process model in older adults by showing both basic need satisfaction and frustration as mediating variables between perceived autonomy support and depressive symptoms. The short Health Care Climate Questionnaire is an economical instrument for assessing basic need satisfaction provided by the exercise therapist from the participant's perspective. Furthermore, this cross-sectional study supported the link from coaching style to the satisfaction/frustration of basic psychological needs, which in turn, predicted mental health. Analyses of criterion validity suggest a revision of the construct by integrating need frustration.
Many human infants grow up learning more than one language simultaneously, but only recently has research started to study early language acquisition in this population more systematically. The paper gives an overview of findings on early language acquisition in bilingual infants during the first two years of life and compares these findings to current knowledge of early language acquisition in monolingual infants. Given the state of the research, the overview focuses on phonological and early lexical development in the first two years of life. We will show that the developmental trajectory of early language acquisition in these areas is very similar in mono- and bilingual infants, suggesting that these early steps into language are guided by mechanisms that are rather robust against the differences in language exposure that mono- and bilingual infants typically experience.
Data assimilation aims to blend incomplete and inaccurate data with physics-based dynamical models. In the Earth's radiation belts, it is used to reconstruct electron phase space density, and it has become an increasingly important tool in validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the near-Earth hazardous radiation environment. In this study, we perform reanalysis of the sparse measurements from four spacecraft using the three-dimensional Versatile Electron Radiation Belt diffusion model and a split-operator Kalman filter over a 6-month period from 1 October 2012 to 1 April 2013. In comparison to previous works, our 3-D model accounts for more physical processes, namely, mixed pitch angle-energy diffusion, scattering by Electromagnetic Ion Cyclotron waves, and magnetopause shadowing. We describe how data assimilation, by means of the innovation vector, can be used to account for missing physics in the model. We use this method to identify the radial distances from the Earth and the geomagnetic conditions where our model is inconsistent with the measured phase space density for different values of the invariants mu and K. As a result, the Kalman filter adjusts the predictions in order to match the observations, and we interpret this as evidence of where and when additional source or loss processes are active. The current work demonstrates that 3-D data assimilation provides a comprehensive picture of the radiation belt electrons and is a crucial step toward performing reanalysis using measurements from ongoing and future missions.
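The innovation vector used above to flag missing physics is simply the observation-minus-forecast residual that the Kalman filter weighs against the model and observation uncertainties. A toy scalar analysis step illustrating the idea (all numbers invented; the actual reanalysis uses a 3-D split-operator filter):

```python
def kalman_update(x_forecast, P_forecast, z_obs, R_obs):
    """Scalar Kalman filter analysis step.
    Returns (analysis state, analysis variance, innovation)."""
    innovation = z_obs - x_forecast          # observation minus model forecast
    K = P_forecast / (P_forecast + R_obs)    # Kalman gain
    x_analysis = x_forecast + K * innovation
    P_analysis = (1.0 - K) * P_forecast
    return x_analysis, P_analysis, innovation

# Toy case: the model forecast underestimates the observed phase space density
x, P, innov = kalman_update(x_forecast=1.0, P_forecast=0.5, z_obs=2.0, R_obs=0.5)
print(innov)  # 1.0: a persistently positive innovation hints at a missing source
print(x)      # 1.5: the analysis is pulled toward the observation
```

A systematically positive innovation at a given radial distance would indicate, as in the study, a source process missing from the model; a systematically negative one, a missing loss process.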
Predator-prey cycles rank among the most fundamental concepts in ecology, are predicted by the simplest ecological models and enable, theoretically, the indefinite persistence of predator and prey(1-4). However, it remains an open question for how long cyclic dynamics can be self-sustained in real communities. Field observations have been restricted to a few cycle periods(5-8) and experimental studies indicate that oscillations may be short-lived without external stabilizing factors(9-19). Here we performed microcosm experiments with a planktonic predator-prey system and repeatedly observed oscillatory time series of unprecedented length that persisted for up to around 50 cycles or approximately 300 predator generations. The dominant type of dynamics was characterized by regular, coherent oscillations with a nearly constant predator-prey phase difference. Despite constant experimental conditions, we also observed shorter episodes of irregular, non-coherent oscillations without any significant phase relationship. However, the predator-prey system showed a strong tendency to return to the dominant dynamical regime with a defined phase relationship. A mathematical model suggests that stochasticity is probably responsible for the reversible shift from coherent to non-coherent oscillations, a notion that was supported by experiments with external forcing by pulsed nutrient supply. Our findings empirically demonstrate the potential for infinite persistence of predator and prey populations in a cyclic dynamic regime that shows resilience in the presence of stochastic events.
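The textbook starting point for such cycles is the Lotka-Volterra model, which predicts exactly the kind of self-sustained predator-prey oscillation the experiments observed. A forward-Euler sketch with arbitrary illustrative parameters (the study itself used a more detailed stochastic model):

```python
def lotka_volterra(prey, pred, a=1.0, b=0.5, c=0.5, d=0.2, dt=0.001, steps=20000):
    """Forward-Euler integration of the Lotka-Volterra predator-prey model:
    dN/dt = a*N - b*N*P ;  dP/dt = c*b*N*P - d*P  (illustrative parameters)."""
    trajectory = [(prey, pred)]
    for _ in range(steps):
        dN = (a * prey - b * prey * pred) * dt
        dP = (c * b * prey * pred - d * pred) * dt
        prey, pred = prey + dN, pred + dP
        trajectory.append((prey, pred))
    return trajectory

traj = lotka_volterra(prey=1.0, pred=1.0)
# Populations oscillate around the equilibrium rather than settling to it
prey_values = [n for n, _ in traj]
print(min(prey_values), max(prey_values))
```

In the deterministic model the oscillation persists forever; the experiments' irregular episodes correspond to stochastic departures from such a cycle, which a simple Euler integration like this cannot capture.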
We elaborate on the possibilities and needs to integrate design thinking into requirements engineering, drawing from our research and project experiences. We suggest three approaches for tailoring and integrating design thinking and requirements engineering with complementary synergies and point at open challenges for research and practice.
Three poly(tetrafluoroethylene-hexafluoropropylene-vinylidenefluoride) (TFE-HFP-VDF or THV) terpolymers (Dyneon®) with different monomer ratios are investigated to demonstrate the concept of "modified" PTFE for space-charge electrets. HFP and VDF monomers distort the highly ordered PTFE molecules, which effectively enhances processability but adversely affects space-charge storage. In particular, the VDF component renders the material polar and probably also more conductive, partially undermining the space-charge-storage capabilities of PTFE. Nevertheless, the terpolymer THV815 with a TFE/HFP/VDF wt% ratio of 76.1/10.9/13 combines easy processability and relatively good space-charge stability. Our results shed light on novel concepts for space-charge electret materials with enhanced processing properties and reasonable charge-storage capabilities.
A second peak in the extreme ultraviolet sometimes appears during the gradual phase of solar flares, which is known as the EUV late phase (ELP). Stereotypically ELP is associated with two separated sets of flaring loops with distinct sizes, and it has been debated whether ELP is caused by additional heating or extended plasma cooling in the longer loop system. Here we carry out a survey of 55 M-and-above GOES-class flares with ELP during 2010-2014. Based on the flare-ribbon morphology, these flares are categorized as circular-ribbon (19 events), two-ribbon (23 events), and complex-ribbon (13 events) flares. Among them, 22 events (40%) are associated with coronal mass ejections, while the rest are confined. An extreme ELP, with the late-phase peak exceeding the main-phase peak, is found in 48% of two-ribbon flares, 37% of circular-ribbon flares, and 31% of complex-ribbon flares, suggesting that additional heating is more likely present during ELP in two-ribbon than in circular-ribbon flares. Overall, cooling may be the dominant factor causing the delay of the ELP peak relative to the main-phase peak, because the loop system responsible for the ELP emission is generally larger than, and well separated from, that responsible for the main-phase emission. All but one of the circular-ribbon flares can be well explained by a composite "dome-plate" quasi-separatrix layer (QSL). Only half of these show a magnetic null point, with its fan and spine embedded in the dome and plate, respectively. The dome-plate QSL, therefore, is a general and robust structure characterizing circular-ribbon flares.
Self-propelled rods
(2019)
A wide range of experimental systems, including gliding, swarming and swimming bacteria, in vitro motility assays, and shaken granular media, are commonly described as self-propelled rods. Large ensembles of these entities display a large variety of self-organized collective phenomena, including the formation of moving polar clusters, polar and nematic dynamic bands, motility-induced phase separation, topological defects, and mesoscale turbulence, among others. Here, we give a brief survey of experimental observations and review the theoretical description of self-propelled rods. Our focus is on the emergent pattern formation of ensembles of dry self-propelled rods governed by short-ranged, contact-mediated interactions, and of their wet counterparts, which are also subject to long-ranged hydrodynamic flows. Altogether, self-propelled rods provide an overarching theme covering many aspects of active matter, with well-explored limiting cases. Their collective behavior not only bridges the well-studied regimes of polar self-propelled particles and active nematics and includes active phase separation, but also reveals a rich variety of new patterns.
The German start-up subsidy (SUS) program for the unemployed has recently undergone a major makeover, altering its institutional setup, adding an additional layer of selection and leading to ambiguous predictions of the program's effectiveness. Using propensity score matching (PSM) as our main empirical approach, we provide estimates of long-term effects of the post-reform subsidy on individual employment prospects and labor market earnings up to 40 months after entering the program. Our results suggest large and persistent long-term effects of the subsidy on employment probabilities and net earned income. These effects are larger than what was estimated for the pre-reform program. Extensive sensitivity analyses within the standard PSM framework reveal that the results are robust to different choices regarding the implementation of the weighting procedure and also with respect to deviations from the conditional independence assumption. As a further assessment of the results' sensitivity, we go beyond the standard selection-on-observables approach and employ an instrumental variable setup using regional variation in the likelihood of receiving treatment. Here, we exploit the fact that the reform increased the discretionary power of local employment agencies in allocating active labor market policy funds, allowing us to obtain a measure of local preferences for SUS as the program of choice. The results based on this approach give rise to similar estimates. Thus, our results indicating that SUS are still an effective active labor market program after the reform do not appear to be driven by "hidden bias."
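The matching step behind propensity score matching can be sketched in a few lines. The scores and earnings below are invented toy values, not data from this study; a real analysis would first estimate the scores (e.g. by logistic regression) and check covariate balance.

```python
# Illustrative 1:1 nearest-neighbour propensity score matching
# (hypothetical toy data; not the authors' estimation pipeline).

def match_att(treated, controls):
    """treated/controls: lists of (propensity_score, outcome) tuples.
    Returns the average treatment effect on the treated (ATT)."""
    diffs = []
    for score_t, y_t in treated:
        # nearest-neighbour match on the propensity score (with replacement)
        score_c, y_c = min(controls, key=lambda c: abs(c[0] - score_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

treated = [(0.8, 2400.0), (0.6, 2100.0)]
controls = [(0.79, 2000.0), (0.55, 1900.0), (0.2, 1500.0)]
print(match_att(treated, controls))  # mean matched outcome difference
```

Matching with replacement keeps every treated unit in the estimate; the sensitivity analyses the abstract describes probe how such implementation choices affect the result.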
The numerical prediction of radiative transport is a challenging task due to the complexity of the radiative transport equation. We apply the lattice Boltzmann method (LBM), originally developed for fluid flow problems, to solve radiative transport in volume. One model (meso RTLBM) is derived directly from a discretization of the radiative transport equation, yielding a precise but numerically costly scheme. The second model (macro RTLBM) solves the Helmholtz equation, which is a proper approximation for highly scattering volumes. Both numerical algorithms are validated against Monte Carlo data for a set of 35 optical parameters, which correspond to radiative transport ranging from the ballistic to the diffuse regime. Together with a set of four benchmark simulations, this comprehensive validation assesses the overall quality of, and detects asymptotic trends for, radiative transport LBM. Furthermore, an accuracy map is presented, which summarizes the error for all parameters. This graph allows one to determine the validity range of both radiative transport LBMs at a glance. Finally, comprehensive guidelines are formulated to facilitate the choice of the radiative transport LBM model.
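The meso- and macro-RTLBM schemes are specialized, but the stream-and-collide skeleton they share with all lattice Boltzmann methods can be shown on a toy 1-D diffusion problem. The two-velocity (D1Q2) lattice, grid size, step count, and relaxation time below are illustrative assumptions, not the paper's models.

```python
# Minimal D1Q2 lattice Boltzmann diffusion sketch: two populations
# (right- and left-moving) relax toward equilibrium, then stream.
N, steps, tau = 50, 100, 1.0
f_plus = [0.0] * N
f_minus = [0.0] * N
f_plus[N // 2] = f_minus[N // 2] = 0.5   # initial density pulse

for _ in range(steps):
    rho = [fp + fm for fp, fm in zip(f_plus, f_minus)]
    # collide: relax toward the isotropic equilibrium rho/2
    f_plus = [fp + (r / 2 - fp) / tau for fp, r in zip(f_plus, rho)]
    f_minus = [fm + (r / 2 - fm) / tau for fm, r in zip(f_minus, rho)]
    # stream: shift populations along their lattice velocities (periodic)
    f_plus = [f_plus[-1]] + f_plus[:-1]
    f_minus = f_minus[1:] + [f_minus[0]]

rho = [fp + fm for fp, fm in zip(f_plus, f_minus)]
print(sum(rho))  # total density is conserved (1.0 up to rounding)
```

Because collision conserves the local density and streaming only moves populations, the pulse spreads diffusively while total density stays fixed, which is the property any RTLBM discretization must retain.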
In this paper, the Lie group method in combination with the Magnus expansion is utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular (and some singular) SLPs of even orders (tested up to order eight), with a variety of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently. The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be effectively solved. The inverse SLP algorithm proposed by Barcilon (1974) is utilized in combination with the Magnus method so that a direct SLP of any (even) order and an inverse SLP of order two can be solved effectively.
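The shooting idea behind such Magnus-based SLP solvers can be sketched on the simplest test problem, −y″ = λy on [0, π] with Dirichlet conditions, whose eigenvalues are λ = n². The step count and Taylor truncation below are arbitrary choices, and the first-order Magnus propagator (midpoint matrix exponential) stands in for the paper's far more general scheme.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(A, terms=25):
    """Taylor-series exponential of a 2x2 matrix (fine for small steps)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in mat_mul(P, A)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

def shoot(lam, q=lambda x: 0.0, n_steps=200):
    """Propagate y' = z, z' = (q(x) - lam) y over [0, pi] with a
    first-order Magnus step (midpoint matrix exponential)."""
    h = math.pi / n_steps
    y, z = 0.0, 1.0                       # left boundary: y(0) = 0
    for i in range(n_steps):
        xm = (i + 0.5) * h                # midpoint evaluation of A(x)
        A = [[0.0, h], [h * (q(xm) - lam), 0.0]]
        E = expm2(A)
        y, z = E[0][0] * y + E[0][1] * z, E[1][0] * y + E[1][1] * z
    return y                              # miss distance at x = pi

# bisection on the boundary miss function brackets the lowest eigenvalue
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 6))  # lowest eigenvalue, analytically 1
```

The exponential propagator preserves the Lie group structure of the transfer matrix, which is what makes this family of methods stable for higher-order and near-singular problems.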
In this paper, we investigate the continuous version of modified iterative Runge–Kutta-type methods for nonlinear inverse ill-posed problems proposed in a previous work. The convergence analysis is proved under the tangential cone condition, a modified discrepancy principle, i.e., the stopping time T is a solution of ‖F(x^δ(T)) − y^δ‖ = τδ₊ for some δ₊ > δ, and an appropriate source condition. We obtain the optimal rate of convergence.
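The discrepancy principle can be illustrated on a deliberately simple linear problem: iterate until the residual falls to τδ, then stop. The operator, noise level, and τ below are toy assumptions, and a plain Landweber step stands in for the Runge–Kutta-type iteration analysed in the paper.

```python
# Hedged sketch of the discrepancy principle as a stopping rule for a
# Landweber iteration on a linear toy problem F(x) = a*x.
a, x_true = 0.5, 2.0
delta, tau = 0.05, 1.5
y_delta = a * x_true + delta              # data perturbed by noise of size delta

x, step = 0.0, 1.0
while abs(a * x - y_delta) > tau * delta:   # stop once residual <= tau*delta
    x = x - step * a * (a * x - y_delta)    # Landweber step: -F'* (F(x) - y)
print(abs(x - x_true) < 3 * tau * delta / a)  # regularized error stays bounded
```

Stopping early is what regularizes: iterating past the noise level would start fitting the perturbation in y^δ rather than the true solution.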
This study presents the first suite of apatite fission-track (AFT) ages from the SE part of the Western Sudetes. AFT cooling ages from the Orlica–Śnieżnik Dome and the Upper Nysa Kłodzka Graben range from Late Cretaceous (84 Ma) to Early Palaeocene–Middle Eocene (64–45 Ma). The first stage of basin evolution (~100–90 Ma) was marked by the formation of a local extensional depocentre and disruption of the Mesozoic planation surface. Subsequent far-field convergence of European microplates resulted in Coniacian–Santonian (~89–83 Ma) thrust faulting. AFT data from both metamorphic basement and Mesozoic sedimentary cover indicate homogenous Late Cretaceous burial of the entire Western Sudetes. Thermal history modeling suggests that the onset of cooling can be constrained to between 89 and 63 Ma, with a climax during the Palaeocene–Middle Eocene basin inversion phase.
Bank filtration (BF) is an established indirect water-treatment technology. The quality of water gained via BF depends on the subsurface capture zone, the mixing ratio (river water versus ambient groundwater), spatial and temporal distribution of subsurface travel times, and subsurface temperature patterns. Surface-water infiltration into the adjacent aquifer is determined by the local hydraulic gradient and riverbed permeability, which could be altered by natural clogging, scouring and artificial decolmation processes. The seasonal behaviour of a BF system in Germany, and its development during and about 6 months after decolmation (canal reconstruction), was observed with a long-term monitoring programme. To quantify the spatial and temporal variation in the BF system, a transient flow and heat transport model was implemented and two model scenarios, 'with' and 'without' canal reconstruction, were generated. Overall, the simulated water heads and temperatures matched those observed. Increased hydraulic connection between the canal and aquifer caused by the canal reconstruction led to an increase of ~23% in the already high share of BF water abstracted by the nearby waterworks. The subsurface travel-time distribution substantially shifted towards shorter travel times: flow paths with travel times <200 days increased by ~10% and those with <300 days by 15%. Generally, the periodic temperature signal, and the summer and winter temperature extrema, increased and penetrated deeper into the aquifer. The joint hydrological and thermal effects caused by the canal reconstruction might increase the potential of biodegradable compounds to further penetrate into the aquifer, also by potentially affecting the redox zonation in the aquifer.
When does life end?
(2019)
If you look at the question of end-of-life legislation, one basic question – or rather THE basic question – is particularly interesting: What is the "end of life"? What is death? Of course, one can approach this question theologically or philosophically, but also legally and especially medically. Since the 1960s, medical progress has made it possible to distinguish between different individual points of time within the natural dying process. However, this raises the question as to which of these points of time is relevant for criminal law. This question, which is usually considered very emotionally, will be examined in more detail in the paper.
Pride is linked to conviviality, to the practice of life-with-an-other, and to an awareness of the limitations of the life forms and life norms which guide and regulate the life of culturally, socially, and historically defined communities. Assuming this link, pride in living-together and conviviality appear as concepts creating a framework for future perspectives. But these concepts need a space in which they can unfold critically and confidently with a view to the future. For millennia, the literatures of the world have created this space of simulation and experimentation in which knowledge of how-to-live-with-an-other has been put down on paper through the open-ended tradition of writing. It is the space of the life forms and life norms of conviviality: it offers us prospective knowledge for the future by translating the imaginable into the thinkable, and the readable into the livable.
Assessments of psychotherapeutic competencies play a crucial role in research and training. However, research on the reliability and validity of such assessments is sparse. This study aimed to provide an overview of the current evidence and to provide an average interrater reliability (IRR) of psychotherapeutic competence ratings. A systematic review was conducted, and 20 studies reported in 32 publications were collected. These 20 studies were included in a narrative synthesis, and 20 coefficients were entered into the meta-analysis. Most primary studies referred to cognitive-behavioral therapies and the treatment of depression, used the Cognitive Therapy Scale, based ratings on videos, and trained the raters. Our meta-analysis revealed a pooled ICC of 0.82, but at the same time severe heterogeneity. The evidence map highlighted a variety of variables related to competence assessments. Further aspects influencing the reliability of competence ratings and regarding the considerable heterogeneity are discussed in detail throughout the manuscript.
Education
(2019)
Vives emphasizes needlework as an appropriate occupation for all women, even for ‘a princess or a queen’. A wide variety of schools run by individual tradesmen or women offered instruction in certain fields, such as writing and calculus, while schools erected or licensed by the authorities concentrated on religious education. A large group of orphanages founded during the sixteenth and early seventeenth centuries provided a sound education for boys and girls. Authorities, parents and educational thinkers of the time were much less concerned with girls’ education than with that of boys. Private tutoring at home concentrated on the same subjects but, when boys were instructed at home, some girls had a chance to participate in a more academically oriented education. In most educational settings, be it at day schools, boarding schools or in private homes, teachers, mothers and governesses were expected to raise good housewives, pious mothers and obedient spouses.
Concepts and theory
(2019)
There is no threat to Western democracies today comparable to the rise of right-wing populism. While it has played an increasing role at least since the 1990s, it was the social consequences of the global financial crisis of 2008 that gave it its breakthrough, leading to the UK's 'Brexit' and the election of Donald Trump as US President in 2016, and promoting what has been called left populism in countries that were hit hardest by both the banking crisis and the consequent neoliberal austerity politics in the EU, such as Greece and Portugal.
In 2017, the French Front National (FN) attracted many voters in the French Presidential elections; we have seen the radicalization of the Alternative für Deutschland (AfD) in Germany and the formation of centre-right government in Austria. Further, we have witnessed the consolidation of autocratic regimes, as in the EU member states Poland and Greece. All these manifestations of right-wing populism share a common feature: they attack or even compromise the core elements of democratic societies such as the separation of powers, protection of minorities, or the rule of law.
Despite a broad debate on the re-emergence of ‘populism’ in the transition from the twentieth to the twenty-first century that has brought forth many interesting findings, a lack of sociological reasoning cannot be denied, as sociology itself withdrew from theorising populism decades ago and largely left the field to political sciences and history. In a sense, Populism and the Crisis of Democracy considers itself a contribution to begin filling this lacuna. Written in a direct and clear style, this set of volumes will be an invaluable reference for students and scholars in the field of political theory, political sociology and European Studies.
This volume Concepts and Theory offers new and fresh perspectives on the debate on populism. Starting from complaints about the problems of conceptualising populism that in recent years have begun to revolve around themselves, the chapters offer a fundamental critique of the term and concept of populism, theoretically inspired typologies and descriptions of currently dominant concepts, and ways to elaborate on them. With regard to theory, the volume offers approaches that exceed the disciplinary horizon of political science that so far has dominated the debate. As sociological theory so far has been more or less absent in the debate on populism, only few efforts have been made to discuss populism more intensely within different theoretical contexts in order to explain its dynamics and processes. Thus, this volume offers critical views on the debate on populism from the perspectives of political economy and the analysis of critical historical events, the links of analyses of populism with social movement mobilisation, the significance of ‘superfluous populations’ in the rise of populism and an analysis of the exclusionary character of populism from the perspective of the theory of social closure.
Leben in der ehemaligen DDR
(2019)
This article draws on the experience from an ongoing research project employing respondent-driven sampling (RDS) to survey (illicit) 24-hour home care workers. We highlight issues around the preparatory work and the fielding of the survey to provide researchers with useful insights on how to implement RDS when surveying populations for which the method has not yet been used. We conclude the article with ethical considerations that occur when employing RDS.
This paper compares the usability of data stemming from probability sampling with data stemming from nonprobability sampling. It develops six research scenarios that differ in their research goals and assumptions about the data-generating process. It is shown that inference from data stemming from nonprobability sampling requires demanding assumptions on the homogeneity of the units being studied. Researchers who are not willing to make these assumptions are generally better off using data from probability sampling, regardless of the amount of nonresponse. However, even in cases where data from probability sampling is clearly advisable, data stemming from nonprobability sampling may contribute to the cumulative scientific endeavour of pinpointing a plausible interval for the parameter of interest.
Classical Wolf-Rayet (cWR) stars are at a crucial evolutionary stage for constraining the fates of massive stars. The feedback of these hot, hydrogen-depleted stars dominates their surroundings through tremendous injections of ionizing radiation and kinetic energy. The strength of a Wolf-Rayet (WR) wind decides the eventual mass of its remnant, likely a massive black hole. However, despite their major influence and importance for gravitational wave detection statistics, WR winds are particularly poorly understood. In this paper, we introduce the first set of hydrodynamically consistent stellar atmosphere models for cWR stars of both the carbon (C) and the nitrogen (N) sequence, i.e. WC and WN stars, as a function of stellar luminosity-to-mass ratio (or Eddington Γ) and metallicity. We demonstrate the inapplicability of the CAK wind theory for cWR stars and confirm earlier findings that their winds are launched at the (hot) iron (Fe) opacity peak. For log Z/Z☉ > −2, Fe is also the main accelerator throughout the wind. Contrasting previous claims of a sharp lower mass-loss limit for WR stars, we obtain a smooth transition to optically thin winds. Furthermore, we find a strong dependence of the mass-loss rates on Eddington Γ, both at solar and subsolar metallicity. Increases in WC carbon and oxygen abundances turn out to slightly reduce the predicted mass-loss rates. Calculations at subsolar metallicities indicate that below the metallicity of the Small Magellanic Cloud, WR mass-loss rates decrease much faster than previously assumed, potentially allowing for high black hole masses even in the local Universe.
The ability to reflect is considered an essential element of Education for Sustainable Development (ESD) and a key competence for learners and educators in ESD (UNECE Strategy for ESD, 2012). In contrast to its high importance, little is known about how reflective thinking can be identified, influenced or increased in the classroom. Therefore, the objective of this study is to address this need by developing an empirical multi-stage model designed to help educators diagnose different levels of reflective thinking and to identify factors that influence students’ reflective thinking about sustainability. Based on a 4–8-week project with grade 10 and 11 students studying sustainability, reflective thinking performance using weblogs as reflective journals was analysed. In addition, qualitative semi-structured interviews were conducted with the teachers to comprehend the learning environment and the personal value they assigned to ESD in their geography class. To determine the levels of reflective thinking achieved by the students, the study built on the work of Dewey (1933) and pre-existing multi-stage models of reflective thinking (Bain, Ballantyne, & Packer, 1999; Chen, Wei, Wu, & Uden, 2009). Using a qualitative, iterative data analysis, the study adapted the stage models to be applicable in ESD and found great differences in the students’ reflection levels. Furthermore, the study identified eight factors that influence students’ reflective thinking about sustainability. The outcomes of this study may be valuable for educators in high school and higher education, who seek to diagnose their students’ reflective thinking performance and facilitate reflection about sustainability.
We examine how and under what conditions informal institutional constraints, such as precedent and doctrine, are likely to affect collective choice within international organisations even in the absence of powerful bureaucratic agents. With a particular focus on the United Nations Security Council, we first develop a theoretical account of why such informal constraints might affect collective decisions even of powerful and strategically behaving actors. We show that precedents provide focal points that allow adopting collective decisions in coordination situations despite diverging preferences. Reliance on previous cases creates tacitly evolving doctrine that may develop incrementally. Council decision-making is also likely to be facilitated by an institutional logic of escalation driven by institutional constraints following from the typically staged response to crisis situations. We explore the usefulness of our theoretical argument with evidence from the Council doctrine on terrorism that has evolved since 1985. The key decisions studied include the 1992 sanctions resolution against Libya and the 2001 Council response to the 9/11 attacks. We conclude that, even within intergovernmentally structured international organisations, member states do not operate on a clean slate, but in a highly institutionalised environment that shapes their opportunities for action.
Professional development on fostering students’ academic language proficiency across the curriculum
(2019)
This meta-analysis aggregates effects from 10 studies evaluating professional development interventions aimed at qualifying in-service teachers to support their students in mastering academic language skills while teaching their respective subject areas. The analysis of a subset of studies revealed a small non-significant weighted training effect on teachers' cognition (g' = 0.21, SE = 0.14). An effect aggregation including all studies (with 650 teachers) revealed a medium to large weighted overall effect on teachers' classroom practices (g' = 0.71, SE = 0.16). Methodological variables moderated the effect magnitude. Nevertheless, the results suggest professional development is beneficial for improving teachers' practice.
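The inverse-variance weighting underlying pooled effects such as the weighted g' reported above can be sketched in a few lines. The effect sizes and standard errors below are hypothetical, not the ten studies analysed here, and a random-effects model would additionally estimate between-study heterogeneity.

```python
# Minimal fixed-effect (inverse-variance) pooling sketch with toy inputs.
def pool_fixed(effects, ses):
    """Pool study effect sizes, weighting each by 1/SE^2."""
    weights = [1.0 / se ** 2 for se in ses]
    g = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return g, se_pooled

g, se = pool_fixed([0.6, 0.8, 0.7], [0.2, 0.3, 0.25])
print(round(g, 3), round(se, 3))
```

Precise studies (small SE) dominate the pooled estimate, which is why moderator analyses of methodological variables, as in this meta-analysis, matter when effects are heterogeneous.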
This chapter aims to analyse whether and how democracy is actually threatened by big-data-based operations and what role international law can play to respond to this possible threat. It shows how big-data-based operations challenge democracy and how international law can help in defending it. The chapter focuses on both state and non-state actors may undermine democracy through big data operations; although democracy as such is a rather underdeveloped concept in international law, which is often more concerned with effectivity than legitimacy – international law protects against these challenges via a democracy-based approach rooted in international human rights law on the one hand, and the principle of non-intervention on the other hand. Thus, although democracy does not play a major role in international law, international law nevertheless is able to protect democracy against challenges from the inside as well as outside.
Duplicate detection algorithms produce clusters of database records, each cluster representing a single real-world entity. As most of these algorithms use pairwise comparisons, the resulting (transitive) clusters can be inconsistent: Not all records within a cluster are sufficiently similar to be classified as duplicate. Thus, one of many subsequent clustering algorithms can further improve the result. We explain in detail, compare, and evaluate many of these algorithms and introduce three new clustering algorithms in the specific context of duplicate detection. Two of our three new algorithms use the structure of the input graph to create consistent clusters. Our third algorithm, and many other clustering algorithms, focus on the edge weights, instead. For evaluation, in contrast to related work, we experiment on true real-world datasets, and in addition examine in great detail various pair-selection strategies used in practice. While no overall winner emerges, we are able to identify best approaches for different situations. In scenarios with larger clusters, our proposed algorithm, Extended Maximum Clique Clustering (EMCC), and Markov Clustering show the best results. EMCC especially outperforms Markov Clustering regarding the precision of the results and additionally has the advantage that it can also be used in scenarios where edge weights are not available.
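The inconsistency that motivates post-clustering can be reproduced with a minimal transitive-closure clustering (union-find) over pairwise match decisions. The records and matches below are invented toy data, not the evaluated algorithms: a–b and b–c pass the similarity test, a–c does not, yet all three land in one cluster.

```python
# Transitive clustering of pairwise duplicate decisions via union-find.
def clusters_from_pairs(records, matches):
    parent = {r: r for r in records}
    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path compression
            r = parent[r]
        return r
    for a, b in matches:
        parent[find(a)] = find(b)           # union the two clusters
    groups = {}
    for r in records:
        groups.setdefault(find(r), set()).add(r)
    return sorted(map(sorted, groups.values()))

recs = ["a", "b", "c", "d"]
pairs = [("a", "b"), ("b", "c")]            # a-c was NOT classified as duplicate
print(clusters_from_pairs(recs, pairs))     # [['a', 'b', 'c'], ['d']]
```

The cluster {a, b, c} contains the non-matching pair a–c; algorithms like EMCC or Markov Clustering refine exactly such clusters.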
Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well-learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
Selfish Network Creation focuses on modeling real world networks from a game-theoretic point of view. One of the classic models by Fabrikant et al. (2003) is the network creation game, where agents correspond to nodes in a network which buy incident edges for the price of alpha per edge to minimize their total distance to all other nodes. The model is well-studied but still has intriguing open problems. The most famous conjectures state that the price of anarchy is constant for all alpha and that for alpha >= n all equilibrium networks are trees. We introduce a novel technique for analyzing stable networks for high edge-price alpha and employ it to improve on the best known bound for the latter conjecture. In particular we show that for alpha > 4n - 13 all equilibrium networks must be trees, which implies a constant price of anarchy for this range of alpha. Moreover, we also improve the constant upper bound on the price of anarchy for equilibrium trees.
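The cost an agent minimizes in the network creation game — α per bought edge plus its summed graph distances to all other nodes — can be computed directly. The star network and α value below are illustrative, not part of the proofs in the paper.

```python
# Agent cost in the network creation game: alpha per bought edge plus
# the sum of BFS distances to every other node (toy star example).
from collections import deque

def distances(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def agent_cost(adj, bought, agent, alpha):
    d = distances(adj, agent)
    return alpha * len(bought[agent]) + sum(d.values())

# star on 4 nodes: center 0 buys all three edges
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
bought = {0: [1, 2, 3], 1: [], 2: [], 3: []}
print(agent_cost(adj, bought, 0, alpha=2.0))  # 2.0*3 + (1+1+1) = 9.0
print(agent_cost(adj, bought, 1, alpha=2.0))  # 0.0 + (1+2+2) = 5.0
```

A network is in equilibrium when no agent can lower this cost by changing its bought edge set; the tree conjecture concerns exactly which such equilibria exist for large α.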
Based upon the current debate on international practices with its focus on taken-for-granted everyday practices, we examine how Security Council practices may affect member state action and collective decisions on intrastate conflicts. We outline a concept that integrates the structuring effect of practices and their emergence from interaction among reflective actors. It promises to overcome the unresolved tension between understanding practices as a social regularity and as a fluid entity. We analyse the constitutive mechanisms of two Council practices that affect collective decisions on intrastate conflicts and elucidate how even reflective Council members become enmeshed with the constraining implications of evolving practices and their normative implications. (1) Previous Council decisions create precedent pressure and give rise to a virtually uncontested permissive Council practice that defines the purview for intervention into such conflicts. (2) A ratcheting practice forces opponents to choose between accepting steadily reinforced Council action, as occurred regarding Sudan/Darfur, and outright blockade, as in the case of Syria. We conclude that practices constitute a source of influence that is not captured by the traditional perspectives on Council activities as the consequence of geopolitical interests or of externally evolving international norms like the ‘responsibility to protect’ (R2P).
Little is known about how far-reaching decisions in UN Security Council sanctions committees are made. Developing a novel committee governance concept and using examples drawn from sanctions imposed on Iraq, Al-Qaida, Congo, Sudan and Iran, this book shows that Council members tend to follow the will of the powerful, whereas sanctions committee members often decide according to the rules. This is surprising since both Council and committees are staffed by the same member states.
Offering a fascinating account of Security Council micro-politics and decision-making processes on sanctions, this rigorous comparative and theory-driven analysis treats the Council and its sanctions committees as distinguishable entities that may differ in decision practice despite having the same members. Drawing extensively on primary documents, diplomatic cables, well-informed press coverage, reports by close observers and extensive interviews with committee members, Council diplomats and sanctions experts, it contrasts with the conventional wisdom on decision-making within these bodies, which suggests that the powerful permanent members would not accept rule-based decisions against their interests.
This book will be of interest to policy practitioners and scholars working in the broad field of international organizations and international relations theory as well as those specializing in sanctions, international law, the Security Council and counter-terrorism.
From the international perspective, the peace process in Liberia has generally been described as a successful model for international peacebuilding interventions. But how do Liberians perceive the peace process in their country? The aim of this paper is to complement an institutionalist approach looking at the security and justice mechanism in Liberia with some insights into local perceptions in order to answer the following question: how do Liberians perceive the peace process in their country and which institutions have been supportive for the establishment of sustaining peace? After briefly introducing the background of the Liberian conflict and the data collection, I present first results, analyzing the mechanism linking two peacebuilding institutions (peacekeeping and transitional justice) with the establishment of sustaining peace in Liberia.
Sustained glacier melt in the Himalayas has gradually spawned more than 5,000 glacier lakes that are dammed by potentially unstable moraines. When such dams break, glacier lake outburst floods (GLOFs) can cause catastrophic societal and geomorphic impacts. We present a robust probabilistic estimate of average GLOF return periods in the Himalayan region, drawing on 5.4 billion simulations. We find that the 100-y outburst flood has an average volume of 33.5 (+3.7/−3.7) × 10⁶ m³ (posterior mean and 95% highest density interval [HDI]) with a peak discharge of 15,600 (+2,000/−1,800) m³ s⁻¹. Our estimated GLOF hazard is tied to the rate of historic lake outbursts and the number of present lakes, which both are highest in the Eastern Himalayas. There, the estimated 100-y GLOF discharge (~14,500 m³ s⁻¹) is more than 3 times that of the adjacent Nyainqentanglha Mountains, and at least an order of magnitude higher than in the Hindu Kush, Karakoram, and Western Himalayas. The GLOF hazard may increase in these regions that currently have large glaciers, but few lakes, if future projected ice loss generates more unstable moraine-dammed lakes than we recognize today. Flood peaks from GLOFs mostly attenuate within Himalayan headwaters, but can rival monsoon-fed discharges in major rivers hundreds to thousands of kilometers downstream. Projections of future hazard from meteorological floods need to account for the extreme runoffs during lake outbursts, given the increasing trends in population, infrastructure, and hydropower projects in Himalayan headwaters.
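The logic of reading a 100-y event magnitude off simulated annual maxima can be sketched generically: simulate many years, sort, and take the 1 − 1/100 quantile. The lognormal distribution and its parameters below are toy assumptions, not the paper's posterior over Himalayan GLOF volumes.

```python
# Hedged Monte Carlo sketch of a 100-y return-period magnitude:
# the T-year event is the (1 - 1/T) quantile of simulated annual maxima.
import math
import random

random.seed(42)
years = 100_000
annual_max = sorted(math.exp(random.gauss(mu=1.0, sigma=1.2))
                    for _ in range(years))
q100 = annual_max[int((1 - 1 / 100) * years)]  # empirical 99th percentile
print(q100 > annual_max[years // 2])           # far exceeds the median year
```

Billions of simulations, as in the study, tighten the credible interval around such quantile estimates, since extreme quantiles are exactly where sampling error is largest.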
Previous research has identified students' personality traits, especially conscientiousness, as highly relevant predictors of academic success. Less is known about the role of Big Five personality traits in students when it comes to teachers' decisions about students' educational trajectories and whether personality traits differentially affect these decisions by teachers in different grade levels. This study examines to what extent students' Big Five personality traits affect teacher decisions on grade retention, looking at two cohorts of 12,146 ninth-grade and 6002 seventh-grade students from the German National Educational Panel Study. In both grade levels, multilevel logistic mediation models show that students' conscientiousness indirectly predicts grade retention through the assignment of grades by teachers. In the ninth-grade sample, students' conscientiousness was additionally a direct predictor of retention, distinct from teacher-assigned grades. We discuss potential underlying mechanisms and explore whether teachers base their decisions on different indicators when retaining seventh-grade students or ninth-grade students.
Both the ¹³C chemical shift and the calculated anisotropy effect (spatial magnetic properties) of the electron-deficient centre of stable, crystalline, and structurally characterized carbenes have been employed to unequivocally characterize potential resonance contributors to the present mesomerism (carbene, ylide, betaine, and zwitterion) and to determine quantitatively the electron deficiency of the corresponding carbene carbon atom. Prior to that, both structures and ¹³C chemical shifts were calculated and compared with the experimental δ(¹³C)/ppm values and geometry parameters (as a quality criterion for the obtained structures).
Modern web browsers are digital software platforms, as they allow third parties to extend functionality by providing extensions. Given the intense competition, differentiation through provided functionality is a key factor for browser platforms. As browsers progress, they constantly release new features. Browsers might thereby enter complementary markets if they add functionality formerly provided by third-party extensions, which is referred to as ‘platform coring’. Previous studies have overlooked the perspectives of the parties involved. To address this gap, we conduct interviews with third-party and core developers in the security and privacy domain from Firefox and Chrome. In essence, the study provides three contributions. First, insights into stakeholder-specific issues concerning coring. Second, measures to prevent coring. Third, strategic guidance for developers and owners. Third-party developers have experienced, and core developers acknowledge, coring on browser platforms. While developers with extrinsic motivations assess coring negatively, developers with intrinsic motivations perceive coring positively.
The “output-orientation” is omnipresent in teacher education. In order to evaluate teachers' and students' performances, a wide range of different quantitative questionnaires exist worldwide. One important goal of teaching evaluation is to increase the quality of teaching and learning. The author argues that standard evaluations, which are typically conducted at the end of the semester, are problematic for two reasons. The first is that some of the questions are too general and do not offer concrete ideas as to what kind of actions can be taken to improve the courses. The second is that the evaluation is mostly conducted when the course is already over. In response to this criticism, Apelojg invented the Felix-App, which offers the possibility to give feedback in real time by asking for the emotions and needs that occur during different learning situations. The idea is very simple: positive emotions and satisfied needs are helpful for the learning process. Negative emotions and unsatisfied needs have negative effects on the learning process. First descriptive results show that “managing emotions” during classes can have positive effects on both motivation and emotions.
The social stratification systems of major cities are transforming all around the globe. International research has discussed this trend with a focus on changing occupational classes. However, the precise effects on urban households, taking social welfare and different family arrangements into account, as well as the precise effects on people with a migration background, remain unclear. Using the example of Vienna, this article examines immigration as a key dimension of social stratification. Although household income structures in Austria have remained comparatively stable over the past two decades, the middle-income share in Vienna (the sole metropolis in Austria) has dramatically decreased. This predominantly affects people from migrant backgrounds. Using a comprehensive dataset (two waves, N = 16,700 participants, including N = 4,500 migrants), we systematically examine the role of (a) migration-specific and (b) education- and employment-related factors to explain the decline of middle-income migrants. The results of multinomial logistic regression and decomposition analyses suggest that transformation in the labour market is the main driving force. Changing migrant characteristics have counteracted this process. If today's migrants displayed characteristics (e.g., origin and educational levels) similar to those prevalent in the past decade, the ethnic stratification disparities would have been even stronger.
Labour market entry poses enormous challenges for recently arrived refugees, ranging from language barriers, devaluation of human capital and unfamiliarity with customs of the job search process to outright discrimination. How can refugees overcome these challenges and quickly enter gainful employment? In this paper, we draw on interviews with 26 male and female refugees from Syria, Afghanistan, Iraq and Iran, conducted in 2017 and 2018, who came to Austria in 2015 and 2014 and who have successfully entered employment. We depict refugees’ own perspectives on and strategies for fast job entry and integration. Personal agency and a proactive approach of seeking and seizing opportunities are key for overcoming initial barriers and entering upon positive integration pathways. At the same time, refugees’ personal agency is essential for establishing social ties to the host society, which also play a crucial role in early labour market integration. Finally, institutions of the Austrian labour market (the ‘apprenticeship’ system) interact with refugees’ agency in intricate ways, both setting up nearly insurmountable barriers and providing specific opportunities for refugees.
Although the low-wage employment sector has enlarged over the past 20 years in the context of pronounced flexibility in restructured labor markets, gender differences in low-wage employment have declined in Germany, Austria and Switzerland. In this article, the authors examine reasons for declining gender inequalities, and most notably concentrate on explanations for the closing gender gap in low-wage employment risks. In addition, they identify differences and similarities among the German-speaking countries. Based on regression techniques and decomposition analyses (1996-2016), the authors find significantly decreasing labor market risks for the female workforce. Detailed analysis reveals that (1) the concrete positioning in the labor market shows greater importance in explaining declining gender differences compared to personal characteristics. (2) The changed composition of the labor markets has prevented the low-wage sector from increasing even more in general and works in favor of the female workforce and their low-wage employment risks in particular.
Unexpected perturbations during locomotion can occur during daily life or sports performance. Adequate compensation for such perturbations is crucial in maintaining effective postural control. Studies utilising instrumented treadmills have previously validated perturbed walking protocols; however, responses to perturbed running protocols remain less investigated. Therefore, the purpose of this study was to investigate the feasibility of a new instrumented-treadmill perturbed running protocol. Fifteen participants (age = 28 +/- 3 years; height = 172 +/- 9 cm; weight = 69 +/- 10 kg; 60% female) completed an 8-minute running protocol at a baseline velocity of 2.5 m/s (9 km/h), whilst 15 one-sided belt perturbations were applied (pre-set perturbation characteristics: 150 ms delay (post-heel contact); 2.0 m/s amplitude; 100 ms duration). Perturbation characteristics and EMG responses were recorded. Bland-Altman analysis (BLA) was employed (bias +/- limits of agreement (LOA; bias +/- 1.96*SD)) and intra-individual variability of repeated perturbations was assessed via coefficients of variation (CV) (mean +/- SD). On average, 9.4 +/- 2.2 of 15 intended perturbations were successful. Perturbation delay was 143 +/- 10 ms, amplitude was 1.7 +/- 0.2 m/s and duration was 69 +/- 10 ms. BLA showed -7 +/- 13 ms for delay, -0.3 +/- 0.1 m/s for amplitude and -30 +/- 10 ms for duration. CV showed variability of 19 +/- 4.5% for delay, 58 +/- 12% for amplitude and 30 +/- 7% for duration. EMG RMS amplitudes of the legs and trunk ranged from 113 +/- 25% to 332 +/- 305% when compared to unperturbed gait. This study showed that the application of sudden perturbations during running can be achieved, though with increased variability across individuals. The perturbations with the above characteristics appear to have elicited a neuromuscular response during running.
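The Bland-Altman statistics and coefficients of variation reported above follow standard definitions. As a minimal sketch with hypothetical delay values (not the study's data):

```python
import statistics

def bland_altman(intended, achieved):
    """Bias and 95% limits of agreement (1.96 * SD of the paired differences)."""
    diffs = [a - i for i, a in zip(intended, achieved)]
    bias = statistics.mean(diffs)
    loa = 1.96 * statistics.stdev(diffs)
    return bias, loa

def coefficient_of_variation(values):
    """Intra-individual variability as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical perturbation delays in ms: intended vs. achieved
intended = [150] * 5
achieved = [143, 150, 135, 148, 139]
bias, loa = bland_altman(intended, achieved)
cv = coefficient_of_variation(achieved)
```

Note that `statistics.stdev` computes the sample standard deviation, which is the usual choice for small sets of repeated trials like these.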
Using dating apps has become popular for many young adults worldwide, promising the chance to meet new sexual partners. Because there is evidence that using dating apps may be associated with risky sexual behavior, this study compared users and non-users concerning their sexuality-related cognitions, namely their risky sexual scripts and sexual self-esteem, as well as their risky and sexually assertive behavior. It also explored the link between dating app use and acceptance of sexual coercion. A total of 491 young heterosexual adults (295 female) participated in an online survey advertised in social media and college libraries in Germany. Results indicated that users had more risky sexual scripts and reported more risky sexual behavior than non-users. Furthermore, male dating app users had lower sexual self-esteem and higher acceptance of sexual coercion than male non-users. In both gender groups, dating app use predicted casual sexual activity via a more risky casual sex script. Gender differences, potential underlying mechanisms, and directions for future research are discussed.
Conservation genetics can provide data needed by conservation practitioners for their decisions regarding the management of vulnerable or endangered species, such as the sun bear Helarctos malayanus. Throughout its range, the sun bear is threatened by loss and fragmentation of its habitat and the illegal trade of both live bears and bear parts. Sharply declining population numbers and population sizes, and a lack of natural dispersal between populations all threaten the genetic diversity of the remaining populations of this species. In this first population genetics study of sun bears using microsatellite markers, we analyzed 68 sun bear samples from Cambodia to investigate population structure and genetic diversity. We found evidence for two genetically distinct populations in the West and East of Cambodia. Ongoing or recent gene flow between these populations does not appear sufficient to alleviate loss of diversity in these populations, one of which (West Cambodia) is characterized by significant inbreeding. We were able to assign 85% of sun bears of unknown origin to one of the two populations with high confidence (assignment probability >= 85%), providing valuable information for future bear reintroduction programs. Further, our results suggest that developed land (mostly agricultural mosaics) acts as a barrier to gene flow for sun bears in Cambodia. We highlight that regional sun bear conservation action plans should consider promoting population connectivity and enforcing wildlife protection of this threatened species.
Speaking a late-learned second language (L2) is supposed to yield more variable and less consistent output than speaking one’s first language (L1), particularly with respect to reliably adhering to grammatical morphology. The current study investigates both internal processes involved in encoding morphologically complex words – by recording event-related brain potentials (ERPs) during participants’ silent productions – and the corresponding overt output. We specifically examined compounds with plural or singular modifiers in English. Thirty-one advanced L2 speakers of English (L1: German) were compared to a control group of 20 L1 English speakers from an earlier study. We found an enhanced (right-frontal) negativity during (silent) morphological encoding for compounds produced from regular plural forms relative to compounds formed from irregular plurals, replicating the ERP effect obtained for the L1 group. The L2 speakers’ overt productions, however, were significantly less consistent than those of the L1 speakers on the same task. We suggest that L2 speakers employ the same mechanisms for morphological encoding as L1 speakers, but with less reliance on grammatical constraints than L1 speakers.
User-generated content on social media platforms is a rich source of latent information about individual variables. Crawling and analyzing this content provides a new approach for enterprises to personalize services and put forward product recommendations. In the past few years, brands made a gradual appearance on social media platforms for advertisement, customer support and public relations purposes, and by now a social media presence has become a necessity across all industries. This online identity can be represented as a brand personality that reflects how a brand is perceived by its customers. We exploited recent research in text analysis and personality detection to build an automatic brand personality prediction model on top of Five-Factor Model and Linguistic Inquiry and Word Count (LIWC) features extracted from publicly available benchmarks. The proposed model reported significant accuracy in predicting specific personality traits from brands. To evaluate our prediction results on actual brands, we crawled the Facebook API for 100k posts from the most valuable brands' pages in the USA; we visualize exemplars of comparison results and present suggestions for future directions.
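As an illustration of the general idea (not the authors' actual model), a trait score can be derived from LIWC-style word-category frequencies combined with per-trait weights; every category, vocabulary, and weight below is hypothetical:

```python
# Hypothetical word-category vocabularies in the style of LIWC
LIWC_CATEGORIES = {
    "positive_emotion": {"love", "nice", "great", "happy"},
    "social": {"friend", "talk", "share", "together"},
    "achievement": {"win", "best", "success", "goal"},
}
# Hypothetical per-trait weights over categories (for demonstration only)
TRAIT_WEIGHTS = {
    "extraversion": {"positive_emotion": 0.4, "social": 0.6, "achievement": 0.1},
    "conscientiousness": {"positive_emotion": 0.1, "social": 0.0, "achievement": 0.8},
}

def category_frequencies(text):
    """Fraction of tokens falling into each word category."""
    words = text.lower().split()
    total = len(words) or 1
    return {cat: sum(1 for w in words if w in vocab) / total
            for cat, vocab in LIWC_CATEGORIES.items()}

def predict_traits(text):
    """Weighted sum of category frequencies per Big Five trait."""
    freqs = category_frequencies(text)
    return {trait: sum(w * freqs[cat] for cat, w in weights.items())
            for trait, weights in TRAIT_WEIGHTS.items()}

scores = predict_traits("great to share this win with every friend we love")
```

A trained model would learn such weights from labeled data rather than fixing them by hand, but the feature pipeline (text to category frequencies to trait scores) has this shape.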
This paper investigates the applicability of CMOS decoupling cells for mitigating the Single Event Transient (SET) effects in standard combinational gates. The concept is based on the insertion of two decoupling cells between the gate's output and the power/ground terminals. To verify the proposed hardening approach, extensive SPICE simulations have been performed with standard combinational cells designed in IHP's 130 nm bulk CMOS technology. Obtained simulation results have shown that the insertion of decoupling cells results in the increase of the gate's critical charge, thus reducing the gate's soft error rate (SER). Moreover, the decoupling cells facilitate the suppression of SET pulses propagating through the gate. It has been shown that the decoupling cells may be a competitive alternative to gate upsizing and gate duplication for hardening the gates with lower critical charge and multiple (3 or 4) inputs, as well as for filtering the short SET pulses induced by low-LET particles.
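The link between node capacitance and critical charge can be illustrated with the common first-order approximation Qcrit ≈ C_node · V_DD / 2; the capacitance values below are hypothetical, not the SPICE-derived figures from the paper:

```python
def critical_charge_fC(node_capacitance_fF, vdd_V):
    """First-order estimate Qcrit ~ C_node * VDD / 2, in femtocoulombs."""
    return node_capacitance_fF * vdd_V / 2

# Hypothetical 130 nm gate: 5 fF node capacitance at VDD = 1.2 V;
# assume a pair of decoupling cells adds 10 fF (illustrative values only)
q_base = critical_charge_fC(5.0, 1.2)
q_hardened = critical_charge_fC(5.0 + 10.0, 1.2)
```

Under this rough model, tripling the effective node capacitance triples the charge a particle strike must deposit to flip the node, which is the mechanism behind the reported SER reduction.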
Enhancement of human induced pluripotent stem cells adhesion through multilayer laminin coating
(2019)
Bioengineered cell substrates are a highly promising tool to govern the differentiation of stem cells in vitro and to modulate the cellular behavior in vivo. While this technology works fine for adult stem cells, the cultivation of human induced pluripotent stem cells (hiPSCs) is challenging as these cells typically show poor attachment on the bioengineered substrates, which among other effects causes substantial cell death. Thus, very limited types of surfaces have been demonstrated suitable for hiPSC cultures. The multilayer coating approach that renders the surface with diverse chemical compositions, architectures, and functions can be used to improve the adhesion of hiPSCs on the bioengineered substrates. We hypothesized that a multilayer formation based on the attraction of molecules with opposite charges could functionalize the polystyrene (PS) substrates to improve the adhesion of hiPSCs. Polymeric substrates were stepwise coated, first with dopamine to form a polydopamine (PDA) layer, second with polylysine and last with Laminin-521. The multilayer formation resulted in the variation of hydrophilicity and chemical functionality of the surfaces. Hydrophilicity was detected using captive bubble method and the amount of primary and secondary amines on the surface was quantified by fluorescent staining. The PDA layer effectively immobilized the upper layers and thereby improved the attachment of hiPSCs. Cell adhesion was enhanced on the surfaces coated with multilayers, as compared to those without PDA and/or polylysine. Moreover, hiPSCs spread well over this multilayer laminin substrate. These cells maintained their proliferation capacity and differentiation potential. The multilayer coating strategy is a promising attempt for engineering polymer-based substrates for the cultivation of hiPSCs and of interest for expanding the application scope of hiPSCs.
In cloud computing, users are able to use their own operating system (OS) image to run a virtual machine (VM) on a remote host. The virtual machine OS is started by the user using interfaces provided by a cloud provider in a public or private cloud. In a peer-to-peer cloud, the VM is started by the host admin. After the VM is running, the user can get remote access to the VM to install, configure, and run services. For security reasons, the user needs to verify the integrity of the running VM, because a malicious host admin could modify the image or even replace it with a similar image in order to extract sensitive data from the VM. We propose an approach to verify the integrity of a running VM on a remote host without using any specific hardware such as a Trusted Platform Module (TPM). Our approach is implemented on a Linux platform where the kernel files (vmlinuz and initrd) can be replaced with new files while the VM is running. kexec is used to reboot the VM with the new kernel files. The new kernel contains secret codes that are used to verify whether the VM was started using the new kernel files. The new kernel is then used to further measure the integrity of the running VM.
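One way the secret-code check could work is a challenge-response exchange with an agent running in the replaced kernel; this is an assumed sketch of that step, not the paper's exact protocol, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

# Hypothetical secret baked into the new vmlinuz/initrd, known only to
# the user and to code running in the new kernel.
KERNEL_SECRET = b"embedded-in-new-vmlinuz"

def vm_agent_response(nonce: bytes) -> bytes:
    """Would run inside the VM after kexec into the new kernel."""
    return hmac.new(KERNEL_SECRET, nonce, hashlib.sha256).digest()

def verify_vm(nonce: bytes, response: bytes) -> bool:
    """Runs on the user's side; constant-time comparison avoids timing leaks."""
    expected = hmac.new(KERNEL_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The user sends a fresh nonce so old responses cannot be replayed
nonce = secrets.token_bytes(16)
ok = verify_vm(nonce, vm_agent_response(nonce))
```

A host that booted the original kernel lacks the secret and cannot produce a valid response; the fresh nonce prevents a malicious admin from replaying a previously observed answer.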
Monitoring is a key prerequisite for self-adaptive software and many other forms of operating software. Monitoring relevant lower-level phenomena, like the occurrence of exceptions and diagnosis data, requires carefully examining which detailed information is really necessary and feasible to monitor. Adaptive monitoring permits observing a greater variety of details with less overhead, if most of the time the MAPE-K loop can operate using only a small subset of all those details. However, engineering such adaptive monitoring is a major engineering effort of its own that further complicates the development of self-adaptive software. The proposed approach overcomes the outlined problems by providing generic adaptive monitoring via runtime models. It reduces the effort to introduce and apply adaptive monitoring by avoiding additional development effort for controlling the monitoring adaptation. Although the generic approach is independent of the monitoring purpose, it still allows for substantial savings in monitoring resource consumption, as demonstrated by an example.
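The core idea (observing only a small subset of details until the MAPE-K loop requests more) can be sketched as follows; the class and probe names are illustrative and not taken from the paper's runtime-model framework:

```python
class AdaptiveMonitor:
    """Collects cheap coarse metrics by default; detailed probes are
    switched on only when the analyze step of a MAPE-K loop flags an
    anomaly, cutting steady-state monitoring overhead."""

    def __init__(self, coarse_probes, detailed_probes):
        self.coarse = coarse_probes
        self.detailed = detailed_probes
        self.detailed_enabled = False

    def observe(self, system_state):
        readings = {name: probe(system_state)
                    for name, probe in self.coarse.items()}
        if self.detailed_enabled:
            readings.update({name: probe(system_state)
                             for name, probe in self.detailed.items()})
        return readings

    def adapt(self, anomaly_detected):
        """Called by the analyze step to widen or narrow observation."""
        self.detailed_enabled = anomaly_detected

monitor = AdaptiveMonitor(
    coarse_probes={"error_count": lambda s: s["errors"]},
    detailed_probes={"last_exception": lambda s: s["trace"]},
)
state = {"errors": 3, "trace": "NullPointerException"}
normal = monitor.observe(state)    # coarse metrics only
monitor.adapt(anomaly_detected=True)
degraded = monitor.observe(state)  # coarse plus detailed diagnosis data
```

The savings come from the default path: as long as no anomaly is flagged, only the coarse probes ever execute.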
The emergence of cloud computing allows users to easily host their virtual machines with no up-front investment and a guarantee of availability anytime, anywhere. But when the Virtual Machine (VM) is hosted outside the user's premises, the user loses physical control of the VM, as it could be running on untrusted host machines in the cloud. A malicious host administrator could launch live memory dumping, Spectre, or Meltdown attacks in order to extract sensitive information from the VM's memory, e.g. passwords or cryptographic keys of applications running in the VM. In this paper, inspired by the moving target defense (MTD) scheme, we propose a novel approach to increase the security of an application's sensitive data in the VM by continuously moving the sensitive data among several memory allocations (blocks) in Random Access Memory (RAM). A movement function is added into the application source code so that it runs concurrently with the application's main function. Our approach can reduce the possibility of the VM's sensitive data in memory being leaked into a memory dump file by 25% and secures the sensitive data from Spectre and Meltdown attacks. Our approach's overhead depends on the number and the size of the sensitive data.
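A minimal sketch of the movement-function idea, with a hypothetical block count and no claim to match the paper's implementation (a real deployment would operate on raw memory pages from a concurrently running thread, not Python objects):

```python
import os
import threading

class MovingSecret:
    """Keep a secret in one of several RAM buffers and relocate it on
    each move, wiping the old block, so that a single memory snapshot
    is less likely to capture the live copy."""

    def __init__(self, secret: bytes, n_blocks: int = 4):
        self.blocks = [bytearray(len(secret)) for _ in range(n_blocks)]
        self.active = 0
        self.blocks[self.active][:] = secret
        self._lock = threading.Lock()  # a mover thread would call move()

    def move(self):
        with self._lock:
            n = len(self.blocks)
            # pick a random block different from the current one
            nxt = (self.active + os.urandom(1)[0] % (n - 1) + 1) % n
            self.blocks[nxt][:] = self.blocks[self.active]
            for i in range(len(self.blocks[self.active])):
                self.blocks[self.active][i] = 0  # wipe the old block
            self.active = nxt

    def read(self) -> bytes:
        with self._lock:
            return bytes(self.blocks[self.active])

secret = MovingSecret(b"api-key-123")
for _ in range(10):
    secret.move()
```

After any number of moves exactly one block holds the secret and all previously used blocks are zeroed, which is the property the dump-file leakage argument relies on.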
Leveraging spatio-temporal soccer data to define a graphical query language for game recordings
(2019)
For professional soccer clubs, performance and video analysis are an integral part of the preparation and post-processing of games. Coaches, scouts, and video analysts extract information about strengths and weaknesses of their team as well as opponents by manually analyzing video recordings of past games. Since video recordings are an unstructured data source, it is a complex and time-intensive task to find specific game situations and identify similar patterns. In this paper, we present a novel approach to detect patterns and situations (e.g., playmaking and ball passing of midfielders) based on trajectory data. The application uses the metaphor of a tactic board to offer a graphical query language. With this interactive tactic board, the user can model a game situation or mark a specific situation in the video recording for which all matching occurrences in various games are immediately displayed, and the user can directly jump to the corresponding game scene. Through the additional visualization of key performance indicators (e.g., the physical load of the players), the user can get a better overall assessment of situations. With the capabilities to find specific game situations and complex patterns in video recordings, the interactive tactic board serves as a useful tool to improve the video analysis process of professional sports teams.
Rapid advances in location-acquisition technologies have led to large amounts of trajectory data. This data is the foundation for a broad spectrum of services driven and improved by trajectory data mining. However, for hybrid transactional and analytical workloads, the storing and processing of rapidly accumulated trajectory data is a non-trivial task. In this paper, we present a detailed survey about state-of-the-art trajectory data management systems. To determine the relevant aspects and requirements for such systems, we developed a trajectory data mining framework, which summarizes the different steps in the trajectory data mining process. Based on the derived requirements, we analyze different concepts to store, compress, index, and process spatio-temporal data. There are various trajectory management systems, which are optimized for scalability, data footprint reduction, elasticity, or query performance. To get a comprehensive overview, we describe and compare different existing systems. Additionally, the observed similarities in the general structure of different systems are consolidated in a general blueprint of trajectory management systems.
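As one concrete example of the compression step such systems implement, the classic Douglas-Peucker line simplification drops trajectory points that deviate little from the chord between the segment's endpoints (a generic textbook algorithm, not code from any surveyed system):

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tolerance):
    """Recursively keep only points deviating more than tolerance."""
    if len(points) < 3:
        return points
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]  # everything in between is dropped
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right  # avoid duplicating the split point

# Toy track with a genuine turn at (3, 4) and near-collinear noise elsewhere
track = [(0, 0), (2, 0.1), (3, 4), (4, 0.1), (6, 0)]
simplified = douglas_peucker(track, tolerance=2.0)
```

The tolerance trades data footprint against fidelity: small deviations vanish while sharp turns, which carry the movement semantics, survive.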
Speaking the Unspeakable
(2019)
This article discusses the filmic representation of the infamous Wannsee Conference, when fifteen senior German officials met at a villa on the shore of a Berlin lake to discuss and co-ordinate the implementation of the so-called final solution to the Jewish question. The understanding reached during the course of the ninety-minute meeting cleared the way for the Europe-wide killing of six million Jews. The article sets out to answer the principal challenge facing anyone attempting to recreate the Wannsee Conference on film: what was the atmosphere of this conference and the attitude of the participants? Moreover, it discusses various ethical aspects related to the portrayal of evil, not in actions but in words, using the medium of film. In doing so, it focuses on the BBC/HBO television film Conspiracy (2001), directed by Frank Pierson, probing its historical accuracy and discussing its artistic credibility.
A digital filter is introduced which treats the problem of predictability versus time averaging in a continuous, seamless manner. This seamless filter (SF) is characterized by a unique smoothing rule that determines the strength of smoothing in dependence on lead time. The rule needs to be specified beforehand, either by expert knowledge or by user demand. As a result, skill curves are obtained that allow a predictability assessment across a whole range of time-scales, from daily to seasonal, in a uniform manner. The SF is applied to downscaled SEAS5 ensemble forecasts for two focus regions in or near the tropical belt, the river basins of the Karun in Iran and the Sao Francisco in Brazil. Both are characterized by strong seasonality and semi-aridity, so that predictability across various time-scales is in high demand. Among other things, it is found that from the start of the water year (autumn), areal precipitation is predictable with good skill for the Karun basin two and a half months ahead; for the Sao Francisco it is only one month, and longer-term prediction skill is just above the critical level.
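The idea of smoothing whose strength depends on lead time can be sketched as a running mean whose window widens with the lead; the linear window rule below is an assumption for illustration, whereas the SF's actual smoothing rule is specified beforehand by expert knowledge or user demand:

```python
def smoothing_window(lead_day, max_window=90):
    """Hypothetical rule: window grows linearly with lead time,
    capped near a seasonal scale."""
    return min(1 + lead_day // 10, max_window)

def seamless_smooth(series):
    """Average each forecast day over a trailing window whose length
    depends on that day's lead time: daily values at short leads,
    near-seasonal means at long leads."""
    out = []
    for t in range(len(series)):
        w = smoothing_window(t)
        lo = max(0, t - w + 1)
        out.append(sum(series[lo : t + 1]) / (t + 1 - lo))
    return out

forecast = [float(v % 7) for v in range(120)]  # toy daily precipitation series
smoothed = seamless_smooth(forecast)
```

Because the window shrinks to one day at lead zero and widens smoothly thereafter, a single skill curve computed on the smoothed series spans the daily-to-seasonal range without switching averaging schemes.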
Excellent conversion efficiencies of over 20% and facile cell production have placed hybrid perovskites at the forefront of novel solar cell materials, with CH3NH3PbI3 being an archetypal compound. The question why CH3NH3PbI3 has such extraordinary characteristics, particularly a very efficient power conversion from absorbed light to electrical power, is hotly debated, with ferroelectricity being a promising candidate. This does, however, require the crystal structure to be non-centrosymmetric and we herein present crystallographic evidence as to how the symmetry breaking occurs on a crystallographic and, therefore, long-range level. Although the molecular cation CH3NH3+ is intrinsically polar, it is heavily disordered and this cannot be the sole reason for the ferroelectricity. We show that it, nonetheless, plays an important role, as it distorts the neighboring iodide positions from their centrosymmetric positions.
Background: The distribution of pronouns varies cross-linguistically. This distribution has led to conflicting results in studies that investigated pronoun resolution in agrammatic individuals. In the investigation of pronominal resolution, the linguistic phenomenon of "resumption" is understudied in agrammatism. The construction of pronominal resolution in Akan presents the opportunity to thoroughly examine resumption. Aims: First, the present study examines the production of (pronominal) resumption in Akan focus constructions (who-questions and focused declaratives). Second, we explore the effect of grammatical tone on the processing of pronominal resumption, since Akan is a tonal language. Methods & Procedures: First, we tested the ability to distinguish linguistic and non-linguistic tone in Akan agrammatic speakers. Then, we administered an elicitation task to five Akan agrammatic individuals, controlling for the structural variations in the realization of resumption: focused who-questions and declaratives with (i) only a resumptive pronoun, (ii) only a clause determiner, (iii) a resumptive pronoun and a clause determiner co-occurring, and (iv) neither a resumptive pronoun nor a clause determiner. Outcomes & Results: Tone discrimination, both for pitch and for lexical tone, was unimpaired. The production task demonstrated that the production of resumptive pronouns and clause determiners was intact. However, the production of declarative sentences in derived word order was impaired; wh-object questions were relatively well-preserved. Conclusions: We argue that the problems with sentence production are highly selective: linguistic tones and resumption are intact but word order is impaired in non-canonical declarative sentences.
Entrepreneurial persistence is demonstrated by an entrepreneur’s continued positive maintenance of entrepreneurial motivation and constantly renewed active engagement in a new business venture despite counterforces or enticing alternatives. It thus is a crucial factor for entrepreneurs when pursuing and exploiting their business opportunities and in realizing potential economic gains and benefits. Using rich data on a representative sample of German business founders, we investigated the determinants of entrepreneurial persistence. Next to observed survival, we also constructed a hybrid persistence measure capturing the motivational dimension of persistence. We analyzed the influence of individual-level (human capital and personality) and business-related characteristics on both measures as well as their relative importance. We found that the two indicators emphasize different aspects of persistence. For the survival indicator, the predictive power was concentrated in business characteristics and human capital, while for hybrid persistence the dominant factors were business characteristics and personality. Finally, we showed that results were heterogeneous across subgroups. In particular, formerly unemployed founders did not differ in survival chances, but they were more likely to lack a high psychological commitment to their business ventures.
The geochemical composition of oceanic basalts provides us with a window into the distribution of geochemical elements within the Earth’s mantle in space and time. In conjunction with thorough knowledge of how the different elements behave, e.g. during melt formation and evolution, or of their partition behaviour between e.g. minerals and melts, this information has been transformed into various models of how oceanic crust is formed along plume-influenced or normal mid-ocean ridge segments, how oceanic crust evolves in response to seawater, of subduction recycling of oceanic crust, and so forth. The work presented in this habilitation was aimed at refining existing models and putting further constraints on some of the major open questions in this field of research, while at the same time trying to increase our knowledge of the behaviour of noble gases as a tracer for melt formation and evolution processes. In the course of this work the author and her co-workers were able to answer one of the major questions concerning the formation of oceanic crust along plume-influenced ridges: in which physical state does the plume material enter the ridge? Based on submarine volcanic glass He, Ne and Ar data, the author and her co-workers have shown that the interaction of mantle plumes with mid-ocean ridges occurs in the physical form of melts. In addition, the author and her co-workers have also put further constraints on one of the major questions concerning the formation of oceanic crust along normal mid-ocean ridges, namely how the mid-ocean ridge system is effectively cooled to form the lower oceanic crust. Based on Ne and Ar data in combination with Cl/K ratios of basaltic glass from the Mid-Atlantic ridge and estimates of crystallisation pressures, they have shown that seawater penetration reaches lower crustal levels close to the Moho, indicating that hydrothermal circulation might be an effective cooling mechanism even for the deep parts of the oceanic crust.
Considering subduction recycling, the heterogeneity of the Earth’s mantle and mantle dynamic processes, the key question is on which temporal and spatial scales the Earth’s mantle is geochemically heterogeneous. In the course of this work the author along with her co-workers has shown, based on Cl/K ratios in conjunction with the Sr, Nd, and Pb isotopes of the OIBs representing the type localities for the different mantle endmembers, that the quantity of Cl recycled into the mantle via subduction is not uniform and that neither the HIMU nor the EM1 and EM2 mantle components can be considered as distinct mantle endmembers. In addition, we have shown, based on He, Ne and Ar isotope and trace-element data from the Foundation hotspot, that the near-ridge seamounts of the Foundation seamount chain erupt lavas with a trace-element signature clearly characteristic of oceanic gabbro, which indicates the existence of recycled, virtually unchanged lower oceanic crust in the plume source. This is a clear sign of the inefficiency of the stirring mechanism existing at mantle depth. Similar features are seen in other near-axis hotspot magmas around the world. Based on He, Sr, Nd, Pb and O isotopes and trace elements in primitive mafic dykes from the Etendeka flood basalts, NW Namibia, the author along with her co-workers has shown that deep, less degassed mantle material carried up by a mantle plume contributed significantly to the flood basalt magmatism. The Etendeka flood basalts are part of the South Atlantic LIP, which is associated with the breakup of Gondwana, the formation of the Paraná-Etendeka flood basalts and the Walvis Ridge - Tristan da Cunha hotspot track. This reinforces the lately often-challenged concept of mantle plumes and the role of mantle plumes in the formation of large igneous provinces.
Studying the behaviour of noble gases during melt formation and evolution, the author and her co-workers have shown that He can be considerably more susceptible to changes during melt formation and evolution, resulting not only in a complete decoupling of He isotopes from e.g. Ne or Pb isotopes but also in a complete loss of the primary mantle isotope signal. They have also shown that this decoupling occurs mainly during melt formation, requiring He to be more compatible during mantle melting than Ne. In addition, the author and her co-workers were able to show that the incorporation of atmospheric noble gases into igneous rocks is in general a two-step process: (1) magma contamination by assimilation of altered oceanic crust results in the entrainment of air-equilibrated seawater noble gases; (2) atmospheric noble gases are adsorbed onto grain surfaces during sample preparation. Considering the ubiquitous presence of the contamination signal, this implies that magma contamination by assimilation of a seawater-sourced component is an integral part of mid-ocean ridge basalt evolution.
While the underlying mechanisms of Parkinson’s disease (PD) are still insufficiently studied, a complex interaction between genetic and environmental factors is emphasized. Nevertheless, the role of the essential trace element zinc (Zn) in this regard remains controversial. In this study we altered Zn balance within PD models of the versatile model organism Caenorhabditis elegans (C. elegans) in order to examine whether a genetic predisposition in selected genes with relevance for PD affects Zn homeostasis. Protein-bound and labile Zn species act in various areas, such as enzymatic catalysis, protein stabilization pathways and cell signaling. Therefore, total Zn and labile Zn were quantitatively determined in living nematodes as individual biomarkers of Zn uptake and bioavailability with inductively coupled plasma tandem mass spectrometry (ICP-MS/MS) or a multi-well method using the fluorescent probe ZinPyr-1. Young and middle-aged deletion mutants of catp-6 and pdr-1, which are orthologues of mammalian ATP13A2 (PARK9) and parkin (PARK2), showed altered Zn homeostasis following Zn exposure compared to wildtype worms. Furthermore, age-specific differences in Zn uptake were observed in wildtype worms for total as well as labile Zn species. These data emphasize the importance of differentiation between Zn species as meaningful biomarkers of Zn uptake as well as the need for further studies investigating the role of dysregulated Zn homeostasis in the etiology of PD.
Determinations of the ultraviolet (UV) luminosity function of active galactic nuclei (AGN) at high redshifts are important for constraining the AGN contribution to reionization and understanding the growth of supermassive black holes. Recent inferences of the luminosity function suffer from inconsistencies arising from inhomogeneous selection and analysis of data. We address this problem by constructing a sample of more than 80 000 colour-selected AGN from redshift z = 0 to 7.5 using multiple data sets homogenized to identical cosmologies, intrinsic AGN spectra, and magnitude systems. Using this sample, we derive the AGN UV luminosity function from redshift z = 0 to 7.5. The luminosity function has a double power-law form at all redshifts. The break magnitude M* shows a steep brightening from M* ~ -24 at z = 0.7 to M* ~ -29 at z = 6. The faint-end slope β significantly steepens from -1.9 at z < 2.2 to -2.4 at z ≃ 6. In spite of this steepening, the contribution of AGN to the hydrogen photoionization rate at z ~ 6 is subdominant (< 3 per cent), although it can be non-negligible (~ 10 per cent) if these luminosity functions hold down to M_1450 = -18. Under reasonable assumptions, AGN can reionize He II by redshift z = 2.9. At low redshifts (z < 0.5), AGN can produce about half of the hydrogen photoionization rate inferred from the statistics of H I absorption lines in the intergalactic medium. Our analysis also reveals important systematic errors in the data, which need to be addressed and incorporated in the AGN selection function in the future in order to improve our results. We make various fitting functions, codes, and data publicly available.
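The double power-law form referred to in this abstract is conventionally parameterized as follows (a standard form, shown here for context; the normalization φ*, break magnitude M*, and bright- and faint-end slopes α and β are the fitted quantities):

```latex
\phi(M_{1450}) \;=\; \frac{\phi_*}{10^{\,0.4(\alpha+1)(M_{1450}-M_*)} \;+\; 10^{\,0.4(\beta+1)(M_{1450}-M_*)}}
```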
Low-dimensional dynamics for higher-order harmonic, globally coupled phase-oscillator ensembles
(2019)
The Kuramoto model, despite its popularity as a mean-field theory for many synchronization phenomena in oscillatory systems, is limited to first-order harmonic coupling of the phases. For higher-order coupling, a low-dimensional theory exists only in the thermodynamic limit. In this paper, we extend the formulation used by Watanabe and Strogatz to obtain a low-dimensional description of a system of arbitrary size of identical oscillators coupled all-to-all via their higher-order modes. To demonstrate an application of the formulation, we use a second-harmonic globally coupled model, with a mean field equal to the square of the Kuramoto mean field. This model is known from previous numerical studies to exhibit asymmetrical clustering. We attempt to explain the phenomenon of asymmetrical clustering using the analytical theory developed here, and discuss certain phenomena not observed at the level of first-order harmonic coupling.
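Schematically, the class of models treated here can be written in standard Kuramoto notation as follows (a sketch, not the paper's exact equations; for the second-harmonic case n = 2, the mean field H is proportional to the square of the Kuramoto order parameter Z):

```latex
\dot{\varphi}_k \;=\; \omega \;+\; \operatorname{Im}\!\left(H(t)\, e^{-\mathrm{i} n \varphi_k}\right),
\qquad
Z \;=\; \frac{1}{N}\sum_{j=1}^{N} e^{\mathrm{i}\varphi_j},
\qquad
H \;=\; \varepsilon\, Z^{\,n} .
```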
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire sun in Hα, Ca ii K, and He i 10 830 Å. We present a new calibration method that includes limb-darkening correction, removal of nonuniform filter transmission, and determination of He i Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display nonuniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope's alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He i triplet facilitates the determination of chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to cotemporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca ii K data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He i triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
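The Zernike-based background fit described above can be sketched in a few lines of numpy (a minimal illustration with a four-term basis; the basis ordering, normalization, and function names here are illustrative assumptions, not ChroTel's actual calibration pipeline, which uses more terms):

```python
import numpy as np

def zernike_basis(rho, theta):
    """First four Zernike polynomials on the unit disc
    (piston, x-tilt, y-tilt, defocus); ordering is illustrative."""
    return np.stack([
        np.ones_like(rho),                   # Z_0: piston
        2.0 * rho * np.cos(theta),           # Z_1: tilt in x
        2.0 * rho * np.sin(theta),           # Z_2: tilt in y
        np.sqrt(3.0) * (2.0 * rho**2 - 1.0)  # Z_3: defocus
    ])

def fit_background(image, mask, rho, theta):
    """Least-squares fit of the Zernike terms to on-disc pixels;
    returns the coefficients and the fitted large-scale background."""
    basis = zernike_basis(rho, theta)
    A = basis[:, mask].T                     # (n_pixels, n_terms)
    coeffs, *_ = np.linalg.lstsq(A, image[mask], rcond=None)
    background = np.zeros_like(image)
    background[mask] = A @ coeffs
    return coeffs, background
```

Dividing a contrast-enhanced filtergram by such a fitted background would remove the smooth transmission artifacts while leaving small-scale chromospheric structure intact.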
Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than "correct" object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, where a speaker needs to successfully refer to a novel object in an image. Inspired by models of "rational speech acts", we extend a neural generator to become a pragmatic speaker reasoning about uncertain object categories. As a result of this reasoning, the generator produces fewer nouns and names of distractor categories as compared to a literal speaker. We show that this conversational strategy for dealing with novel objects often improves communicative success, in terms of resolution accuracy of an automatic listener.
Short-period double degenerate white dwarf (WD) binaries with periods of less than ~1 day are considered to be one of the likely progenitors of type Ia supernovae. These binaries have undergone a period of common envelope evolution. If the core ignites helium before the envelope is ejected, then a hot subdwarf remains prior to contracting into a WD. Here we present a comparison of two very rare systems that each contain two hot subdwarfs in a short-period orbit. We provide a quantitative spectroscopic analysis of the systems using synthetic spectra from state-of-the-art non-LTE models to constrain the atmospheric parameters of the stars. We also use these models to determine the radial velocities, and thus calculate dynamical masses for the stars in each system.
It has been known for decades that the winds of massive stars are inhomogeneous (i.e. clumped). To properly model observed spectra of massive star winds, it is necessary to incorporate the 3-D nature of clumping into radiative transfer calculations. In this paper we present our full 3-D Monte Carlo radiative transfer code for inhomogeneous expanding stellar winds. We use a set of parameters to describe the dense as well as the rarefied wind components. At the same time, we account for non-monotonic velocity fields. We show how 3-D density and velocity wind inhomogeneities strongly affect resonance line formation. We also show how wind clumping can resolve the discrepancy between P V and Hα mass-loss rate diagnostics.
The ability to work in teams is an important skill in today's work environments. In MOOCs, however, team work, team tasks, and graded team-based assignments play only a marginal role. To close this gap, we have been exploring ways to integrate graded team-based assignments in MOOCs. Some goals of our work are to determine simple criteria to match teams in a volatile environment and to enable a frictionless online collaboration for the participants within our MOOC platform. The high dropout rates in MOOCs pose particular challenges for team work in this context. By now, we have conducted 15 MOOCs containing graded team-based assignments in a variety of topics. The paper at hand presents a study that aims to establish a solid understanding of the participants in the team tasks. Furthermore, we attempt to determine which team compositions are particularly successful. Finally, we examine how several modifications to our platform's collaborative toolset have affected the dropout rates and performance of the teams.
While the IEEE 802.15.4 radio standard has many features that meet the requirements of Internet of things applications, IEEE 802.15.4 leaves the whole issue of key management unstandardized. To address this gap, Krentz et al. proposed the Adaptive Key Establishment Scheme (AKES), which establishes session keys for use in IEEE 802.15.4 security. Yet, AKES does not cover all aspects of key management. In particular, AKES comprises no means for key revocation and rekeying. Moreover, existing protocols for key revocation and rekeying seem limited in various ways. In this paper, we hence propose a key revocation and rekeying protocol, which is designed to overcome various limitations of current protocols for key revocation and rekeying. For example, our protocol seems unique in that it routes around IEEE 802.15.4 nodes whose keys are being revoked. We successfully implemented and evaluated our protocol using the Contiki-NG operating system and aiocoap.
Background: The polymorphism in the FTO gene (rs9939609) is known to be associated with higher BMI and body fat mass. However, environmental factors can modify this effect. The purpose of the present study was to investigate the association between sport specialization and the rs9939609 SNP in the FTO gene in a cohort of professional and amateur young athletes. Methods: A total of 250 young individuals aged 8-18 years living in Moscow or the Moscow district participated in the study. Individuals were divided into three groups according to their physical activity level: control group (n = 49), amateurs (n = 67), and professionals (n = 137). Amateur and professional athletes were further subdivided according to their sport specialization. Quantile regression was used as the regression model, where the dependent (outcome) variable was BMI, along with percentage of body fat mass, and the independent variables (predictors) were the rs9939609 SNP in the FTO gene, physical activity (active versus inactive), sport specialization (aerobic, intermittent sports, and martial arts), nationality, level of sport experience (in years), gender, and percentage of fat-free mass. Results: The regression analysis revealed that physical activity and sport specialization had a greater impact than the FTO allele in the group of physically active individuals. Physical activity, in particular aerobic activity, was negatively associated with body fat mass and BMI. The rs9939609 SNP in the FTO gene is associated with physical activity and aerobic activity. The magnitude of the association becomes significantly larger at the upper quantiles of the body fat mass distribution. Conclusion: Physical activity and sport specialization explained more of the variance in body composition of physically active young individuals than the FTO polymorphism. An effect of the interaction of physical activity, in particular aerobic activity, with the FTO polymorphism on the body composition of young athletes was found.
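The quantile regression used in this study rests on minimizing the pinball (check) loss rather than the squared error, which is what makes effects at the upper quantiles of the body fat distribution estimable. A minimal numpy illustration (synthetic data and an intercept-only "model", purely for exposition, not the study's actual analysis) showing that minimizing this loss recovers the empirical quantile:

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Mean pinball (check) loss of candidate quantile q_hat at
    level tau; quantile regression minimizes this loss."""
    r = y - q_hat
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

def fit_quantile(y, tau, grid):
    """Brute-force intercept-only quantile fit: pick the grid value
    minimizing the pinball loss."""
    losses = [pinball_loss(y, q, tau) for q in grid]
    return grid[int(np.argmin(losses))]
```

In the study's setting, the candidate quantile would be a linear function of the predictors (genotype, activity, specialization, etc.) instead of a single constant, but the loss being minimized is the same.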
A central claim by Hoerl & McCormack is that the temporal reasoning system is uniquely human. But why exactly? This commentary evaluates two possible options to justify the thesis that temporal reasoning is uniquely human, one based on considerations regarding agency and the other based on language. The commentary raises problems for both of these options.
Lanthanide-doped upconverting nanoparticles (UCNP) are being extensively studied for bioapplications due to their unique photoluminescence properties and low toxicity. Interest in RET applications involving UCNP is also increasing, but due to factors such as large sizes, ion emission distributions within the particles, and complicated energy transfer processes within the UCNP, there are still many questions to be answered. In this study, four types of core and core-shell NaYF4-based UCNP co-doped with Yb3+ and Tm3+ as sensitizer and activator, respectively, were investigated as donors for the Methyl 5-(8-decanoylbenzo[1,2-d:4,5-d ']bis([1,3]dioxole)-4-yl)-5-oxopentanoate (DBD-6) dye. The possibility of resonance energy transfer (RET) between UCNP and the DBD-6 attached to their surface was demonstrated based on the comparison of luminescence intensities, band ratios, and decay kinetics. The architecture of UCNP influenced both the luminescence properties and the energy transfer to the dye: UCNP with an inert shell were the brightest, but their RET efficiency was the lowest (17%). Nanoparticles with Tm3+ only in the shell have revealed the highest RET efficiencies (up to 51%) despite the compromised luminescence due to surface quenching.
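When, as here, RET efficiency is inferred from decay kinetics, the standard lifetime-based relation is (the usual FRET-style estimator, stated for context; the study's exact analysis of UCNP luminescence and band ratios may differ):

```latex
E \;=\; 1 - \frac{\tau_{\mathrm{DA}}}{\tau_{\mathrm{D}}} ,
```

where τ_DA and τ_D are the donor luminescence lifetimes with and without the acceptor dye attached.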
A new micro/mesoporous hybrid clay nanocomposite prepared from kaolinite clay, Carica papaya seeds, and ZnCl2 via calcination in an inert atmosphere is presented. Regardless of the synthesis temperature, the specific surface area of the nanocomposite material is between approximately 150 and 300 m²/g. The material contains both micro- and mesopores in roughly equal amounts. X-ray diffraction, infrared spectroscopy, and solid-state nuclear magnetic resonance spectroscopy suggest the formation of several new bonds in the materials upon reaction of the precursors, thus confirming the formation of a new hybrid material. Thermogravimetric analysis/differential thermal analysis and elemental analysis confirm the presence of carbonaceous matter. The new composite is stable up to 900 °C and is an efficient adsorbent for the removal of a water micropollutant, 4-nitrophenol, and a pathogen, E. coli, from an aqueous medium, suggesting applications in water remediation are feasible.
Alluvial and transport-limited bedrock rivers constitute the majority of fluvial systems on Earth. Their long profiles hold clues to their present state and past evolution. We currently possess first-principles-based governing equations for flow, sediment transport, and channel morphodynamics in these systems, which we lack for detachment-limited bedrock rivers. Here we formally couple these equations for transport-limited gravel-bed river long-profile evolution. The result is a new predictive relationship whose functional form and parameters are grounded in theory and defined through experimental data. From this, we produce a power-law analytical solution and a finite-difference numerical solution to long-profile evolution. Steady-state channel concavity and steepness are diagnostic of external drivers: concavity decreases with increasing uplift rate, and steepness increases with an increasing sediment-to-water supply ratio. Constraining free parameters explains common observations of river form: to match observed channel concavities, gravel-sized sediments must weather and fine, typically rapidly, and valleys should typically widen gradually. To match the empirical square-root width-discharge scaling in equilibrium-width gravel-bed rivers, downstream fining must occur. The ability to assign a cause to such observations is the direct result of a deductive approach to developing equations for landscape evolution.
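The kind of finite-difference long-profile evolution described above can be sketched as follows (an illustrative explicit scheme with a generic gravel transport closure Qs ∝ Q S^(7/6) and an Exner mass balance; the coefficients, fixed boundary elevations, and the absence of width and downstream-fining dynamics are simplifying assumptions, not the paper's calibrated model):

```python
import numpy as np

def evolve_long_profile(z, dx, Q=10.0, k_Qs=0.041, porosity=0.35,
                        dt=1.0e5, nsteps=2000):
    """Explicit finite-difference evolution of a transport-limited
    long profile: Qs = k_Qs * Q * S**(7/6), dz/dt from the Exner
    mass balance. Inlet and base-level elevations are held fixed."""
    z = z.astype(float).copy()
    for _ in range(nsteps):
        S = -np.diff(z) / dx                    # downstream slope at cell faces
        S = np.clip(S, 0.0, None)               # no upstream transport
        Qs = k_Qs * Q * S ** (7.0 / 6.0)        # sediment discharge per unit width
        dzdt = -np.diff(Qs) / (dx * (1.0 - porosity))  # Exner: erosion/deposition
        z[1:-1] += dt * dzdt                    # update interior nodes only
    return z
```

Steady state corresponds to spatially uniform Qs, i.e. a slope set by the sediment-to-water supply ratio, consistent with the steepness result quoted in the abstract.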
The size structure of autotroph communities - the relative abundance of small vs. large individuals - shapes the functioning of ecosystems. Whether common mechanisms underpin the size structure of unicellular and multicellular autotrophs is, however, unknown. Using a global data compilation, we show that individual body masses in tree and phytoplankton communities follow power-law distributions and that the average exponents of these individual size distributions (ISD) differ. Phytoplankton communities are characterized by an average ISD exponent consistent with three-quarter-power scaling of metabolism with body mass and equivalence in energy use among mass classes. Tree communities deviate from this pattern in a manner consistent with equivalence in energy use among diameter size classes. Our findings suggest that whilst universal metabolic constraints ultimately underlie the emergent size structure of autotroph communities, divergent aspects of body size (volumetric vs. linear dimensions) shape the ecological outcome of metabolic scaling in forest vs. pelagic ecosystems.
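One common way to connect metabolic scaling and energy equivalence in this context (an illustrative textbook-style argument, not the paper's fitting procedure): if individual metabolic rate scales as B ∝ m^(3/4) and the ISD follows n(m) ∝ m^(−λ), then the energy used by a logarithmic mass class is

```latex
E(m)\,\mathrm{d}\ln m \;\propto\; n(m)\,B(m)\,m\,\mathrm{d}\ln m \;\propto\; m^{\,7/4-\lambda}\,\mathrm{d}\ln m ,
```

so equivalence in energy use among mass classes requires λ = 7/4; equivalence among linear-dimension (diameter) classes instead, as found for trees, shifts the exponent.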
By using an integrative approach, we describe a new species of mayfly, Bungona (Chopralla) pontica sp. n., from Turkey. The discovery of a representative of the tropical mayfly genus Bungona in the Middle East is rather unexpected. The new species shows all the main morphological characters of the subgenus Chopralla, whose most closely related species occur in southeastern Asia. Barcoding clearly indicated that the new species represents an independent lineage isolated for a very long time from other members of the complex. The claw is equipped with two rows of three or four flattened denticles. This condition is a unique feature of Bungona (Chopralla) pontica sp. n. among West Palaearctic mayfly species. Within the subgenus Chopralla, the species can be identified by the presence of a simple, not bifid right prostheca (also present only in Bungona (Chopralla) liebenauae (Soldan, Braasch & Muu, 1987)), the shape of the labial palp, and the absence of protuberances on the pronotum.