Anomalous diffusion with a power-law time dependence ⟨|R|²(t)⟩ ≃ t^{α_i} of the mean squared displacement occurs quite ubiquitously in numerous complex systems. Often, this anomalous diffusion is characterised by crossovers between regimes with different anomalous diffusion exponents α_i. Here we consider the case when such a crossover occurs from a first regime with α₁ to a second regime with α₂ such that α₂ > α₁, i.e., accelerating anomalous diffusion. A widely used framework to describe such crossovers in a one-dimensional setting is the bi-fractional diffusion equation of the so-called modified type, involving two time-fractional derivatives defined in the Riemann-Liouville sense. We here generalise this bi-fractional diffusion equation to higher dimensions and derive its multidimensional propagator (Green's function) for the general case when a space-fractional derivative is also present, taking into consideration long-ranged jumps (Lévy flights). We derive the asymptotic behaviours of this propagator in both the short- and long-time as well as the short- and long-distance regimes. Finally, we also calculate the mean squared displacement, skewness and kurtosis in all dimensions, demonstrating that in the general case the non-Gaussian shape of the probability density function changes.
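For reference, the one-dimensional bi-fractional diffusion equation of the modified type has the standard form sketched below (D₁, D₂ are generic coefficients and ₀D_t^{1-α} denotes the Riemann-Liouville fractional derivative); its mean squared displacement is a sum of two power laws, so the smaller exponent α₁ dominates at short times and α₂ at long times:

```latex
\frac{\partial P(x,t)}{\partial t}
  = \left( D_1\, {}_{0}D_t^{1-\alpha_1} + D_2\, {}_{0}D_t^{1-\alpha_2} \right)
    \frac{\partial^2 P(x,t)}{\partial x^2},
\qquad
\langle x^2(t) \rangle
  = \frac{2 D_1\, t^{\alpha_1}}{\Gamma(1+\alpha_1)}
  + \frac{2 D_2\, t^{\alpha_2}}{\Gamma(1+\alpha_2)} .
```

Balancing the two terms of the mean squared displacement gives the crossover time scale t* = [D₁Γ(1+α₂) / (D₂Γ(1+α₁))]^{1/(α₂−α₁)}.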
We extend standard models of work-related training by explicitly incorporating workers’ locus of control into the investment decision through the returns they expect. Our model predicts that higher internal control results in increased take-up of general, but not specific, training. This prediction is empirically validated using data from the German Socioeconomic Panel (SOEP). We provide empirical evidence that locus of control influences participation in training through its effect on workers’ expectations about future wage increases rather than actual wage increases. Our results provide an important explanation for underinvestment in training and suggest that those with an external sense of control may require additional training support.
In recent years, gravitational-wave astronomy has motivated increasingly accurate perturbative studies of gravitational dynamics in compact binaries. This in turn has enabled more detailed analyses of the dynamical black holes in these systems. For example, Pound et al. [Phys. Rev. Lett. 124, 021101 (2020)] recently computed the surface area of a Schwarzschild black hole's apparent horizon, perturbed by an orbiting body, to second order in the binary's mass ratio. In this paper, we take that as the starting point for a comprehensive study of a perturbed Schwarzschild black hole's apparent and event horizon at second perturbative order, deriving generic formulas for the first- and second-order corrections to the horizons' radial profiles, surface areas, Hawking masses, and intrinsic curvatures. We find that the two horizons are remarkably similar, and that any teleological behavior of the event horizon is suppressed in several ways. Critically, we establish that at all orders, the perturbed event horizon in a small-mass-ratio binary is effectively localized in time. Even more pointedly, the event horizon is identical to the apparent horizon at linear order regardless of the source of perturbation, implying that the seemingly teleological "tidal lead," previously observed in linearly perturbed event horizons, is not genuinely teleological in origin. The two horizons do generically differ at second order, but their Hawking masses remain identical, implying that the event horizon obeys the same energy-flux balance law as the apparent horizon. At least in the case of a binary system, the difference between their surface areas remains extremely small even in the late stages of inspiral. In the course of our analysis, we also numerically illustrate puzzling behavior in the black hole's motion around the binary's center of mass.
Vibrational dynamics of adsorbates near surfaces plays an important role both for applied surface science and as a model lab for studying fundamental problems of open quantum systems. We employ a previously developed model for the relaxation of a D-Si-Si bending mode at a D:Si(100)-(2 x 1) surface, induced by a "bath" of more than 2000 phonon modes [Lorenz and P. Saalfrank, Chem. Phys. 482, 69 (2017)], to extend previous work along various directions. First, we use a Hierarchical Effective Mode (HEM) model [Fischer et al., J. Chem. Phys. 153, 064704 (2020)] to study relaxation of higher excited vibrational states than hitherto done by solving a high-dimensional system-bath time-dependent Schrödinger equation (TDSE). In the HEM approach, (many) real bath modes are replaced by (much fewer) effective bath modes. Accordingly, we are able to examine scaling laws for vibrational relaxation lifetimes for a realistic surface science problem. Second, we compare the performance of the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) approach with that of the recently developed coherent-state-based multi-Davydov-D2 Ansatz [Zhou et al., J. Chem. Phys. 143, 014113 (2015)]. Both approaches work well, with some computational advantages for the latter in the presented context. Third, we apply open-system density matrix theory in comparison with basically "exact" solutions of the multi-mode TDSEs. Specifically, we use an open-system Liouville-von Neumann (LvN) equation treating vibration-phonon coupling as Markovian dissipation in Lindblad form to quantify effects beyond the Born-Markov approximation.
One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called method of fitness levels by Wegener. It uses a partition of the search space into a sequence of levels which are traversed by the algorithm in increasing order, possibly skipping levels. An easy, but often strong upper bound for the run time can then be derived by adding the reciprocals of the probabilities to leave the levels (or upper bounds for these). Unfortunately, a similarly effective method for proving lower bounds has not yet been established. The strongest such method, proposed by Sudholt (2013), requires a careful choice of the viscosity parameters γ_{i,j}, 0 ≤ i < j ≤ n. In this paper we present two new variants of the method, one for upper and one for lower bounds. Besides the level-leaving probabilities, they only rely on the probabilities that levels are visited at all. We show that these can be computed or estimated without greater difficulties and apply our method to reprove the following known results in an easy and natural way. (i) The precise run time of the (1+1) EA on LeadingOnes. (ii) A lower bound for the run time of the (1+1) EA on OneMax, tight apart from an O(n) term. (iii) A lower bound for the run time of the (1+1) EA on long k-paths (which differs slightly from the previous result due to a small error in the latter). We also prove a tighter lower bound for the run time of the (1+1) EA on jump functions by showing that, regardless of the jump size, only with probability O(2^{-n}) can the algorithm avoid jumping over the valley of low fitness.
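As a minimal illustration of the classical fitness-level upper bound (a sketch of Wegener's original method, not the new variants introduced in the paper), the following computes the bound for the (1+1) EA on LeadingOnes, where level i collects all search points with exactly i leading ones:

```python
# Hedged sketch of Wegener's classical fitness-level upper bound for the
# (1+1) EA on LeadingOnes. Leaving level i requires flipping bit i+1 while
# keeping the first i bits unchanged, so the level-leaving probability
# satisfies p_i >= (1/n) * (1 - 1/n)**i. The upper bound on the expected
# run time is the sum of the reciprocals of these probabilities.
def fitness_level_upper_bound(n):
    total = 0.0
    for i in range(n):
        p_leave = (1.0 / n) * (1.0 - 1.0 / n) ** i
        total += 1.0 / p_leave  # sum of reciprocals over all levels
    return total

bound = fitness_level_upper_bound(64)  # roughly (e - 1) * n**2 for large n
```

For n = 64 the sum evaluates to about 1.7·n². The classical bound is loose by roughly a factor of two, because the random initial level and free-rider bits let the EA skip levels; level-visit probabilities are exactly the kind of information that can capture such slack.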
Aim: The continental-scale distribution of plant functional types, such as evergreen and summergreen needle-leaf forest, is assumed to be determined by contemporary climate. However, the distribution of summergreen needle-leaf forest of larch (Larix Mill.) differs markedly between the continents, despite relatively similar climatic conditions. The reasons for these differences are little understood. Our aim is to identify potential triggers and drivers of the current distribution patterns by comparing species' bioclimatic niches, glacial refugia and postglacial recolonization patterns.
Location: Northern hemisphere.
Taxon: Species of the genus Larix (Mill.).
Methods: We compare species distribution and dominance using species ranges and sites of dominance, as well as their occurrence on modelled permafrost extent, and active layer thickness (ALT). We compare the bioclimatic niches and calculate the niche overlap between species, using the same data in addition to modern climate data. We synthesize pollen, macrofossil and ancient DNA palaeo-evidence of past Larix occurrences of the last 60,000 years and track differences in distribution patterns through time.
Results: Bioclimatic niches show large overlaps between Asian larch species and American Larix laricina. The distribution across various degrees of permafrost extent is distinctly different for Asian L. gmelinii and L. cajanderi compared to the other species, whereas the distribution on different depths of ALT is more similar among Asian and American species. Northern glacial refugia for Larix are only present in eastern Asia and Alaska.
Main Conclusion: The dominance of summergreen larches in Asia, where evergreen conifers dominate most of the rest of the boreal forests, depends on the interaction of several factors, which allow Asian L. gmelinii and L. cajanderi to dominate where these factors coincide. These factors include the early postglacial spread out of northern glacial refugia in the absence of competitors, as well as a positive feedback mechanism between frozen ground and forest.
From gustiness to dustiness
(2022)
This study delivers the first empirical, data-driven analysis of the impact of turbulence-induced gustiness on fine dust emissions from a measuring field. To quantify the gust impact, a new measure, the Gust uptake Efficiency (GuE), is introduced. GuE gives the percentage of over- or under-proportional dust uptake due to gust activity during a wind event. For the three analyzed wind events, GuE values of up to 150% were found, yet they differed significantly per particle size class, with a tendency towards lower values for smaller particles. In addition, a high-resolution correlation analysis among 31 particle size classes and wind speed was conducted; it revealed strong negative correlation coefficients for very small particles and positive correlations for larger particles, with 5 μm appearing as an empirical threshold dividing the two regimes. We conclude with a number of suggestions for further investigations: an optimized field experiment setup, a new particle size ratio (PM1/PM0.5 in addition to PM10/PM2.5), as well as a comprehensive data-driven search for an optimal wind gust definition in terms of soil erosivity.
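The abstract does not spell out the GuE formula; as a purely hypothetical sketch, one plausible reading is the emission share attributable to gust intervals normalised by the wind share of those intervals, with values above 100% indicating over-proportional uptake. All numbers and the gust definition below are invented for illustration:

```python
# Hypothetical sketch, not the authors' exact definition: compare the share
# of dust emission measured during gust intervals with the share of wind
# those intervals contribute to the whole event.
def gust_uptake_efficiency(wind, emission, gust_mask):
    emission_share = sum(e for e, g in zip(emission, gust_mask) if g) / sum(emission)
    wind_share = sum(w for w, g in zip(wind, gust_mask) if g) / sum(wind)
    return 100.0 * emission_share / wind_share

wind = [2.0, 2.5, 6.0, 7.0, 2.0]       # m/s, invented record
emission = [1.0, 1.0, 9.0, 11.0, 1.0]  # arbitrary dust counts
gusts = [w > 5.0 for w in wind]        # naive threshold gust definition
gue = gust_uptake_efficiency(wind, emission, gusts)  # ~130%: over-proportional
```

The study's closing suggestion of a data-driven search for an optimal gust definition corresponds here to varying the threshold behind `gusts`.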
A standard approach to accelerating shortest path algorithms on networks is the bidirectional search, which explores the graph from the start and the destination simultaneously. In practice this strategy performs particularly well on scale-free real-world networks. Such networks typically have a heterogeneous degree distribution (e.g., a power-law distribution) and high clustering (i.e., vertices with a common neighbor are likely to be connected themselves). These two properties can be obtained by assuming an underlying hyperbolic geometry. To explain the observed behavior of the bidirectional search, we analyze its running time on hyperbolic random graphs and prove that it is Õ(n^{2-1/α} + n^{1/(2α)} + δ_max) with high probability, where α ∈ (1/2, 1) controls the power-law exponent of the degree distribution and δ_max is the maximum degree. This bound is sublinear, improving the obvious worst-case linear bound. Although our analysis depends on the underlying geometry, the algorithm itself is oblivious to it.
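A hedged sketch of plain bidirectional breadth-first search for unweighted shortest paths (the paper analyses the strategy; the implementation details here are our own). Always expanding the smaller frontier is the heuristic that pays off on networks with heterogeneous degrees:

```python
from collections import deque

# Bidirectional BFS: grow a BFS level alternately from the smaller frontier;
# once the two searches meet, the shortest meeting over the completed level
# gives the shortest path length.
def bidirectional_bfs(adj, s, t):
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    q_s, q_t = deque([s]), deque([t])
    while q_s and q_t:
        if len(q_s) <= len(q_t):           # expand the smaller frontier
            frontier, near, far = q_s, dist_s, dist_t
        else:
            frontier, near, far = q_t, dist_t, dist_s
        best = None
        for _ in range(len(frontier)):     # expand one full BFS level
            u = frontier.popleft()
            for v in adj[u]:
                if v in far:               # the two searches have met
                    cand = near[u] + 1 + far[v]
                    best = cand if best is None else min(best, cand)
                elif v not in near:
                    near[v] = near[u] + 1
                    frontier.append(v)
        if best is not None:
            return best
    return None                            # s and t are disconnected
```

On a 6-cycle, for example, `bidirectional_bfs` returns 3 for opposite vertices after each side has explored only its own small neighbourhood.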
Resolving the grand challenges and wicked problems of the Anthropocene will require skillfully combining a broad range of knowledge and understandings, both scientific and non-scientific, of Earth systems and human societies. One approach to this is transdisciplinary research, which has gained considerable interest over the last few decades, resulting in an extensive body of literature about transdisciplinarity. However, this has in turn led to the challenge that developing a good understanding of transdisciplinary research can require extensive effort. Here we provide a focused overview and perspective for disciplinary and interdisciplinary researchers who are interested in efficiently obtaining a solid understanding of transdisciplinarity. We describe definitions, characteristics, schools of thought, and an exemplary three-phase model of transdisciplinary research. We also discuss three key challenges that transdisciplinary research faces in the context of addressing the broader challenges of the Anthropocene, and we consider approaches to dealing with these specific challenges, based especially on our experiences with building up transdisciplinary research projects at the Institute for Advanced Sustainability Studies.
The Covid-19 pandemic imposed new constraints on empirical research and forced researchers to transfer from traditional laboratory research to the online environment. This study tested the validity of a web-based episodic memory paradigm by comparing participants' memory performance for trustworthy and untrustworthy facial stimuli in a supervised laboratory setting and an unsupervised web setting. Consistent with previous results, we observed enhanced episodic memory for untrustworthy compared to trustworthy faces. Most importantly, this memory bias was comparable in the online and the laboratory experiment, suggesting that web-based procedures are a promising tool for memory research.
Social media and self-esteem
(2022)
The relationship between social media and self-esteem is complex: studies tend to find a mixed pattern of relationships, and meta-analyses tend to find small, albeit significant, effects. One explanation is that social media use does not affect self-esteem for the majority of users, while small minorities experience either positive or negative effects, as evidenced by recent research calculating person-specific within-person effects. This suggests that the true relationship between social media use and self-esteem is person-specific and based on individual susceptibilities and uses. In recognition of these advancements, we review recent empirical studies considering differential uses and moderating variables in the relationship between social media and self-esteem, and conclude by discussing opportunities for future social media effects research.
The creation of building exposure models for seismic risk assessment is frequently challenging due to the lack of availability of detailed information on building structures. Different strategies have been developed in recent years to overcome this, including the use of census data, remote sensing imagery and volunteered graphic information (VGI). This paper presents the development of a building-by-building exposure model based exclusively on openly available datasets, including both VGI and census statistics, which are defined at different levels of spatial resolution and for different moments in time. The initial model stemming purely from building-level data is enriched with statistics aggregated at the neighbourhood and city level by means of a Monte Carlo simulation that enables the generation of full realisations of damage estimates when using the exposure model in the context of an earthquake scenario calculation. Though applicable to any other region of interest where analogous datasets are available, the workflow and approach followed are explained by focusing on the case of the German city of Cologne, for which a scenario earthquake is defined and the potential damage is calculated. The resulting exposure model and damage estimates are presented, and it is shown that the latter are broadly consistent with damage data from the 1978 Albstadt earthquake, notwithstanding the differences in the scenario. Through this real-world application we demonstrate the potential of VGI and open data to be used for exposure modelling for natural risk assessment, when combined with suitable knowledge on building fragility and accounting for the inherent uncertainties.
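The Monte Carlo step described above can be pictured as follows; the building classes, their proportions, and all names in this sketch are invented for illustration and are not taken from the Cologne model. Each realisation assigns every building with unknown typology a structural class drawn from aggregated statistics, so running the damage calculation once per realisation propagates the classification uncertainty into the damage estimates:

```python
import random

# Hedged sketch: draw a full exposure realisation by sampling a structural
# class for every building from assumed neighbourhood-level proportions.
def sample_realisation(building_ids, class_probs, rng):
    classes = list(class_probs)
    weights = [class_probs[c] for c in classes]
    return {b: rng.choices(classes, weights)[0] for b in building_ids}

rng = random.Random(42)  # fixed seed for reproducibility
probs = {"masonry": 0.6, "rc_frame": 0.3, "timber": 0.1}  # assumed shares
realisations = [sample_realisation(range(1000), probs, rng)
                for _ in range(100)]  # 100 full exposure realisations
```

Averaging the scenario damage over many such realisations yields the distribution of damage estimates rather than a single deterministic figure.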
Uppsala and Berkeley
(2022)
The development of modern photoelectron spectroscopy is reviewed with a special focus on the importance of research at Uppsala University and at Berkeley. The influence of two pioneers, Kai Siegbahn and Dave Shirley, is underlined. Early interaction between the two centers helped to kick-start the field. Both laboratories have continued to play an important role in the field, both in terms of creating new experimental capabilities and developing the theoretical understanding of the spectroscopic processes.
Neoarchean (~2.73-2.70 Ga) accretionary history of the eastern Dharwar Craton, India
(2022)
Cratonic mid-crustal plutons may contain supracrustal enclaves that preserve evidence of an earlier growth history. The Eastern Dharwar craton records Neoarchean two-stage accretionary sequential growth (2.70 and 2.55 Ga), and a chronology of its enclaves could refine orogenic models. To test whether the metamorphic history of these enclaves was related to either of these stages, phase equilibria modelling and combined Lu-Hf and Sm-Nd geochronology on garnet were conducted on metapsammite, now preserved as garnet-orthopyroxene-cordierite gneiss. Phase equilibria modelling indicates peak metamorphic conditions of ~850 °C and ~8.5 kbar (M1a), followed by near-isothermal decompression to 5-6 kbar (M1b) and isobaric cooling to ~800 °C (M1c). The thermobaric gradient related to peak metamorphic conditions, ~30 °C kbar⁻¹, is typical of collisional orogens. Regression of the whole rock and garnet for sample S17b yields a Lu-Hf isochron age of 2733 ± 29 Ma, and for sample S18, 2724 ± 13 Ma. A Lu-Hf weighted mean age for the porphyroblastic garnet suggests growth at 2725.5 ± 11.9 Ma during the M1a-M1b stages. In contrast, the whole rock of sample S17b and the garnet fractions yield a Sm-Nd isochron age of 2696 ± 10 Ma. From sample S18 the whole rock, garnet fractions, and orthopyroxene yield an isochron age of 2683 ± 15 Ma. The garnet Sm-Nd weighted mean age of 2692.0 ± 8.3 Ma constrains the M1b-M1c stages. We suggest that the protoliths of these supracrustal enclaves were deposited in an arc tectonic setting and underwent thickening followed by heating during peeled-back lithospheric convergence. Therefore, the earliest of the craton-forming accretionary stages is preserved as the ~2.73 Ga granulite-facies enclaves, marginally older than the 2.70-2.65 Ga cratonic greenstone volcanism. Tectonic exhumation of these mid-crustal granulite enclaves was in response to the late-Proterozoic (~1.7 Ga) Bhopalpatnam orogeny.
Magmatic continental rifts often constitute nascent plate boundaries, yet long-term extension rates and transient rate changes associated with these early stages of continental breakup remain difficult to determine. Here, we derive a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) of the East African Rift System for the last 0.5 m.y. We use the TanDEM-X science digital elevation model to evaluate fault-scarp geometries and determine fault throws across the volcano-tectonic axis of the inner graben of the NKR. Along rift-perpendicular profiles, amounts of cumulative extension are determined, and by integrating four new ⁴⁰Ar/³⁹Ar radiometric dates for the Silali volcano into the existing geochronology of the faulted volcanic units, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0-1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. A comparison with the decadal, geodetically determined extension rate reveals that at least 65% of the extension must be accommodated within a narrow, 20-km-wide zone of the inner rift. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
The Dirac point of a topological surface state (TSS) is protected against gapping by time-reversal symmetry. Conventional wisdom stipulates, therefore, that only through magnetisation may a TSS become gapped. However, non-magnetic gaps have now been demonstrated in Bi2Se3 systems doped with Mn or In, explained by hybridisation of the Dirac cone with induced impurity resonances. Recent photoemission experiments suggest that an analogous mechanism applies even when Bi2Se3 is surface dosed with Au. Here, we perform a systematic spin- and angle-resolved photoemission study of Au-dosed Bi2Se3. Although there are experimental conditions wherein the TSS appears gapped due to unfavourable photoemission matrix elements, our photon-energy-dependent spectra unambiguously demonstrate the robustness of the Dirac cone against high Au coverage. We further show how the spin textures of the TSS and its accompanying surface resonances remain qualitatively unchanged following Au deposition, and discuss the mechanism underlying the suppression of the spectral weight.
Associations between measures of physical fitness and cognitive performance in preschool children
(2022)
Background:
Given that recent studies report secular declines in physical fitness, associations between fitness and cognition in childhood are widely discussed. The preschool age is characterized by high neuroplasticity, which affects motor skill learning, physical fitness, and cognitive development. The aim of this study was to assess the relation of physical fitness and attention (including its individual quantitative and qualitative dimensions) as one domain of cognitive performance in preschool children. We hypothesized that fitness components requiring precise coordination are more strongly related to attention than simple fitness components.
Methods:
Physical fitness components such as static balance (i.e., single-leg stance), muscle strength (i.e., handgrip strength), muscle power (i.e., standing long jump), and coordination (i.e., hopping on one leg) were assessed in 61 healthy children (mean age 4.5 ± 0.6 years; girls n = 30). Attention was measured with the "Konzentrations-Handlungsverfahren für Vorschulkinder" [concentration-action procedure for preschoolers]. Analyses were adjusted for age, body height, and body mass.
Results:
Results from single linear regression analysis revealed a significant (p < 0.05) association between the physical fitness composite score and the attention composite score (standardized β = 0.40), showing a small to medium effect (f² = 0.14). Further, coordination was significantly related to the composite score and the quantitative dimension of attention (standardized β = 0.35, p < 0.01; standardized β = -0.33, p < 0.05). Coordination explained about 11% (composite score) and 9% (quantitative dimension) of the variance in the stepwise multiple regression model.
Conclusion:
The results indicate that performance in physical fitness, particularly coordination, is related to attention in preschool children. Thus, high performance in complex fitness components (i.e., hopping on one leg) tends to predict attention in preschool children. Further longitudinal studies should focus on the effectiveness of physical activity programs implementing coordination and complex exercises at preschool age to examine cause-effect relationships between physical fitness and attention precisely.
Tautomerism is one of the most important forms of isomerism, owing to the facile interconversion between species and the large differences in chemical properties introduced by the proton transfer connecting the tautomers. Spectroscopic techniques are often used for the characterization of tautomers. In this context, separating the overlapping spectral response of coexisting tautomers is a long-standing challenge in chemistry. Here, we demonstrate that by using resonant inelastic X-ray scattering tuned to the core excited states at the site of proton exchange between tautomers one is able to experimentally disentangle the manifold of valence excited states of each tautomer in a mixture. The technique is applied to the prototypical keto-enol equilibrium of 3-hydroxypyridine in aqueous solution. We detect transitions from the occupied orbitals into the LUMO for each tautomer in solution, which report on intrinsic and hydrogen-bond-induced orbital polarization within the π and σ manifolds at the proton-transfer site.
Context.
Even after the Rosetta mission, some of the mechanical parameters of comet 67P/Churyumov-Gerasimenko's surface material are still not well constrained. They are needed to improve our understanding of cometary activity or for planning sample return procedures.
Aims.
We discuss the physical process dominating the formation of aeolian-like surface features in the form of moats and wind-tail-like bedforms around obstacles, and investigate the mechanical and geometrical parameters involved.
Methods.
By applying the discrete element method (DEM) in a low-gravity environment, we numerically simulated the dynamics of the surface layer particles and the particle stream involved in the formation of aeolian-like morphological features. The material is composed of polydisperse spherical particles that consist of a mixture of dust and water ice, with interparticle forces given by the Hertz contact model, cohesion, friction, and rolling friction. We determined a working set of parameters that enables simulations to be reasonably realistic and investigated morphological changes when modifying these parameters.
Results.
The aeolian-like surface features are reasonably well reproduced using model materials with a tensile strength on the order of 0.1-1 Pa. Stronger materials and obstacles with round shapes impede the formation of a moat and a wind tail. The integrated dust flux required for the formation of moats and wind tails is on the order of 100 kg m⁻², which, based on the timescale of morphological changes inferred from Rosetta images, translates to a near-surface particle density on the order of 10⁻⁶-10⁻⁴ kg m⁻³.
Conclusions.
DEM modeling of the aeolian-like surface features reveals complex formation mechanisms that involve both deposition of ejected material and surface erosion. More numerical work and additional in situ measurements or sample return missions are needed to better investigate mechanical parameters of cometary surface material and to understand the mechanics of cometary activity.
In this study, we investigated retention intention and job satisfaction of 238 first-year alternatively certified (AC) teachers. Drawing on Organizational Socialization Theory, we tested the hypothesis that AC teacher extraversion and perceived school support are positively related to the two variables and mediated by self-efficacy. To test our hypothesis, we applied structural equation modeling. Our results demonstrate that extraversion and perceived social support are positively related to retention intentions and job satisfaction. In addition, self-efficacy serves as a mediator. The findings could help school administrators to better understand how to support and retain AC teachers and thus address teacher shortages.
Genetic engineering has provided humans the ability to transform organisms by direct manipulation of genomes within a broad range of applications, including agriculture (e.g., GM crops) and the pharmaceutical industry (e.g., insulin production). Developments within the last 10 years have produced new tools for genome editing (e.g., CRISPR/Cas9) that can achieve much greater precision than previous forms of genetic engineering. Moreover, these tools could offer the potential for interventions on humans, for both clinical and non-clinical purposes, resulting in a broad scope of applicability. However, their promising abilities and potential uses (including their applicability in humans for either somatic or heritable genome editing interventions) greatly increase their potential societal impacts and, as such, have brought an urgency to ethical and regulatory discussions about the application of such technology in our society. In this article, we explore different arguments (pragmatic, sociopolitical and categorical) that have been made in support of or in opposition to the new technologies of genome editing and their impact on the debate over the permissibility or otherwise of human heritable genome editing interventions in the future. For this purpose, reference is made to discussions on genetic engineering that have taken place in the field of bioethics since the 1980s. Our analysis shows that the dominance of categorical arguments has been reversed in favour of pragmatic arguments such as safety concerns. However, when it comes to involving the public in ethical discourse, we consider it crucial to widen the debate beyond such pragmatic considerations. In this article, we explore some of the key categorical as well as sociopolitical considerations raised by the potential uses of heritable genome editing interventions, as these considerations underlie many of the societal concerns and values crucial for public engagement. We also highlight that pragmatic considerations, despite their increasing importance in the work of recent authoritative sources, are unlikely to be the result of progress on outstanding categorical issues, but rather reflect the limited progress on these aspects and/or pressures in regulating the use of the technology.
Boredom has been identified as one of the greatest psychological challenges when staying at home during quarantine and isolation. However, this does not mean that the situation necessarily causes boredom. On the basis of 13 explorative interviews with bored and non-bored persons who have been under quarantine or in isolation, we explain why boredom is related to a subjective interpretation process rather than being a direct consequence of the objective situation. Specifically, we show that participants vary significantly in their interpretations of staying at home and, thus, also in their experience of boredom. While the non-bored participants interpret the situation as a relief or as irrelevant, the bored participants interpret it as a major restriction that only some are able to cope with.
As the use of free-electron laser (FEL) sources increases, so do reports of non-linear phenomena occurring in these experiments, such as saturable absorption, induced transparency and scattering breakdowns. These are well known in the laser community, but are still rarely understood and anticipated in the X-ray community, which to date lacks tools and theories to accurately predict the respective experimental parameters and results. We present a simple theoretical framework for the light-matter interactions induced by intense short X-ray pulses as available at FEL sources. Our approach allows one to investigate effects such as saturable absorption, induced transparency and scattering suppression, stimulated emission, and transmission spectra, while including the density-of-states influence relevant to soft X-ray spectroscopy in, for example, transition metal complexes or functional materials. This computationally efficient, rate-model-based approach is intuitively adaptable to most solid-state sample systems in the soft X-ray spectrum, with the potential to be extended to liquid and gas sample systems as well. The feasibility of the model for estimating the named effects and the influence of the density of states is demonstrated using the example of CoPd transition metal systems at the Co edge. We believe this work is an important contribution to the preparation, performance, and understanding of FEL-based high-intensity and short-pulse experiments, especially on functional materials in the soft X-ray spectrum.
Microwave-Assisted Synthesis of 5′-O-Methacryloylcytidine Using the Immobilized Lipase Novozym 435
(2022)
Nucleobase building blocks have been demonstrated to be strong candidates for DNA/RNA-like materials, benefiting from hydrogen-bond interactions for their physical properties. Modification at the 5′ position is the simplest way to develop nucleobase-based structures by transesterification using the lipase Novozym 435. Herein, we describe the optimization of the lipase-catalyzed synthesis of the monomer 5′-O-methacryloylcytidine with the assistance of microwave irradiation. Reaction parameters such as enzyme concentration, molar ratio of the substrates, reaction temperature and reaction time were varied to find the optimum reaction conditions in terms of obtaining the highest yield.
Here I present a comparison between two of the most widely used reduced-complexity models for the representation of sediment transport and deposition processes, namely the transport-limited (TL) model and the under-capacity (ξ–q) model more recently developed by Davy and Lague (2009). Using both models, I investigate the behavior of a sedimentary continental system of length L fed by a fixed sedimentary flux from a catchment of size A₀ in a nearby active orogen, through which sediments transit to a fixed base level representing a large river, a lake or an ocean. This comparison shows that the two models share the same steady-state solution, for which I derive a simple 1D analytical expression that reproduces the major features of such sedimentary systems: a steep fan that connects to a shallower alluvial plain. The resulting fan geometry obeys basic observational constraints on fan size and slope with respect to the upstream drainage area A₀. The solution depends strongly on the size of the system, L, in comparison to a distance L₀ determined by the size of A₀, which gives rise to two fundamentally different types of sedimentary systems: constrained systems, where L < L₀, and open systems, where L > L₀. I derive simple expressions that show the dependence of the system response time on the system characteristics, such as its length, the size of the upstream catchment area, the amplitude of the incoming sedimentary flux and the respective rate parameters (diffusivity or erodibility) of the two models. I show that the ξ–q model predicts longer response times. I demonstrate that although the manner in which signals propagate through the sedimentary system differs greatly between the two models, both predict that perturbations lasting longer than the response time of the system can be recorded in the stratigraphy of the sedimentary system, and in particular of the fan.
Interestingly, the ξ–q model predicts that all perturbations in the incoming sedimentary flux will be transmitted through the system, whereas the TL model predicts that rapid perturbations cannot be. I finally discuss why and under which conditions these differences are important and propose observational ways to determine which of the two models is more appropriate to represent natural systems.
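The steady-state fan profile of the transport-limited model can be sketched numerically. The snippet below is a minimal illustration under assumed parameter values and an assumed TL flux law q_s = K·A(x)^m·S(x) with drainage area growing linearly downstream; it is not the authors' exact formulation:

```python
import numpy as np

# Hypothetical sketch of a steady-state transport-limited (TL) fan profile.
# Assumes a TL flux law q_s = K * A(x)**m * S(x) with drainage area growing
# downstream as A(x) = A0 + w*x; all parameter values are illustrative.
L, A0, q_in = 100e3, 1e8, 100.0      # system length (m), catchment area (m^2), input flux
K, m, w = 0.2, 0.5, 5e3              # rate parameter, area exponent, area-growth rate

x = np.linspace(0.0, L, 1000)
S = q_in / (K * (A0 + w * x) ** m)   # steady state: sediment flux is constant along x
dx = x[1] - x[0]
z = np.cumsum(S[::-1])[::-1] * dx    # integrate slope upstream from base level z ~ 0 at x = L

# the profile is steepest at the fan apex (x = 0) and flattens into the alluvial plain
```

Because the flux is constant at steady state, the slope decreases wherever the drainage area grows, reproducing the steep-fan-to-shallow-plain geometry described above.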
Suppression of the TeV Pair-Beam-Plasma Instability by a Tangled Weak Intergalactic Magnetic Field
(2022)
We study the effect of a tangled, sub-fG intergalactic magnetic field (IGMF) on the electrostatic instability of a blazar-induced pair beam. A sufficiently strong IGMF may significantly deflect the TeV pair beams, which would reduce the flux of secondary cascade emission below the observational limits. A similar flux reduction may result from the electrostatic beam-plasma instability, which operates best in the absence of an IGMF. Considering IGMFs with correlation lengths smaller than a kiloparsec, we find that weak magnetic fields increase the transverse momentum of the pair-beam particles, which dramatically reduces the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. We show that the beam-plasma instability is eliminated as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we do not know a viable process that could explain the observed absence of GeV-scale cascade emission.
This study quantifies the distributional effects of the minimum wage introduced in Germany in 2015. Using detailed Socio-Economic Panel survey data, we assess changes in the hourly wages, working hours, and monthly wages of employees who were entitled to be paid the minimum wage. We employ a difference-in-differences analysis, exploiting regional variation in the "bite" of the minimum wage. At the bottom of the hourly wage distribution, we document wage growth of 9% in the short term and 21% in the medium term. At the same time, we find a reduction in working hours, such that the increase in hourly wages does not lead to a proportionate increase in monthly wages. We conclude that working-hours adjustments play an important role in the distributional effects of minimum wages.
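The difference-in-differences design can be illustrated on synthetic data. This is a generic sketch, not the authors' specification: "treated" stands in for a high-bite region, "post" for the period after 2015, and the 9% effect is planted by construction:

```python
import numpy as np

# Rough illustration of a difference-in-differences (DiD) design on synthetic
# data (not the SOEP sample): "treated" = high minimum-wage "bite" region,
# "post" = observed after the 2015 introduction; the 9% effect is assumed.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
effect = 0.09
log_wage = (2.0 + 0.05 * treated + 0.02 * post
            + effect * treated * post + rng.normal(0.0, 0.1, n))

# OLS with an interaction term; beta[3] is the DiD estimate of the effect
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
```

The interaction coefficient recovers the planted effect because group- and period-specific level differences are absorbed by the `treated` and `post` terms.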
We examined the relationship between the mechanical strength of the lithosphere and the distribution of seismicity within the overriding continental plate of the southern Central Andes (SCA, 29°S-39°S), where the oceanic Nazca Plate changes its subduction angle between 33°S and 35°S, from subhorizontal in the north (<5°) to steep in the south (~30°). We computed the long-term lithospheric strength based on an existing 3D model describing variations in thickness, density, and temperature of the main geological units forming the lithosphere of the SCA and adjacent forearc and foreland regions. The comparison between our results and seismicity within the overriding plate (upper-plate seismicity) shows that most of the events occur within the modeled brittle domain of the lithosphere. The depth at which the deformation mode switches from brittle frictional to thermally activated ductile creep provides a conservative lower bound on the seismogenic zone in the overriding plate of the study area. We also found that the majority of upper-plate earthquakes occur within the realm of first-order contrasts in integrated strength (12.7-13.3 log(Pa m) in the Andean orogen vs. 13.5-13.9 log(Pa m) in the forearc and the foreland). Specific conditions characterize the mechanically strong northern foreland of the Andes, where seismicity is likely explained by the effects of slab steepening.
Draft Genome Sequence of Nocardioides alcanivorans NGK65ᵀ, a Hexadecane-Degrading Bacterium
(2022)
The Gram-positive bacterium Nocardioides alcanivorans NGK65ᵀ was isolated from plastic-polluted soil and cultivated on medium with polyethylene as the sole carbon source. Nanopore sequencing revealed the presence of candidate enzymes for the biodegradation of polyethylene. Here, we report the draft genome of this newly described member of the terrestrial plastisphere.
Modern plant cultivars often possess superior growth characteristics, but only within a limited range of environmental conditions. Due to climate change, crops will be exposed to distressing abiotic conditions more often in the future; heat stress serves as the example in this study. To support the identification of tolerant germplasm and advance screening techniques with a novel multivariate evaluation method, a diversity panel of 14 tomato genotypes, comprising Mediterranean landraces of Solanum lycopersicum, the cultivar "Moneymaker" and Solanum pennellii LA0716, which served as internal references, was assessed for tolerance to long-term heat stress. After 5 weeks of growth, young tomato plants were exposed to either control (22/18 °C) or heat-stress (35/25 °C) conditions for 2 weeks. Within this period, water consumption, leaf angles and leaf color were determined. Additionally, gas exchange and leaf temperature were investigated. Finally, biomass traits were recorded. The resulting multivariate dataset on phenotypic plasticity was evaluated to test the hypothesis that more tolerant genotypes have less affected phenotypes upon stress adaptation. To this end, a cluster-analysis-based approach was developed that involved a principal component analysis (PCA), dimension reduction and the determination of Euclidean distances. These distances served as a measure of the phenotypic plasticity upon heat stress. Statistical evaluation allowed the identification and classification of homogeneous groups, each consisting of four putatively more or less heat-stress-tolerant genotypes. The resulting classification of the internal references as "tolerant" highlights the applicability of our proposed tolerance assessment model.
PCA factor analysis on principal components 1-3, which covered 76.7% of the variance within the phenotypic data, suggested that laborious measurements such as gas exchange might be replaced by the determination of leaf temperature in larger heat-stress screenings. Hence, the overall advantage of the presented method is its suitability for both planning and executing screenings for abiotic stress tolerance using multivariate phenotypic data, helping to overcome the challenge of identifying abiotic-stress-tolerant plants in existing germplasms and to promote sustainable agriculture for the future.
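The plasticity metric described above (PCA, dimension reduction, Euclidean distance between control and stress scores) can be sketched as follows. The trait matrix is synthetic and the choice of three components is an assumption for illustration:

```python
import numpy as np

# Sketch of a PCA-based plasticity metric: PCA on standardized trait data, then
# the Euclidean distance between each genotype's control and heat-stress scores
# in reduced PC space. The trait matrix below is synthetic, not the study data.
rng = np.random.default_rng(1)
ctrl = rng.normal(0.0, 1.0, (14, 8))             # 14 genotypes x 8 traits, control
heat = ctrl + rng.normal(0.5, 0.3, (14, 8))      # shifted phenotypes under heat stress

X = np.vstack([ctrl, heat])
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each trait
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD of the centered data
scores = X @ Vt[:3].T                             # dimension reduction: first 3 PCs

plasticity = np.linalg.norm(scores[:14] - scores[14:], axis=1)
# smaller distance = less affected phenotype = putatively more heat-tolerant genotype
```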
Data recorded by distributed acoustic sensing (DAS) along an optical fibre sample the spatial and temporal properties of seismic wavefields at high spatial density, often leading to massive amounts of data when collected for seismic monitoring along many-kilometre-long cables. The spatially coherent signals from weak seismic arrivals within the data are often obscured by incoherent noise. We present a flexible and computationally efficient filtering technique that makes use of the dense spatial and temporal sampling and can handle the large amount of data. The presented adaptive frequency-wavenumber filter suppresses incoherent seismic noise while amplifying the coherent wavefield. We analyse the response of the filter in the time and spectral domains, and we demonstrate its performance on a noisy data set recorded in a vertical borehole observatory containing active and passive seismic phase arrivals. Lastly, we present a performant open-source software implementation enabling real-time filtering of large DAS data sets.
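A basic, non-adaptive frequency-wavenumber filter conveys the core idea: energy whose apparent velocity f/k lies outside a plausible seismic range is damped. This generic textbook sketch is not the paper's adaptive filter:

```python
import numpy as np

# Toy frequency-wavenumber (f-k) fan filter for DAS data arranged as
# channels x time samples: spectral components whose apparent velocity
# f/k lies inside [vmin, vmax] are kept, everything else is damped.
# This is a generic f-k filter, not the adaptive filter described above.
def fk_filter(data, dx, dt, vmin=1000.0, vmax=6000.0, damp=0.05):
    F = np.fft.fft2(data)
    k = np.fft.fftfreq(data.shape[0], dx)     # spatial wavenumber (1/m)
    f = np.fft.fftfreq(data.shape[1], dt)     # temporal frequency (Hz)
    K, Fq = np.meshgrid(k, f, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        v = np.abs(np.where(K != 0, Fq / K, np.inf))
    mask = np.where((v >= vmin) & (v <= vmax), 1.0, damp)
    return np.fft.ifft2(F * mask).real

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 256))       # purely incoherent noise section
filtered = fk_filter(noisy, dx=10.0, dt=0.004)
# incoherent noise spreads over all apparent velocities, so most of it is damped
```

Since the velocity mask depends only on |f/k|, it is symmetric in the spectrum and the inverse transform stays real.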
The drug salinomycin (SAL) is a polyether antibiotic used in veterinary medicine as a coccidiostat and growth promoter. Recently, SAL was suggested as a potential anticancer drug. However, the transformation products (TPs) resulting from metabolic and environmental degradation of SAL are incompletely known, and structural information is missing. In this study, we therefore systematically investigated the formation and identification of SAL-derived TPs using electrochemistry (EC) in an electrochemical reactor as well as rat and human liver microsome incubation (RLM and HLM) as TP-generating methods. Liquid chromatography (LC) coupled to high-resolution mass spectrometry (HRMS) was applied to determine accurate masses in a suspected-target analysis, to identify TPs and to deduce the modification reactions of the derived TPs. A total of 14 new, structurally different TPs were found (two EC-TPs, five RLM-TPs, and 11 HLM-TPs). The main modification reactions are decarbonylation for EC-TPs and oxidation (hydroxylation) for RLM/HLM-TPs. Of particular interest are potassium-adduct TPs identified after liver microsome incubation, because these might have been overlooked or misassigned as oxidized sodium adducts in previous, non-HRMS-based studies due to the small mass difference of 21 mDa between K and O + Na. The MS fragmentation patterns of the TPs were used to predict the positions of the identified modifications in the SAL molecule. The obtained knowledge regarding transformation reactions and novel TPs of SAL will contribute to the elucidation of SAL metabolites and the prediction of their structures.
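The 21 mDa interference mentioned above follows directly from the monoisotopic masses, as a quick worked check shows:

```python
# Worked check of the ~21 mDa interference: exchanging K for Na plus one added O
# shifts the adduct mass by only about 21 mDa. Monoisotopic masses in u (rounded
# literature values).
m_K = 38.963707    # 39K
m_Na = 22.989770   # 23Na
m_O = 15.994915    # 16O

delta_mDa = ((m_O + m_Na) - m_K) * 1000.0
# → about 21 mDa, resolvable only with high-resolution mass spectrometry
```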
Nanostructured silicon and silicon-aluminum compounds are synthesized by a novel synthesis strategy based on spark plasma sintering (SPS) of silicon nanopowder, mesoporous silicon (pSi), and aluminum nanopowder. The interplay of metal-assisted crystallization and inherent porosity is exploited to largely suppress the thermal conductivity. Morphology studies and temperature-dependent thermal conductivity measurements allow us to elucidate the impact of porosity and nanostructure on the macroscopic heat transport. Analytical electron microscopy along with quantitative image analysis is applied to characterize the sample morphology in terms of domain-size and interpore-distance distributions. We demonstrate that nanostructured domains and high porosity can be maintained in densified mesoporous silicon samples. In contrast, strong grain growth is observed for sintered nanopowders under similar sintering conditions. We observe that aluminum agglomerations induce local grain growth, while aluminum diffusion is observed in porous silicon and dispersed nanoparticles. A detailed analysis of the measured thermal conductivity between 300 and 773 K allows us to distinguish the reduction in thermal conductivity caused by porosity from the reduction induced by phonon scattering at nanosized domains. With a modified Landauer/Lundstrom approach, the relative thermal conductivity and the scattering length are extracted. The relative thermal conductivity confirms the applicability of Kirkpatrick's effective medium theory. The extracted scattering lengths are in excellent agreement with the harmonic mean of the log-normal-distributed domain sizes and the interpore distances, combined via Matthiessen's rule.
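The final combination step can be sketched numerically. The distributions and length scales below are invented for illustration; only the arithmetic (harmonic mean plus Matthiessen's rule) reflects the analysis described above:

```python
import numpy as np

# Back-of-the-envelope combination of phonon scattering lengths via
# Matthiessen's rule; the distribution parameters below are invented.
rng = np.random.default_rng(2)
domains = rng.lognormal(mean=np.log(50e-9), sigma=0.5, size=10_000)  # domain sizes (m)
l_domain = len(domains) / np.sum(1.0 / domains)   # harmonic mean of log-normal sizes
l_pore = 30e-9                                    # assumed mean interpore distance (m)

# Matthiessen's rule: scattering rates (inverse lengths) add up
l_eff = 1.0 / (1.0 / l_domain + 1.0 / l_pore)
# l_eff is shorter than either individual length scale
```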
Individuals with diabetes face higher risks of macro- and microvascular complications than their non-diabetic counterparts. The concept of precision medicine in diabetes aims to optimise treatment decisions for individual patients to reduce the risk of major diabetic complications, including cardiovascular outcomes, retinopathy, nephropathy, neuropathy and overall mortality. In this context, prognostic models can be used to estimate an individual's risk for relevant complications based on individual risk profiles. This review aims to place the concept of prediction modelling into the context of precision prognostics. As opposed to the identification of diabetes subsets, the development of prediction models, including the selection of predictors based on their longitudinal association with the outcome of interest and their discriminatory ability, allows estimation of an individual's absolute risk of complications. As a consequence, such models provide information about potential patient subgroups and their treatment needs. This review provides insight into the methodological issues specifically related to the development and validation of prediction models for diabetes complications. We summarise existing prediction models for macro- and microvascular complications, commonly included predictors, and examples of available validation studies. The review also discusses the potential of non-classical risk markers and omics-based predictors. Finally, it gives insight into the requirements and challenges related to the clinical application and implementation of developed prediction models to optimise medical decision making.
We revisited 10 known exoplanetary systems using publicly available data provided by the Transiting Exoplanet Survey Satellite (TESS). The sample presented in this work consists of short-period transiting exoplanets with inflated radii and large reported uncertainties on their planetary radii. The precise determination of these values is crucial in order to develop accurate evolutionary models and understand the inflation mechanisms of these systems. Aiming to evaluate the planetary radius measurement, we made use of the planet-to-star radius ratio, a quantity that can be measured during a transit event. We fit the obtained transit light curves of each target with a detrending model and a transit model. Furthermore, we used emcee, which is based on a Markov chain Monte Carlo approach, to assess the best-fit posterior distributions of each system parameter of interest. We refined the planetary radius of WASP-140 b by approximately 12%, and we improved the precision of its reported asymmetric radius uncertainties by approximately 86% and 67%. We also refined the orbital parameters of WASP-120 b at the 2σ level. Moreover, using the high-cadence TESS datasets, we were able to resolve a discrepancy in the literature regarding the planetary radius of the exoplanet WASP-93 b. For all the other exoplanets in our sample, even though there is a tentative trend that the planetary radii of (near-)grazing systems have been slightly overestimated in the literature, the planetary radius estimates and the orbital parameters were confirmed with independent observations from space, showing that TESS and ground-based observations are overall in good agreement.
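The link between a measured transit and the planetary radius is a one-line relation, illustrated here with made-up numbers (not the fitted WASP values):

```python
# The planet-to-star radius ratio follows from the transit depth:
# depth ≈ (Rp/Rs)**2. Numbers below are illustrative, not the fitted WASP values.
depth_ppm = 14000.0                   # assumed transit depth in parts per million
ratio = (depth_ppm * 1e-6) ** 0.5     # Rp/Rs from the depth
Rs_sun = 0.9                          # assumed stellar radius in solar radii
Rp_jup = ratio * Rs_sun * 9.731       # 1 solar radius ≈ 9.731 Jupiter radii
```

This is why a better-constrained depth (and stellar radius) directly tightens the planetary radius.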
Emerging evidence has highlighted the important role of local contexts for the integration trajectories of asylum seekers and refugees. Germany's policy of randomly allocating asylum seekers across the country may advantage some and disadvantage others in terms of opportunities for equal participation in society. This study explores whether asylum seekers who have been allocated to rural areas experience disadvantages in language acquisition compared to those allocated to urban areas. We derive testable assumptions using a Directed Acyclic Graph (DAG), which are then tested using large-N survey data (IAB-BAMF-SOEP refugee survey). We find that living in a rural area has no negative total effect on language skills. Further, the findings suggest that this "null effect" is the result of two processes which offset each other: while asylum seekers in rural areas have slightly lower access to formal, federally organized language courses, they have more regular exposure to German speakers.
Sedentarism is a risk factor for depression and anxiety. People living with the human immunodeficiency virus (PLWH) have a higher prevalence of anxiety and depression compared to HIV-negative individuals. This cross-sectional study (n = 450, median age 44 (19-75), 7.3% female) evaluates the prevalence rates and prevalence ratios (PR) of anxiety and/or depression in PLWH associated with recreational exercise. A decreased likelihood of having anxiety (PR = 0.57; 0.36-0.91; p = 0.01), depression (PR = 0.41; 0.36-0.94; p = 0.01), and comorbid anxiety and depression (PR = 0.43; 0.24-0.75; p = 0.002) was found in exercising compared to non-exercising PLWH. Recreational exercise is associated with a lower risk of anxiety and/or depression. Further prospective studies are needed to provide insights into the direction of this association.
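The prevalence-ratio statistic reported above is simple ratio arithmetic on a 2x2 table; the counts below are made up for illustration:

```python
# Prevalence-ratio (PR) arithmetic: prevalence in the exposed (exercising) group
# divided by prevalence in the unexposed group. The 2x2 counts are invented.
def prevalence_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

pr = prevalence_ratio(20, 200, 44, 250)
# pr < 1 indicates a lower prevalence among the exposed (exercising) group
```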
Using sodium chloride (NaCl) for de-icing roads is known to have severe consequences for freshwater organisms when it is washed into water bodies. N-(1,3-dimethylbutyl)-N′-phenyl-p-phenylenediamine, also known as 6PPD, is an antiozonant mainly found in automobile tire rubber, where it prevents ozone-mediated cracking and wear. Especially its ozonated derivative, 6PPD-quinone, which is washed into streams after storm events, has been found to be toxic to coho salmon. Studies on other freshwater organisms could not confirm those findings, pointing towards distinct species-specific differences. Storm events result in greater run-off of all water-soluble contaminants into freshwater bodies, potentially enhancing the concentrations of both chloride and 6PPD during winter. Here we show that these two contaminants have synergistic negative effects on the population growth of the rotifer Brachionus calyciflorus, a common freshwater herbivore. While only high concentrations of 6PPD, and even higher concentrations of 6PPD-quinone, beyond environmentally relevant levels, had lethal effects on rotifers, the addition of NaCl enhanced the sensitivity of the rotifers towards 6PPD, so that its negative effects were more pronounced at lower concentrations. Similarly, 6PPD increased the lethal effect of NaCl. Our results support the species-specific toxicity of 6PPD and demonstrate a synergistic effect of the antiozonant on the toxicity of other environmentally relevant stressors, such as road-salt contamination.
Randomised one-step time integration methods for deterministic operator differential equations
(2022)
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al. (Stat Comput 27(4):1065-1082, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
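The randomisation strategy of Conrad et al. can be sketched for a scalar ODE. This is a minimal illustration, not the operator-valued setting of the paper; the noise scaling h^1.5 for the order-1 Euler method is our assumption for the sketch:

```python
import numpy as np

# Sketch of the randomisation idea of Conrad et al. for a forward-Euler solver:
# after each deterministic step, add a mean-zero Gaussian perturbation whose
# standard deviation matches the order of the local truncation error
# (taken here as h**1.5 for the order-1 Euler method; an assumption).
def randomised_euler(f, y0, t0, t1, h, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        t, y = t0, np.array(y0, dtype=float)
        while t < t1 - 1e-12:
            y = y + h * f(t, y) + rng.normal(0.0, h**1.5, size=y.shape)
            t += h
        samples.append(y)
    return np.array(samples)    # ensemble spread quantifies discretisation uncertainty

ys = randomised_euler(lambda t, y: -y, [1.0], 0.0, 1.0, 0.01)
# the ensemble mean stays close to the exact solution exp(-1) of y' = -y, y(0) = 1
```

Re-running the solver gives an ensemble of trajectories whose spread serves as a proxy for the unknown time-discretisation error.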
In this work, it is revealed how the photoinduced deformation of azobenzene-containing polymers relates to the local direction of the optomechanical stresses generated during irradiation with interference patterns (IPs). This can be substantiated by the modeling approach proposed by Saphiannikova et al., which describes the directional photodeformations in glassy side-chain azobenzene polymers and shows that these deformations arise from the reorientation of rigid backbone segments along the light polarization direction. In experiments and modeling, surface relief gratings are inscribed in pre-elongated photosensitive colloids a few micrometers in length using different IPs such as SS, PP, ±45°, SP, RL, and LR. The deformation of the colloidal particles is studied in situ, whereby the local variation of the polymer topography is assigned to the local distribution of the electric-field vector for all IPs. Experimentally observed shapes are reproduced exactly by modeling the azopolymer samples as visco-plastic bodies in the finite-element software ANSYS. The orientation approach correctly predicts the local variations of the main axis of light-induced stress in each interference pattern for both initially isotropic and highly oriented materials. With this work, it is suggested that the orientation approach provides a self-sufficient and convincing mechanism for describing photoinduced deformation in azopolymer films that, in principle, does not require auxiliary assumptions.
In this work, the fabrication and characterization of a simple, inexpensive, and effective microfluidic paper-based analytical device (µPAD) for monitoring DNA samples is reported. The glass-microfiber-based chip was fabricated by a new wax-based transfer-printing technique and an electrode printing process. It is capable of moving DNA effectively in a time-dependent fashion. The nucleic acid sample is not damaged by this process and accumulates in front of the anode, but not directly on the electrode, so that further DNA processing is feasible. The system allows the DNA to be purified by separating it from other components in sample mixtures, such as proteins. Furthermore, it is demonstrated that DNA can be moved through several layers of the glass fiber material. This proof of concept will provide the basis for the development of rapid test systems, e.g., for the detection of pathogens in water samples.
Wages and wage dynamics directly affect individuals' and families' daily lives. In this article, we show how major theoretical branches of research on wages and inequality (cumulative advantage (CA), human capital theory, and the lifespan perspective) can be integrated into a coherent statistical framework and analyzed with multilevel dynamic structural equation modeling (DSEM). This opens up a new way to empirically investigate the mechanisms that drive growing inequality over time. We demonstrate the new approach using longitudinal, representative U.S. data (NLSY-79). Analyses revealed fundamental between-person differences in both initial wages and autoregressive wage growth rates across the lifespan. Only 0.5% of the sample experienced "strict" CA and unbounded wage growth, whereas most individuals showed logarithmic wage growth over time. Adolescent intelligence and adult educational levels explained substantial heterogeneity in both parameters. We discuss how DSEM may help researchers study CA processes and related developmental dynamics, and we highlight the extensions and limitations of the DSEM framework.
The biodiversity of tundra areas at northern high latitudes is threatened by forest invasion under global warming. However, poorly understood nonlinear responses of the treeline ecotone mean that the timing and extent of tundra losses are unclear, yet policymakers need such information to optimize conservation efforts. Our individual-based model LAVESI, developed for the Siberian tundra-taiga ecotone, can help improve our understanding. We therefore simulated treeline migration trajectories until the end of the millennium, in which the northward-advancing treeline causes a loss of tundra area. Our simulations reveal that the treeline follows climate warming with a severe, century-long time lag, which is overcompensated by the infilling of stands in the long run, even when temperatures cool again. They further reveal that only under ambitious mitigation strategies (Representative Concentration Pathway 2.6) will ~30% of the original tundra area remain in the north, separated into two disjunct refugia.
The increase in the performance of organic solar cells observed over the past few years has reinvigorated the search for a deeper understanding of the loss and extraction processes in this class of device. A detailed knowledge of the density of free charge carriers under different operating conditions and illumination intensities is a prerequisite to quantify the recombination and extraction dynamics. Differential charging techniques are a promising approach to experimentally obtain the charge carrier density under the aforementioned conditions. In particular, the combination of transient photovoltage and photocurrent as well as impedance and capacitance spectroscopy have been successfully used in past studies to determine the charge carrier density of organic solar cells. In this Tutorial, these experimental techniques will be discussed in detail, highlighting fundamental principles, practical considerations, necessary corrections, advantages, drawbacks, and ultimately their limitations. Relevant references introducing more advanced concepts will be provided as well. Therefore, the present Tutorial might act as an introduction and guideline aimed at new prospective users of these techniques as well as a point of reference for more experienced researchers.
A reliable estimation of flood impacts enables meaningful flood risk management and rapid assessments of flood impacts shortly after a flood. The 2021 flood in Central Europe and the analysis of its impacts revealed that these estimations are still inadequate. We therefore investigate the influence of different data sets and methods with the aim of improving flood impact estimates. We estimated the economic flood impacts on private households and companies for the 2013 flood event in Germany using (a) two different flood maps, (b) two approaches to mapping exposed objects, based on OpenStreetMap and the Basic European Asset Map, (c) two different approaches to estimating asset values, and (d) tree-based models and stage-damage functions to describe the vulnerability. At the macro scale, water masks lead to reasonable impact estimations. At the micro and meso scales, the identification of affected objects by means of water masks is insufficient, leading to unreliable estimations. The choice of exposure data set has the greatest influence on the estimations. We find that reliable impact estimations are feasible with the numbers of flood-affected objects reported by the municipalities. We conclude that more effort should be put into the investigation of different exposure data sets and the estimation of asset values. Furthermore, we recommend the establishment of a reporting system in the municipalities for the fast identification of flood-affected objects shortly after an event.
Cr(CO)₆ was investigated by X-ray absorption spectroscopy. The spectral signature at the metal edge provides information about the back-bonding of the metal in this class of complexes. Among the processes it participates in is ligand substitution, in which a carbonyl ligand is ejected through excitation to a metal-to-ligand charge transfer (MLCT) band. The unsaturated carbonyl Cr(CO)₅ is stabilized by solution media in a square-pyramidal geometry and further reacts with the solvent. Multi-site-specific probing after photoexcitation was used to investigate the ligand-substitution photoreaction, which is a common first step in catalytic processes involving metal carbonyls. The data were analysed with the aid of TD-DFT computations for different models of the photoproducts, and signatures of ligand rearrangement after substitution were found. The rearrangement was found to occur in about 790 ps, in agreement with former studies of the photoreaction.
Can we rely on computational methods to accurately analyze complex texts? To answer this question, we compared different dictionary and scaling methods used in predicting the sentiment of German literature reviews to the "gold standard" of human-coded sentiments. Literature reviews constitute a challenging text corpus for computational analysis, as they not only contain different text levels (for example, a summary of the work and the reviewer's appraisal) but are also characterized by subtle and ambiguous language. To take the nuanced sentiments of literature reviews into account, we worked with a metric rather than a dichotomous scale for sentiment analysis. The results of our analyses show that the predicted sentiments of prefabricated dictionaries, which are computationally efficient and require minimal adaptation, have a low to medium correlation with the human-coded sentiments (r between 0.32 and 0.39). The accuracy of self-created dictionaries using word embeddings (both pre-trained and self-trained) was considerably lower (r between 0.10 and 0.28). Given the high coding intensity, the contingency on seed selection, and the degree of data pre-processing of word embeddings that we found with our data, we would not recommend them for complex texts without further adaptation. While fully automated approaches appear not to work for accurately predicting the sentiments of complex texts such as ours, we found relatively high correlations with a semi-automated approach (r of around 0.6), which, however, requires intensive human coding efforts for the training dataset. In addition to illustrating the benefits and limits of computational approaches in analyzing complex text corpora and the potential of metric rather than binary scales of text sentiment, we also provide a practical guide for researchers to select an appropriate method and degree of pre-processing when working with complex texts.
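A dictionary-based scorer on a metric scale reduces to averaging word valences, which also illustrates why nuanced reviews defeat such methods. The lexicon entries and weights below are invented for illustration:

```python
# Minimal dictionary-based sentiment scoring on a metric scale, in the spirit of
# the prefabricated-dictionary approach: average the valence of matched words.
# The lexicon entries and their weights are invented for illustration.
LEXICON = {"brillant": 1.0, "fesselnd": 0.8, "schwach": -0.7, "enttäuschend": -1.0}

def sentiment(text):
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

score = sentiment("ein brillant erzählter aber enttäuschend endender Roman")
# → 0.0: one strongly positive and one strongly negative hit cancel out,
# illustrating why subtle, mixed reviews are hard for dictionary methods
```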
Agriculture in India accounts for 18% of the country's greenhouse gas (GHG) emissions and uses significant land and water. Various socioeconomic factors and food subsidies influence diets in India. Indian food systems face the challenge of sustainably nourishing the 1.3 billion population. However, existing studies focus on a few food system components, and a holistic analysis is still missing. We characterize Indian food systems using six food system components: food consumption, production, processing, policy, environmental footprints, and socioeconomic factors, drawing on the latest Indian household consumer expenditure survey. We identify 10 Indian food systems using k-means cluster analysis on 15 food system indicators belonging to the six components. Based on the major source of calorie intake, we classify the ten food systems into production-based (3), subsidy-based (3), and market-based (4) food systems. Home-produced and subsidized food contribute up to 2000 kcal/consumer unit (CU)/day and 1651 kcal/CU/day, respectively, in these food systems. The calorie intake of 2158 to 3530 kcal/CU/day across the food systems reveals issues of malnutrition in India. Environmental footprints are commensurate with calorie intake in the food systems. Embodied GHG, land footprint, and water footprint estimates range from 1.30 to 2.19 kg CO₂eq/CU/day, 3.89 to 6.04 m²/CU/day, and 2.02 to 3.16 m³/CU/day, respectively. Our study provides a holistic understanding of Indian food systems for targeted nutritional interventions on household malnutrition in India while also protecting planetary health.
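The clustering step can be sketched with a tiny k-means implementation (Lloyd's algorithm). The two-blob data and k = 2 are synthetic stand-ins, not the survey indicators:

```python
import numpy as np

# Tiny k-means (Lloyd's algorithm) sketch of the clustering step: one row per
# observation, 15 indicator columns. The two-blob data are synthetic.
def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (100, 15)),   # stand-in for one food system
               rng.normal(2.0, 0.3, (100, 15))])  # stand-in for another
labels, centers = kmeans(X, 2)
```

With well-separated groups the algorithm recovers the planted partition; in practice the indicators would be standardized first.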
Fifteen N-butylpyridinium salts, five monometallic [C₄Py]₂[MBr₄] and ten bimetallic [C₄Py]₂[Mᵃ₀.₅Mᵇ₀.₅Br₄] (M = Co, Cu, Mn, Ni, Zn), were synthesized, and their structures and thermal and electrochemical properties were studied. All the compounds are ionic liquids (ILs) with melting points between 64 and 101 °C. Powder and single-crystal X-ray diffraction show that all ILs are isostructural. The electrochemical stability windows of the ILs are between 2 and 3 V. The conductivities at room temperature are between 10⁻⁵ and 10⁻⁶ S cm⁻¹. At elevated temperatures, the conductivities reach up to 10⁻⁴ S cm⁻¹ at 70 °C. The structures and properties of the current bromide-based ILs were also compared with those of previous examples using chloride ligands, which illustrated differences and similarities between the two groups of ILs.
How does a systematic time dependence of the diffusion coefficient D(t) affect the ergodic and statistical characteristics of fractional Brownian motion (FBM)? Here, we answer this question by studying a set of standard statistical quantifiers relevant to single-particle-tracking (SPT) experiments. We examine, for instance, how the behavior of the ensemble- and time-averaged mean-squared displacements, the standard MSD ⟨x²(Δ)⟩ and the TAMSD ⟨δ²(Δ)⟩, of FBM, featuring ⟨x²(Δ)⟩ = ⟨δ²(Δ)⟩ ∝ Δ^(2H) (where H is the Hurst exponent and Δ is the lag time), changes in the presence of a power-law, deterministically varying diffusivity D(t) ∝ t^(α−1), germane to the process of scaled Brownian motion (SBM), which determines the strength of the fractional Gaussian noise. The resulting compound "scaled-fractional" Brownian motion, or FBM-SBM, is found to be nonergodic, with ⟨x²(Δ)⟩ ∝ Δ^(α+2H−1) and ⟨δ²(Δ)⟩ ∝ Δ^(2H). We also detect a stalling behavior of the MSDs for very subdiffusive SBM and FBM, when α + 2H − 1 < 0. The distribution of particle displacements for FBM-SBM remains Gaussian, as for the parent processes FBM and SBM, in the entire region of scaling exponents (0 < α < 2 and 0 < H < 1). The FBM-SBM process is aging in a manner similar to SBM. The velocity autocorrelation function (ACF) of particle increments of FBM-SBM exhibits a dip when the parent FBM process is subdiffusive. For both sub- and superdiffusive FBM contributions to the FBM-SBM process, the SBM exponent affects the long-time decay exponent of the ACF. Applications of the FBM-SBM-amalgamated process to the analysis of SPT data are discussed.
A comparative tabulated overview of recent experimental (mainly SPT) and computational datasets amenable to interpretation in terms of FBM-, SBM-, and FBM-SBM-like models of diffusion concludes the presentation. The table compares the statistical aspects of the dynamics of a wide range of biological systems, from nanosized beads in living cells, to chromosomal loci, to water diffusion in the brain, and, finally, to patterns of animal movement.
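The MSD scaling of the compound process can be checked numerically with a minimal sketch. It assumes a Langevin-type picture in which fractional Gaussian noise is modulated by a power-law diffusivity D(t) ∝ t^(α−1); the demonstration below runs in the white-noise limit H = 1/2 (where the fractional noise becomes uncorrelated and the predicted exponent α + 2H − 1 reduces to α), but the generator handles general H via a Cholesky factorization of the noise covariance.

```python
import numpy as np

def fbm_sbm_trajectories(n_steps, n_traj, H, alpha, seed=0):
    """Sample FBM-SBM-type paths: fractional Gaussian noise whose amplitude
    is modulated by a power-law diffusivity D(t) ~ t^(alpha-1); dt = 1.
    A sketch of the compound process, not the paper's exact construction."""
    rng = np.random.default_rng(seed)
    k = np.arange(n_steps)
    # Autocovariance of unit-variance fractional Gaussian noise
    gamma = 0.5 * (np.abs(k + 1)**(2*H) + np.abs(k - 1)**(2*H)
                   - 2.0 * np.abs(k)**(2*H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    fgn = L @ rng.standard_normal((n_steps, n_traj))   # correlated noise
    t = np.arange(1, n_steps + 1)
    incr = np.sqrt(t**(alpha - 1.0))[:, None] * fgn    # SBM-type modulation
    return np.cumsum(incr, axis=0), t

# White-noise limit H = 1/2 with alpha = 1/2: predicted MSD exponent
# alpha + 2H - 1 = 0.5
x, t = fbm_sbm_trajectories(n_steps=200, n_traj=4000, H=0.5, alpha=0.5)
msd = (x**2).mean(axis=1)            # ensemble MSD <x^2(t)>
sel = t >= 20                        # fit the late-time power law
slope = np.polyfit(np.log(t[sel]), np.log(msd[sel]), 1)[0]
```

The fitted log-log slope should sit near the predicted α + 2H − 1 = 0.5, with small deviations from finite-time corrections and Monte Carlo noise.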