Scholars have long recognised the importance of contexts of reception in shaping the integration of immigrants and refugees in a host society. Studies of refugees, in particular, have examined groups where the different dimensions of reception (government, labour market, and ethnic community) have been largely positive. How important is this merging of positive contexts across dimensions of reception? We address this through a comparative study of Vietnamese refugees to West Germany beginning in 1979 and contract workers to East Germany beginning in 1980. These two migration streams converged when Germany reunified in 1990. Drawing on mixed qualitative methods, this paper offers a strategic case for understanding factors that shape the resettlement experiences of Vietnamese refugees and immigrants in Germany. By comparing two migration streams from the same country of origin, but with different backgrounds and contexts of reception, we suggest that ethnic networks may, in time, offset the disadvantages of a negative government reception.
When participants in an experiment have to name pictures while ignoring distractor words superimposed on the picture or presented auditorily (i.e., the picture-word interference paradigm), they take more time when the word to be named (or target) and the distractor word are from the same semantic category (e.g., cat-dog). This experimental effect is known as the semantic interference effect, and it is probably one of the most studied effects in the language production literature. The functional origin of the effect and the exact conditions under which it occurs are, however, still debated. Since Lupker (1979) reported the effect in the first response time experiment about 40 years ago, more than 300 similar experiments have been conducted. The semantic interference effect was replicated in many experiments, but several studies also reported the absence of an effect in a subset of experimental conditions. The aim of the present study is to provide a comprehensive theoretical review of the evidence to date, together with several Bayesian meta-analyses and meta-regressions, to determine the size of the effect and explore the experimental conditions in which it surfaces. The results are discussed in the light of current debates about the functional origin of the semantic interference effect and its implications for our understanding of the language production system.
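The basic pooling step behind a meta-analysis of such effects can be sketched with simple inverse-variance weighting. The study effects and standard errors below are invented for illustration; the study described above uses Bayesian hierarchical models, which this fixed-effect sketch only approximates.

```python
import numpy as np

# Toy fixed-effect meta-analysis of hypothetical semantic interference
# effects (in ms) with their standard errors. Inverse-variance weights
# give precise studies more influence on the pooled estimate.
effects = np.array([24.0, 31.0, 12.0, -3.0, 20.0])  # invented study effects
ses = np.array([8.0, 10.0, 6.0, 9.0, 7.0])          # invented standard errors

weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(round(pooled, 1), round(pooled_se, 1))
```

A Bayesian random-effects model would additionally estimate between-study heterogeneity instead of assuming one common effect.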
This study focuses on the ability of the adult sound system to reorganise as a result of experience. Participants were exposed to existing and novel syllables in either a listening task or a production task over the course of two days. On the third day, they named disyllabic pseudowords while their electroencephalogram was recorded. The first syllable of these pseudowords had either been trained in the auditory modality, trained in production or had not been trained. The EEG response differed between existing and novel syllables for untrained but not for trained syllables, indicating that training novel sound sequences modifies the processes involved in the production of these sequences to make them more similar to those underlying the production of existing sound sequences. Effects of training on the EEG response were observed both after production training and mere auditory exposure.
We elaborate upon the theoretical foundations of a metric temporal extension of Answer Set Programming. In analogy to previous extensions of ASP with constructs from Linear Temporal and Dynamic Logic, we accomplish this in the setting of the logic of Here-and-There and its non-monotonic extension, called Equilibrium Logic. More precisely, we develop our logic on the same semantic underpinnings as its predecessors and thus use a simple time domain of bounded time steps. This allows us to compare all variants in a uniform framework and ultimately combine them in a common implementation.
Eclingo
(2020)
We describe eclingo, a solver for epistemic logic programs under Gelfond's 1991 semantics, built upon the Answer Set Programming system clingo. The input language of eclingo uses the syntax extension capabilities of clingo to define subjective literals that, as usual in epistemic logic programs, allow for checking the truth of a regular literal in all or in some of the answer sets of a program. The eclingo solving process follows a guess-and-check strategy: it first generates potential truth values for subjective literals and, in a second step, checks the obtained result with respect to the cautious and brave consequences of the program. This process is implemented using the multi-shot functionalities of clingo. We have also implemented some optimisations aimed at reducing the search space and, therefore, increasing eclingo's efficiency in some scenarios. Finally, we compare the efficiency of eclingo with two state-of-the-art solvers for epistemic logic programs on a pair of benchmark scenarios and show that eclingo generally outperforms them.
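The cautious/brave check at the heart of the strategy can be illustrated in a few lines. This is a toy sketch, not eclingo's implementation: the answer sets are given directly as a made-up example rather than computed by clingo.

```python
# Toy illustration of subjective literals: "K a" (cautious) holds if
# atom a is in every answer set, "M a" (brave) if it is in at least one.
answer_sets = [{"a", "b"}, {"a", "c"}]  # hypothetical answer sets

def holds_cautiously(atom, answer_sets):
    """K atom: true in all answer sets (cautious consequence)."""
    return all(atom in s for s in answer_sets)

def holds_bravely(atom, answer_sets):
    """M atom: true in some answer set (brave consequence)."""
    return any(atom in s for s in answer_sets)

print(holds_cautiously("a", answer_sets))  # a is in every answer set
print(holds_bravely("b", answer_sets))     # b is in one answer set
print(holds_cautiously("b", answer_sets))  # but not in all of them
```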
In this paper, we study the problem of formal verification for Answer Set Programming (ASP), namely, obtaining a formal proof showing that the answer sets of a given (non-ground) logic program P correctly correspond to the solutions to the problem encoded by P, regardless of the problem instance. To this aim, we use a formal specification language based on ASP modules, so that each module can be proved to capture some informal aspect of the problem in an isolated way. This specification language relies on a novel definition of (possibly nested, first order) program modules that may incorporate local hidden atoms at different levels. Then, verifying the logic program P amounts to proving some kind of equivalence between P and its modular specification.
According to established understanding, deep-water formation in the North Atlantic and Southern Ocean keeps the deep ocean cold, counteracting the downward mixing of heat from the warmer surface waters in the bulk of the world ocean. Therefore, periods of strong Atlantic meridional overturning circulation (AMOC) are expected to coincide with cooling of the deep ocean and warming of the surface waters. It has recently been proposed that this relation may have reversed due to global warming, and that during the past decades a strong AMOC coincides with warming of the deep ocean and relative cooling of the surface, by transporting increasingly warmer waters downward. Here we present multiple lines of evidence, including a statistical evaluation of the observed global mean temperature, ocean heat content, and different AMOC proxies, that lead to the opposite conclusion: even during the current ongoing global temperature rise a strong AMOC warms the surface. The observed weakening of the AMOC has therefore delayed global surface warming rather than enhanced it.
Social Media Abstract:
The overturning circulation in the Atlantic Ocean has weakened in response to global warming, as predicted by climate models. Since it plays an important role in transporting heat, nutrients and carbon, a slowdown will affect global climate processes and the global mean temperature. Scientists have questioned whether this slowdown has worked to cool or warm global surface temperatures. This study analyses the overturning strength and global mean temperature evolution of the past decades and shows that a slowdown acts to reduce the global mean temperature. This is because a slower overturning means less water sinks into the deep ocean in the subpolar North Atlantic. As the surface waters are cold there, the sinking normally cools the deep ocean and thereby indirectly warms the surface, thus less sinking implies less surface warming and has a cooling effect. For the foreseeable future, this means that the slowing of the overturning will likely continue to slightly reduce the effect of the general warming due to increasing greenhouse gas concentrations.
Employing extensive Monte Carlo computer simulations, we investigate in detail the properties of multichain adsorption of charged flexible polyelectrolytes (PEs) onto oppositely charged spherical nanoparticles (SNPs). We quantify the conditions of critical adsorption (the phase-separation curve between the adsorbed and desorbed states of the PEs) as a function of the SNP surface-charge density and the concentration of added salt. We study the degree of fluctuations of the PE-SNP electrostatic binding energy, which we use to quantify the emergence of the phase subtransitions, including a series of partially adsorbed PE configurations. We demonstrate how the phase-separation adsorption-desorption boundary shifts and splits into multiple subtransitions at low-salt conditions, thereby generalizing and extending the results for critical adsorption of a single PE onto the SNP. The current findings are relevant for finite concentrations of PEs around the attracting SNP, such as the conditions for PE adsorption onto globular proteins carrying opposite electric charges.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
From an active labor market policy perspective, start-up subsidies for unemployed individuals are very effective in improving long-term labor market outcomes for participants. From a business perspective, however, the assessment of these public programs is less clear since they might attract individuals with low entrepreneurial abilities and produce businesses with low survival rates and little contribution to job creation, economic growth, and innovation. In this paper, we use a rich data set to compare participants of a German start-up subsidy program for unemployed individuals to a group of regular founders who started from non-unemployment and did not receive the subsidy. The data allow us to analyze their business performance up to 40 months after business formation. We find that formerly subsidized founders lag behind not only in survival and job creation, but especially in innovation activities. The gaps in these business outcomes are relatively constant or even widening over time. Hence, we do not see any indication of catching up in the longer run. While the gap in survival can be entirely explained by initial differences in observable start-up characteristics, the gap in business development remains and seems to be the result of restricted access to capital as well as differential business strategies and dynamics. Considering these conflicting results for the assessment of the subsidy program from an ALMP and business perspective, policy makers need to carefully weigh the costs and benefits of such a strategy to find the right policy mix.
Background:
The literature on start-up subsidies (SUS) for the unemployed finds positive effects on objective outcome measures such as employment or income. However, little is known about effects on subjective well-being of participants. Knowledge about this is especially important because subsidizing the transition into self-employment may have unintended adverse effects on participants’ well-being due to its risky nature and lower social security protection, especially in the long run.
Objective:
We study the long-term effects of SUS on subjective outcome indicators of well-being, as measured by the participants’ satisfaction in different domains. This extends previous analyses of the current German SUS program (“Gründungszuschuss”) that focused on objective outcomes—such as employment and income—and allows us to make a more complete judgment about the overall effects of SUS at the individual level.
Research design:
Having access to linked administrative-survey data providing us with rich information on pretreatment characteristics, we base our analysis on the conditional independence assumption and use propensity score matching to estimate causal effects within the potential outcomes framework. We perform several sensitivity analyses to inspect the robustness of our findings.
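The estimation strategy can be sketched end to end on synthetic data. Everything below is invented for illustration (covariates, selection model, outcome); it shows the two steps the design relies on: estimating a propensity score from pretreatment characteristics, then matching treated units to controls on that score.

```python
import numpy as np

# Minimal sketch of propensity score matching under the conditional
# independence assumption; all data here are synthetic.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))                       # pretreatment covariates
p_true = 1 / (1 + np.exp(-(x[:, 0] - 0.5)))       # true selection model
d = (rng.uniform(size=n) < p_true).astype(float)  # treatment indicator
y = 1.0 * d + x[:, 0] + rng.normal(size=n)        # outcome, true effect = 1

# Estimate the propensity score with a hand-rolled logistic regression.
X = np.column_stack([np.ones(n), x])
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (d - p) / n                  # gradient ascent step
pscore = 1 / (1 + np.exp(-X @ w))

# One-nearest-neighbour matching on the propensity score, then the
# average treatment effect on the treated (ATT).
treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
matches = controls[np.abs(pscore[treated][:, None]
                          - pscore[controls][None, :]).argmin(axis=1)]
att = (y[treated] - y[matches]).mean()
print(round(att, 2))  # should be near the true effect of 1
```

The naive treated-control mean difference would be biased here because x[:, 0] drives both selection and the outcome; matching on the score removes that imbalance.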
Results:
We find long-term positive effects on job satisfaction but negative effects on individuals’ satisfaction with their social security situation. Supplementary findings suggest that the negative effect on satisfaction with social security may be driven by negative effects on unemployment and retirement insurance coverage. Our heterogeneity analysis reveals substantial variation in effects across gender, age groups, and skill levels. Estimates are highly robust.
Forested areas are assumed not to be influenced by erosion processes. However, forest soils of Northern Germany in a hummocky ground moraine landscape can sometimes exhibit a very shallow thickness on crest positions and buried soils on slope positions. The question consequently is: are these ongoing or ancient erosional and depositional processes? Plutonium isotopes act as soil erosion/deposition tracers for recent (last few decades) processes. Here, we quantified the 239+240Pu inventories in a small, forested catchment (ancient forest "Melzower Forst", deciduous trees), which is characterised by a hummocky terrain including a kettle hole. Soil development depths (depth to C horizon) and 239+240Pu inventories along a catena of sixteen different profiles were determined and correlated to relief parameters. Moreover, we compared different modelling approaches to derive erosion rates from Pu data. We find a strong relationship between soil development depths, distance-to-sink and topography along the catena. Fully developed Retisols (thicknesses > 1 m) in the colluvium overlay old land surfaces, as documented by fossil Ah horizons. However, we found no relationship of Pu-based erosion rates to any relief parameter. Instead, 239+240Pu inventories showed a very high local, spatial variability (36-70 Bq m⁻²). Low annual rainfall, spatially distributed interception and stem flow might explain the high variability of the 239+240Pu inventories, giving rise to a patchy input pattern. Different models resulted in quite similar erosion and deposition rates (max: -5 t ha⁻¹ yr⁻¹ to +7.3 t ha⁻¹ yr⁻¹). Although some rates are rather high, the magnitude of soil erosion and deposition, in terms of soil thickness change, is negligible during the last 55 years. The partially high values are an effect of the patchy Pu deposition on the forest floor. This forest has been protected for at least 240 years. Therefore, natural events and anthropogenic activities during medieval times or even earlier must have caused the observed soil pattern, which documents strong erosion and deposition processes.
African weakly electric fish of the mormyrid genus Campylomormyrus generate pulse-type electric organ discharges (EODs) for orientation and communication. Their pulse durations are species-specific and elongated EODs are a derived trait. So far, differential gene expression among tissue-specific transcriptomes across species with different pulses and point mutations in single ion channel genes indicate a relation of pulse duration and electrocyte geometry/excitability. However, a comprehensive assessment of expressed Single Nucleotide Polymorphisms (SNPs) throughout the entire transcriptome of African weakly electric fish, with the potential to identify further genes influencing EOD duration, is still lacking. This is of particular value, as discharge duration is likely based on multiple cellular mechanisms and various genes. Here we provide the first transcriptome-wide SNP analysis of African weakly electric fish species (genus Campylomormyrus) differing by EOD duration to identify candidate genes and cellular mechanisms potentially involved in the determination of an elongated discharge of C. tshokwe. Non-synonymous substitutions specific to C. tshokwe were found in 27 candidate genes with inferred positive selection among Campylomormyrus species. These candidate genes had mainly functions linked to transcriptional regulation, cell proliferation and cell differentiation. Further, by comparing gene annotations between C. compressirostris (ancestral short EOD) and C. tshokwe (derived elongated EOD), we identified 27 GO terms and 2 KEGG pathway categories for which C. tshokwe exhibited a species-specific expressed substitution significantly more frequently than C. compressirostris. The results indicate that transcriptional regulation as well as cell proliferation and differentiation take part in the determination of elongated pulse durations in C. tshokwe. Those cellular processes are pivotal for tissue morphogenesis and might determine the shape of electric organs, supporting the observed correlation between electrocyte geometry/tissue structure and discharge duration. The inferred expressed SNPs and their functional implications are a valuable resource for future investigations on EOD durations.
Pollen records from Siberia are mostly absent in global or Northern Hemisphere synthesis works. Here we present a taxonomically harmonized and temporally standardized pollen dataset that was synthesized using 173 palynological records from Siberia and adjacent areas (northeastern Asia, 42-75 degrees N, 50-180 degrees E). Pollen data were taxonomically harmonized, i.e., the original 437 taxa were assigned to 106 combined pollen taxa. Age-depth models for all records were revised by applying a constant Bayesian age-depth modelling routine. The pollen dataset is available as count data and percentage data in a table format (taxa vs. samples), with age information for each sample. The dataset has relatively few sites covering the last glacial period between 40 and 11.5 ka (calibrated thousands of years before 1950 CE), particularly from the central and western part of the study area. In the Holocene period, the dataset has many sites from most of the area, with the exception of the central part of Siberia. Of the 173 pollen records, 81 % of pollen counts were downloaded from open databases (GPD, EPD, PANGAEA) and 10 % were contributions by the original data gatherers, while a few were digitized from publications. Most of the pollen records originate from peatlands (48 %) and lake sediments (33 %). Most of the records (83 %) have >= 3 dates, allowing the establishment of reliable chronologies. The dataset can be used for various purposes, including pollen data mapping (example maps for Larix at selected time slices are shown) as well as quantitative climate and vegetation reconstructions. The datasets for pollen counts and pollen percentages are available at https://doi.org/10.1594/PANGAEA.898616 (Cao et al., 2019a), also including the site information, data source, original publication, dating data, and the plant functional type for each pollen taxon.
The escape from a potential well is an archetypal problem in the study of stochastic dynamical systems, representing real-world situations from chemical reactions to leaving an established home range in movement ecology. Concurrently, Lévy noise is a well-established approach to model systems characterized by statistical outliers and diverging higher-order moments, ranging from gene expression control to the movement patterns of animals and humans. Here, we study the problem of Lévy noise-driven escape from an almost rectangular, arctangent potential well restricted by two absorbing boundaries, mostly under the action of the Cauchy noise. We unveil analogies of the observed transient dynamics to the general properties of stationary states of Lévy processes in single-well potentials. The first-escape dynamics is shown to exhibit exponential tails. We examine the dependence of the escape on the shape parameters, steepness, and height of the arctangent potential. Finally, we explore in detail the behavior of the probability densities of the first-escape time and the last-hitting point.
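A heavily simplified version of this setup can be simulated directly. The sketch below replaces the arctangent potential with a flat (zero-force) well between two absorbing boundaries, which is an assumption of ours, not the paper's model; it keeps the Cauchy-noise escape mechanism visible, where a single heavy-tailed jump can carry the particle out.

```python
import numpy as np

# Illustrative sketch: first-escape times of a Cauchy-noise-driven
# particle from a flat well with absorbing boundaries at +/-1.
rng = np.random.default_rng(1)
dt, scale, n_paths = 1e-2, 1.0, 1000
escape_times = np.empty(n_paths)

for i in range(n_paths):
    x, t = 0.0, 0.0
    while abs(x) < 1.0:
        # Cauchy (alpha = 1) increments: the scale grows linearly in dt.
        x += scale * dt * rng.standard_cauchy()
        t += dt
    escape_times[i] = t

# Most escapes happen via one long excursion rather than many small
# steps, a hallmark of Levy-noise-driven dynamics.
print(round(escape_times.mean(), 3))
```

For a Cauchy process started at the centre of the interval, the mean first-escape time is known in closed form, so the simulated mean offers a quick sanity check of the discretization.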
Complexes between the anionic polyelectrolyte sodium polyacrylate (PA) and an oppositely charged divalent azobenzene dye are prepared in aqueous solution. Depending on the ratio between dye and polyelectrolyte, stable aggregates with a well-defined spherical shape are observed. Upon exposure of these complexes to UV light, the trans -> cis transition of the azobenzene is excited, resulting in a better solubility of the dye and a dissolution of the complexes. The PA chains reassemble into well-defined aggregates when the dye is allowed to relax back into the trans isomer. Varying the temperature during this reformation step has a direct influence on the final size of the aggregates, making temperature an efficient handle for easily changing the size of the self-assemblies. Application of time-resolved small-angle neutron scattering (SANS) to study the structure formation reveals that the cis -> trans isomerization is the rate-limiting step, followed by a nucleation and growth process.
Preface
(2020)
An independency (cliquy) tree of an n-vertex graph G is a spanning tree of G in which the set of leaves induces an independent set (clique). We study the problems of minimizing or maximizing the number of leaves of such trees, and fully characterize their parameterized complexity. We show that all four variants of deciding if an independency/cliquy tree with at least/most l leaves exists parameterized by l are either Para-NP- or W[1]-hard. We prove that minimizing the number of leaves of a cliquy tree parameterized by the number of internal vertices is Para-NP-hard too. However, we show that minimizing the number of leaves of an independency tree parameterized by the number k of internal vertices has an O*(4^k)-time algorithm and a 2k vertex kernel. Moreover, we prove that maximizing the number of leaves of an independency/cliquy tree parameterized by the number k of internal vertices both have an O*(18^k)-time algorithm and an O(k·2^k) vertex kernel, but no polynomial kernel unless the polynomial hierarchy collapses to the third level. Finally, we present an O(3^n · f(n))-time algorithm to find a spanning tree where the leaf set has a property that can be decided in f(n) time and has minimum or maximum size.
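The defining property is easy to check for a given spanning tree. The snippet below is a small checker for the definition, not one of the paper's algorithms: it tests whether the leaves of a spanning tree are pairwise non-adjacent in the host graph.

```python
from itertools import combinations

def leaves(tree_edges, vertices):
    """Vertices of degree 1 in the tree."""
    deg = {v: 0 for v in vertices}
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return {v for v in vertices if deg[v] == 1}

def is_independency_tree(graph_edges, tree_edges, vertices):
    """True if the tree's leaves induce an independent set in the graph."""
    E = {frozenset(e) for e in graph_edges}
    return all(frozenset((u, v)) not in E
               for u, v in combinations(leaves(tree_edges, vertices), 2))

# The 4-cycle 1-2-3-4 with spanning path 1-2-3-4: the leaves 1 and 4
# are adjacent in the graph, so this is not an independency tree.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
path = [(1, 2), (2, 3), (3, 4)]
print(is_independency_tree(cycle, path, {1, 2, 3, 4}))  # False
```

In fact, every spanning tree of a 4-cycle is a Hamiltonian path whose endpoints are adjacent, so this graph has no independency tree at all.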
In the smallest grammar problem, we are given a word w and we want to compute a preferably small context-free grammar G for the singleton language {w} (where the size of a grammar is the sum of the sizes of its rules, and the size of a rule is measured by the length of its right side). It is known that, for unbounded alphabets, the decision variant of this problem is NP-hard and the optimisation variant does not allow a polynomial-time approximation scheme, unless P = NP. We settle the long-standing open problem whether these hardness results also hold for the more realistic case of a constant-size alphabet. More precisely, it is shown that the smallest grammar problem remains NP-complete (and its optimisation version is APX-hard), even if the alphabet is fixed and has size at least 17. The corresponding reduction is robust in the sense that it also works for an alternative size measure of grammars that is commonly used in the literature (i.e., a size measure also taking the number of rules into account), and it also allows us to conclude that even computing the number of rules required by a smallest grammar is a hard problem. On the other hand, if the number of nonterminals (or, equivalently, the number of rules) is bounded by a constant, then the smallest grammar problem can be solved in polynomial time, which is shown by encoding it as a problem on graphs with interval structure. However, treating the number of rules as a parameter (in terms of parameterised complexity) yields W[1]-hardness. Furthermore, we present an O(3^|w|) exact exponential-time algorithm, based on dynamic programming. These three main questions are also investigated for 1-level grammars, i.e., grammars for which only the start rule contains nonterminals on the right side, thus investigating the impact of the "hierarchical depth" of grammars on the complexity of the smallest grammar problem. In this regard, we obtain similar, but slightly stronger, results for 1-level grammars.
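The size measure used above is concrete enough to compute. The sketch below shows a straight-line grammar for a repetitive word, its size under the sum-of-right-side-lengths measure, and a check that the grammar really derives the word; the example grammar is ours, not from the paper.

```python
# Grammar size = sum of the lengths of the right-hand sides of its rules.
def grammar_size(rules):
    return sum(len(rhs) for rhs in rules.values())

def expand(rules, symbol):
    """Recursively derive the (unique) word of a straight-line grammar."""
    if symbol not in rules:          # terminal symbol
        return symbol
    return "".join(expand(rules, s) for s in rules[symbol])

# Straight-line grammar for w = "abababab": S -> AA, A -> BB, B -> ab.
rules = {"S": ["A", "A"], "A": ["B", "B"], "B": ["a", "b"]}
print(expand(rules, "S"))      # abababab
print(grammar_size(rules))     # 6, versus size 8 for the trivial S -> w
```

Repetition is what makes compression possible here; for an incompressible word the trivial single-rule grammar is already smallest.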
FORGETTER2 protein phosphatase and phospholipase D modulate heat stress memory in Arabidopsis
(2020)
Plants can mitigate environmental stress conditions through acclimation. In the case of fluctuating stress conditions such as high temperatures, maintaining a stress memory enables a more efficient response upon recurring stress. In a genetic screen for Arabidopsis thaliana mutants impaired in the memory of heat stress (HS) we have isolated the FORGETTER2 (FGT2) gene, which encodes a type 2C protein phosphatase (PP2C) of the D-clade. Fgt2 mutants acquire thermotolerance normally; however, they are defective in the memory of HS. FGT2 interacts with phospholipase D alpha 2 (PLDα2), which is involved in the metabolism of membrane phospholipids and is also required for HS memory. In summary, we have uncovered a previously unknown component of HS memory and identified the FGT2 protein phosphatase and PLDα2 as crucial players, suggesting that phosphatidic acid-dependent signaling or membrane composition dynamics underlie HS memory.
Atmospheric dynamics of extreme discharge events from 1979 to 2016 in the southern Central Andes
(2020)
During the South-American Monsoon season, deep convective systems occur at the eastern flank of the Central Andes leading to heavy rainfall and flooding. We investigate the large- and meso-scale atmospheric dynamics associated with extreme discharge events (> 99.9th percentile) observed in two major river catchments meridionally stretching from humid to semi-arid conditions in the southern Central Andes. Based on daily gauge time series and ERA-Interim reanalysis, we made the following three key observations: (1) for the period 1940-2016 daily discharge exhibits more pronounced variability in the southern, semi-arid than in the northern, humid catchments. This is due to a smaller ratio of discharge magnitudes between intermediate (0.2 year return period) and rare events (20 year return period) in the semi-arid compared to the humid areas; (2) The climatological composites of the 40 largest discharge events showed characteristic atmospheric features of cold surges based on 5-day time-lagged sequences of geopotential height at different levels in the troposphere; (3) A subjective classification revealed that 80% of the 40 largest discharge events are mainly associated with the north-northeastward migration of frontal systems and 2/3 of these are cold fronts, i.e. cold surges. This work highlights the importance of cold surges and their related atmospheric processes for the generation of heavy rainfall events and floods in the southern Central Andes.
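Observation (1) rests on comparing event magnitudes at two return periods. The sketch below estimates such a ratio from empirical daily quantiles of two synthetic discharge series (the distributions and parameters are invented, not the gauge records used in the study); a heavier-tailed series yields a smaller intermediate-to-rare ratio, i.e. more pronounced variability.

```python
import numpy as np

# Synthetic "daily discharge" for a humid and a semi-arid catchment:
# the semi-arid series has a much heavier tail (larger sigma).
rng = np.random.default_rng(2)
humid = rng.lognormal(mean=3.0, sigma=0.4, size=200 * 365)
semiarid = rng.lognormal(mean=1.0, sigma=1.2, size=200 * 365)

def event_magnitude(daily_q, return_period_years):
    # A T-year event is exceeded on average once every 365 * T days.
    return np.quantile(daily_q, 1 - 1 / (365 * return_period_years))

ratio_humid = event_magnitude(humid, 0.2) / event_magnitude(humid, 20)
ratio_semiarid = event_magnitude(semiarid, 0.2) / event_magnitude(semiarid, 20)
print(round(ratio_humid, 2), round(ratio_semiarid, 2))
```

The smaller ratio in the heavy-tailed series mirrors the paper's observation that rare events dwarf intermediate ones in the semi-arid catchments.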
Apple replant disease (ARD) is a specific apple-related form of soil fertility loss due to unidentified causes and is also known as soil fatigue. The effect typically appears in monoculture production sites and leads to production decreases of up to 50%, even though the cultivation practice remains the same. However, indicating replant disease is challenging because the particular microbial group responsible for ARD has not been specified. The objective of this study was to establish an algorithm for estimating growth suppression in orchards, irrespective of the unknowns in the complex causal relationship, by assessing plant-soil interaction in the orchard several years after planting. Based on a comparison between no-replant and replant soils, the Alternaria group (Ag) was identified as a soil-fungal population whose abundance responds to replanting. The trunk cross-sectional area (CSA) was found to be a practical and robust parameter representing below-ground and above-ground tree performance. Suppression of tree vigour was therefore calculated by dividing the two inversely related parameters, Q = ln(Ag)/CSA, as a function of soil-fungal proportions and plant responses at the single-tree level. On this basis, five clusters of tree vigour suppression (Q) were defined: (1) no tree vigour suppression/vital (0%), (2) escalating (-38%), (3) strong (-53%), (4) very strong (-62%), and (5) critical (-74%). By calculating Q at the level of the single tree, trees were clustered according to tree vigour suppression. The weighted frequency of clusters in the field allowed replant impact to be quantified at field level. Applied to a case study on sandy brown, dry diluvial soils in Brandenburg, Germany, the calculated tree vigour suppression was 46% compared to the potential tree vigour on no-replant soil in the same field. It is highly likely that the calculated growth suppression corresponds to ARD impact. This result is relevant for identifying functional changes in soil and for monitoring the economic effects of soil fatigue in apple orchards, particularly where long-period crop rotation or plot exchange are improbable.
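The tree-level index is a one-line computation. The sketch below evaluates Q = ln(Ag)/CSA for two hypothetical trees; the input values are invented for illustration, and the paper derives its five suppression clusters empirically rather than from fixed Q thresholds.

```python
import math

def vigour_suppression_index(ag_abundance, csa_cm2):
    """Q = ln(Ag) / CSA: high fungal load on a thin trunk gives high Q."""
    return math.log(ag_abundance) / csa_cm2

# Hypothetical trees: (Alternaria-group abundance, trunk CSA in cm^2).
trees = [(1200.0, 18.0), (45000.0, 6.0)]
for ag, csa in trees:
    print(round(vigour_suppression_index(ag, csa), 3))
```

The log transform keeps the fungal abundance, which spans orders of magnitude, on a scale comparable to the trunk measurement.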
Apple replant disease (ARD) is a specific apple-related form of soil fertility loss due to unidentified causes and is also known as soil fatigue. The effect typically appears in monoculture production sites and leads to production decreases of up to 50%, even though the cultivation practice remains the same. However, an indication of replant disease is challenged by the lack of specification of the particular microbial group responsible for ARD. The objective of this study was to establish an algorithm for estimating growth suppression in orchards irrespective of the unknowns in the complex causal relationship by assessing plant-soil interaction in the orchard several years after planting. Based on a comparison between no-replant and replant soils, the Alternaria group (Ag) was identified as a soil-fungal population responding to replant with abundance. The trunk cross-sectional area (CSA) was found to be a practical and robust parameter representing below-ground and above-ground tree performance. Suppression of tree vigour was therefore calculated by dividing the two inversely related parameters, Q = ln(Ag)/CSA, as a function of soil-fungal proportions and plant responses at the single-tree level. On this basis, five clusters of tree vigour suppression (Q) were defined: (1) no tree vigour suppression/vital (0%), (2) escalating (- 38%), (3) strong (- 53%), (4) very strong (- 62%), and (5) critical (- 74%). By calculating Q at the level of the single tree, trees were clustered according to tree vigour suppression. The weighted frequency of clusters in the field allowed replant impact to be quantified at field level. Applied to a case study on sandy brown, dry diluvial soils in Brandenburg, Germany, the calculated tree vigour suppression was 46% compared to the potential tree vigour on no-replant soil in the same field. 
It is highly likely that the calculated growth suppression corresponds to the ARD impact. This result is relevant for identifying functional changes in soil and for monitoring the economic effects of soil fatigue in apple orchards, particularly where long-period crop rotation or plot exchange is impracticable.
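The single-tree index Q = ln(Ag)/CSA and the clustering described above can be sketched as follows; the Ag abundances, CSA values, and Q cut-offs below are purely illustrative assumptions, not the study's calibrated values:

```python
import math

def suppression_index(ag_abundance, csa):
    """Q = ln(Ag)/CSA: log fungal abundance divided by trunk cross-sectional area."""
    return math.log(ag_abundance) / csa

def cluster(q, cutoffs=(0.3, 0.6, 0.9, 1.2)):
    """Assign a tree to one of five vigour-suppression clusters.

    The cut-offs here are invented for illustration; the study derives its
    own cluster boundaries from the field data.
    """
    for label, c in enumerate(cutoffs, start=1):
        if q < c:
            return label
    return len(cutoffs) + 1

# Hypothetical per-tree measurements: (Ag copy numbers, CSA in cm^2).
trees = [(1e4, 40.0), (5e5, 25.0), (2e6, 12.0)]
qs = [suppression_index(ag, csa) for ag, csa in trees]
labels = [cluster(q) for q in qs]
```

Weighting the frequency of the resulting cluster labels across a field would then give the field-level replant impact, as described in the abstract.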
The bioactive sphingolipid sphingosine 1-phosphate (S1P) has emerged over the last three decades as a main regulator of key cellular processes including cell proliferation, survival, migration and differentiation. A crucial role for this sphingolipid has been recognized in skeletal muscle cell biology both in vitro and in vivo. S1P lyase (SPL) is responsible for the irreversible degradation of S1P and, together with the sphingosine kinases, the S1P-producing enzymes, regulates cellular S1P levels. In this study it is clearly shown that blockade of SPL by pharmacological or RNA interference approaches induces myogenic differentiation of C2C12 myoblasts. Moreover, down-regulation of the specific S1P transporter spinster homolog 2 (Spns2) abrogates the myogenic differentiation brought about by SPL inhibition or down-regulation, pointing at a role of extracellular S1P in the pro-myogenic action induced by SPL blockade. Furthermore, S1P(2) receptor down-regulation was also found to abrogate the pro-myogenic effect of SPL blockade. These results provide further proof that inside-out S1P signaling is critically implicated in skeletal muscle biology and support the concept that specific targeting of SPL could represent an exploitable strategy to treat skeletal muscle disorders.
In this study we investigate two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons by assimilating data from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 into a 3-D diffusion model. In particular, we examine the respective contributions of electromagnetic ion cyclotron (EMIC) wave scattering and magnetopause shadowing for values of the first adiabatic invariant mu ranging from 300 to 3,000 MeV G(-1). We inspect the innovation vector and perform a statistical analysis to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. Our results are in agreement with previous studies that demonstrated the energy dependence of these two mechanisms. We show that EMIC wave scattering tends to dominate loss at lower L shells, and it may amount to between 10%/hr and 30%/hr of the maximum value of phase space density (PSD) over all L shells for fixed first and second adiabatic invariants. On the other hand, magnetopause shadowing is found to deplete electrons across all energies, mostly at higher L shells, resulting in loss from 50%/hr to 70%/hr of the maximum PSD. Nevertheless, during times of enhanced geomagnetic activity, both processes can operate beyond these locations and encompass the entire outer radiation belt.
Performance- and health-related benefits of youth resistance training
There is ample evidence that youth resistance training (RT) is safe, enjoyable, and effective for different markers of performance (e.g., muscle strength, power, linear sprint speed) and health (e.g., injury prevention). Accordingly, the first aim of this narrative review is to present and discuss the relevance of muscle strength for youth physical development. The second purpose is to report evidence on the effectiveness of RT on muscular fitness (muscle strength, power, muscle endurance), movement skill performance, and injury prevention in youth. There is evidence that RT is effective in enhancing measures of muscle fitness in children and adolescents, irrespective of sex. Additionally, numerous studies indicate that RT has positive effects on fundamental movement skills (e.g., jumping, running, throwing) in youth regardless of age, maturity, training status, and sex. Further, irrespective of age, sex, and training status, regular exposure to RT (e.g., plyometric training) decreases the risk of sustaining injuries in youth. This implies that RT should be a meaningful element of youths' exercise programming. This has been acknowledged by global (e.g., World Health Organization) and national (e.g., National Strength and Conditioning Association) health- and performance-related organizations, which is why they recommend performing RT as an integral part of weekly exercise programs to promote muscular strength and fundamental movement skills and to prevent injuries in youth.
Background
Change-of-direction (CoD) speed is a physical fitness attribute in many field-based team and individual sports. To date, no systematic review with meta-analysis available has examined the effects of resistance training (RT) on CoD speed in youth and adults.
Objective
To aggregate the effects of RT on CoD speed in youth and young physically active and athletic adults, and to identify the key RT programme variables for training prescription.
Data Sources
A systematic literature search was conducted with PubMed, Web of Science, and Google Scholar, with no date restrictions, up to October 2019, to identify studies related to the effects of RT on CoD speed.
Study Eligibility Criteria
Only controlled studies with baseline and follow-up measures were included if they examined the effects of RT (i.e., muscle actions against external resistances) on CoD speed in healthy youth (8-18 years) and young physically active/athletic male or female adults (19-28 years).
Study Appraisal and Synthesis Methods
A random-effects model was used to calculate weighted standardised mean differences (SMD) between intervention and control groups. In addition, an independent single training factor analysis (i.e., RT frequency, intensity, volume) was undertaken. Further, to verify whether any RT variable moderated the effects on CoD speed, a multivariate random-effects meta-regression was conducted. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale.
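The random-effects pooling of standardised mean differences can be illustrated with a DerSimonian-Laird estimator; the per-study SMDs and variances below are hypothetical, and the review's actual software and settings are not specified here:

```python
import math

def dersimonian_laird(smds, variances):
    """Pool standardised mean differences under a random-effects model.

    DerSimonian-Laird: estimate the between-study variance tau^2 from
    Cochran's Q, then re-weight each study by 1 / (v_i + tau^2).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * d for wi, d in zip(w, smds)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, smds))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(smds) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * d for wi, d in zip(w_re, smds)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study effects (negative SMD = faster CoD time).
smds = [-1.1, -0.6, -0.9, -0.4]
variances = [0.05, 0.08, 0.06, 0.1]
pooled, ci = dersimonian_laird(smds, variances)
```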
Results
Fifteen studies, comprising 19 experimental groups, were included. The methodological quality of the studies was acceptable with a median PEDro score of 6. There was a significant large effect size of RT on CoD speed across all studies (SMD = - 0.82 [- 1.14 to - 0.49]). Subgroup analyses showed large effect sizes on CoD speed in males (SMD = - 0.95) contrasting with moderate improvements in females (SMD = - 0.60). There were large effect sizes on CoD speed in children (SMD = - 1.28) and adolescents (SMD = - 1.21) contrasting with moderate effects in adults (SMD = - 0.63). There was a moderate effect in elite athletes (SMD = - 0.69) contrasting with a large effect in subelite athletes (SMD = - 0.86). Differences between subgroups were not statistically significant. Similar improvements were observed regarding the effects of independently computed training variables. In terms of RT frequency, our results indicated that two sessions per week induced large effects on CoD speed (SMD = - 1.07) while programmes with three sessions resulted in moderate effects (SMD = - 0.53). For total training intervention duration, we observed large effects for <= 8 weeks (SMD = - 0.81) and > 8 weeks (SMD = - 0.85). For single session duration, we found large effects for <= 30 min and >= 45 min (both SMD = - 1.00). In terms of number of training sessions, we identified large effects for <= 16 sessions (SMD = - 0.83) and > 16 sessions (SMD = - 0.81). For training intensity, we found moderate effects for light-to-moderate (SMD = - 0.76) and vigorous-to-near maximal intensities (SMD = - 0.77). With regards to RT type, we observed large effects for free weights (SMD = - 0.99) and machine-based training (SMD = - 0.80). For combined free weights and machine-based training, moderate effects were identified (SMD = - 0.77). The meta-regression outcomes showed that none of the included training variables significantly predicted the effects of RT on CoD speed (R-2 = 0.00).
Conclusions
RT seems to be an effective means to improve CoD speed in youth and young physically active and athletic adults. Our findings indicate that the impact of RT on CoD speed may be more prominent in males than in females and in youth than in adults. Additionally, independently computed single-factor analyses for different training variables showed that higher compared with lower RT intensities, frequencies, and volumes do not appear to confer an advantage on the magnitude of CoD speed improvements. In terms of RT type, similar improvements were observed following machine-based and free weights training.
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layout (word-wise and letter-wise mirrored-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccades lengths) in reading manipulated texts compared to normal texts were reported in earlier work, little is known about changes of oculomotor targeting observed in within-word landing positions under the above text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of our manipulated text conditions with reversed letter order (against overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions, we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading texts from right to left in the German language (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, the experimental data are consistent with the model. Second, the model makes specific predictions of the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that it is a more precise sensory likelihood that can explain the reduction of both random and systematic error components.
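The core of such a Bayesian saccade-planning account, combining a Gaussian prior over landing positions with a Gaussian sensory likelihood, can be sketched as follows; all numeric values are illustrative assumptions, not fitted parameters from the study:

```python
def posterior_landing(prior_mu, prior_var, like_mu, like_var):
    """Gaussian prior x Gaussian likelihood: the posterior mean is a
    precision-weighted average, so a more precise sensory likelihood
    pulls the planned landing position towards the sensory estimate."""
    w_prior = 1.0 / prior_var
    w_like = 1.0 / like_var
    mu = (w_prior * prior_mu + w_like * like_mu) / (w_prior + w_like)
    var = 1.0 / (w_prior + w_like)
    return mu, var

# Hypothetical values: prior biased towards a habitual saccade target (0.3),
# likelihood centred on the intended word centre (0.5).
mu_imprecise, _ = posterior_landing(0.3, 0.2, 0.5, 0.2)   # equal precisions
mu_precise, _ = posterior_landing(0.3, 0.2, 0.5, 0.05)    # sharper likelihood
```

Sharpening the likelihood moves the posterior mean towards the word centre, which is the mechanism the abstract invokes to explain the reduced systematic error.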
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach et al. (27th symposium on discrete algorithms (SODA), pp 1306-1325, 2016) introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both properties and exploit them to design faster algorithms for a number of classical graph problems. We complement their work by showing that some well-studied random graph models exhibit both of the mentioned PLB properties. PLB-U and PLB-N hold with high probability for Chung-Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part we study three classical NP-hard optimization problems on PLB networks. It is known that on general graphs with maximum degree Delta, a greedy algorithm, which chooses nodes in the order of their degree, only achieves an Omega(ln Delta)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an Omega(Delta)-approximation for Maximum Independent Set. We prove that the PLB-U property with beta > 2 suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that these problems are APX-hard even if PLB-U, PLB-N, and an additional power-law lower bound on the degree distribution hold. Hence, a PTAS cannot be expected unless P = NP. Furthermore, we prove that all three problems are in MAX SNP if the PLB-U property holds.
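The degree-ordered greedy discussed above can be sketched for Minimum Vertex Cover; the graph below is a hypothetical toy instance, not one of the paper's benchmark networks:

```python
def greedy_vertex_cover(adjacency):
    """Greedy Minimum Vertex Cover: repeatedly take a currently
    highest-degree vertex, add it to the cover, and delete its
    incident edges. This is the degree-ordered greedy that achieves
    a constant-factor approximation on PLB-U graphs with beta > 2."""
    adj = {u: set(vs) for u, vs in adjacency.items()}
    cover = []
    while any(adj.values()):
        u = max(adj, key=lambda v: len(adj[v]))
        cover.append(u)
        for v in adj.pop(u):
            if v in adj:
                adj[v].discard(u)
    return cover

# Hypothetical graph: star centre 0 touching 1..4, plus an extra edge (1, 2).
graph = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0}}
cover = greedy_vertex_cover(graph)
```

On a power-law-bounded degree sequence the high-degree vertices picked first already cover most edges, which is the intuition behind the constant-factor guarantee.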
The study of the Cauchy problem for solutions of the heat equation in a cylindrical domain with data on the lateral surface by the Fourier method raises the problem of calculating the inverse Laplace transform of the entire function cos root z. This problem has no solution in the standard theory of the Laplace transform. We give an explicit formula for the inverse Laplace transform of cos root z using the theory of analytic functionals. This solution is well suited to efficiently developing the regularization of solutions to Cauchy problems for parabolic equations with data on noncharacteristic surfaces.
Gadolinium-doped ceria or gadolinium-stabilized ceria (GDC) is an important technical material due to its ability to conduct O2- ions, e.g., used in solid oxide fuel cells operated at intermediate temperature as an electrolyte, diffusion barrier, and electrode component. We have synthesized Ce1-xGdxO2-y:Eu3+ (0 <= x <= 0.4) nanoparticles (11-15 nm) using a scalable spray pyrolysis method, which allows the continuous large-scale technical production of such materials. Introducing Eu3+ ions in small amounts into ceria and GDC as spectroscopic probes can provide detailed information about the atomic structure and local environments and allows us to monitor small structural changes. This study presents a novel approach to structurally elucidate europium-doped Ce1-xGdxO2-y:Eu3+ nanoparticles by way of Eu3+ spectroscopy, processing the spectroscopic data with the multiway decomposition method parallel factor (PARAFAC) analysis. In order to perform the deconvolution of spectra, data sets of excitation wavelength, emission wavelength, and time are required. Room temperature, time-resolved emission spectra recorded at lambda(ex) = 464 nm show that Gd3+ doping results in significantly altered emission spectra compared to pure ceria. The PARAFAC analysis for the pure ceria samples reveals a high-symmetry species (which can also be probed directly via the CeO2 charge transfer band) and a low-symmetry species. The GDC samples yield two low-symmetry spectra in the same experiment. High-resolution emission spectra recorded under cryogenic conditions after probing the D-5(0)-F-7(0) transition at lambda(ex) = 575-583 nm revealed additional variation in the low-symmetry Eu3+ sites in pure ceria and GDC. 
The total luminescence spectra of CeO2-y:Eu3+ showed Eu3+ ions located in at least three slightly different coordination environments with the same fundamental symmetry, whereas the overall hypsochromic shift and increased broadening of the D-5(0)-F-7(0) excitation in the GDC samples, as well as the broadened spectra after deconvolution point to less homogeneous environments. The data of the Gd3+-containing samples indicates that the average charge density around the Eu3+ ions in the lattice is decreased with increasing Gd3+ and oxygen vacancy concentration. For reference, the Judd-Ofelt parameters of all spectra were calculated. PARAFAC proves to be a powerful tool to analyze lanthanide spectra in crystalline solid materials, which are characterized by numerous Stark transitions and where measurements usually yield a superposition of different contributions to any given spectrum.
Porous ceramic diesel particulate filters (DPFs) are extruded products that possess macroscopically anisotropic mechanical and thermal properties. This anisotropy is caused by both morphological features (mostly the orientation of porosity) and crystallographic texture. We systematically studied those two aspects in two aluminium titanate ceramic materials of different porosity using mercury porosimetry, gas adsorption, electron microscopy, X-ray diffraction, and X-ray refraction radiography. We found that a lower porosity content implies a larger isotropy of both the crystal texture and the porosity orientation. We also found that, analogous to cordierite, crystallites align with their axis of negative thermal expansion along the extrusion direction. However, unlike what was found for cordierite, the aluminium titanate crystallite form is such that a more pronounced (0 0 2) texture along the extrusion direction implies porosity aligned perpendicular to it.
Pavlovian-to-instrumental transfer (PIT) tasks examine the influence of Pavlovian stimuli on ongoing instrumental behaviour. Previous studies reported associations between a strong PIT effect, high-risk drinking and alcohol use disorder. This study investigated whether susceptibility to interference between Pavlovian and instrumental control is linked to risky alcohol use in a community sample of 18-year-old male adults. Participants (N = 191) were instructed to 'collect good shells' and 'leave bad shells' during the presentation of appetitive (monetary reward), aversive (monetary loss) or neutral Pavlovian stimuli. We compared instrumental error rates (ER) and functional magnetic resonance imaging (fMRI) brain responses between the congruent and incongruent conditions, as well as among high-risk and low-risk drinking groups. On average, individuals showed a substantial PIT effect, that is, increased ER when Pavlovian cues and instrumental stimuli were in conflict compared with congruent trials. Neural PIT correlates were found in the ventral striatum and the dorsomedial and lateral prefrontal cortices (lPFC). Importantly, high-risk drinking was associated with a stronger behavioural PIT effect, a decreased lPFC response and an increased neural response in the ventral striatum on the trend level. Moreover, high-risk drinkers showed weaker connectivity from the ventral striatum to the lPFC during incongruent trials. Our study links interference during PIT to drinking behaviour in healthy, young adults. High-risk drinkers showed higher susceptibility to Pavlovian cues, especially when they conflicted with instrumental behaviour, indicating lower interference control abilities. Increased activity in the ventral striatum (bottom-up), decreased lPFC response (top-down), and their altered interplay may contribute to poor interference control in the high-risk drinkers.
This work introduces an embedded approach for the prediction of Solar Particle Events (SPEs) in space applications by combining the real-time Soft Error Rate (SER) measurement with SRAM-based detector and the offline trained machine learning model. The proposed approach is intended for the self-adaptive fault-tolerant multiprocessing systems employed in space applications. With respect to the state-of-the-art, our solution allows for predicting the SER 1 h in advance and fine-grained hourly tracking of SER variations during SPEs as well as under normal conditions. Therefore, the target system can activate the appropriate mechanisms for radiation hardening before the onset of high radiation levels. Based on the comparison of five different machine learning algorithms trained with the public space flux database, the preliminary results indicate that the best prediction accuracy is achieved with the recurrent neural network (RNN) with long short-term memory (LSTM).
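A minimal sketch of the hourly windowing and one-hour-ahead prediction setup described above, using ordinary least squares as a lightweight stand-in for the trained LSTM (the SER trace, its slow ramp mimicking an SPE onset, and the window width are invented for illustration):

```python
import numpy as np

def make_windows(series, width):
    """Sliding windows: predict the value one step (one hour) ahead
    from the preceding `width` hourly SER readings."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

# Hypothetical hourly soft-error-rate trace with a slow ramp (an SPE onset).
rng = np.random.default_rng(0)
ser = np.linspace(1.0, 3.0, 200) + 0.05 * rng.standard_normal(200)

X, y = make_windows(ser.tolist(), width=6)
# Ordinary least squares with an intercept, standing in for the LSTM.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
pred_next = np.r_[ser[-6:], 1.0] @ coef  # SER forecast one hour ahead
```

In the proposed system the forecast would trigger radiation-hardening mechanisms before the predicted rise materializes; the real model is an RNN with LSTM trained on the public space flux database.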
Dysregulation of physiological stress reactivity plays a key role in the development and relapse risk of alcohol dependence. This article reviews studies investigating physiological responses to experimentally induced acute stress in patients with alcohol dependence. A systematic search of electronic databases yielded 3641 articles, and after screening 62 articles were included in our review. Studies are analyzed based on stress types (i.e., social stress tasks and nonsocial stress tasks) and physiological markers (i.e., the nervous system, the endocrine system, somatic responses and the immune system). In studies applying nonsocial stress tasks, alcohol-dependent patients were reported to show a blunted stress response compared with healthy controls in the majority of studies applying markers of adrenocorticotropic hormone and cortisol. In studies applying social stress tasks, findings are inconsistent, with less than half of the studies reporting altered physiological stress responses in patients. We discuss the impact of duration of abstinence, comorbidities, baseline physiological arousal and intervention on the discrepancy of study findings. Furthermore, we review evidence for an association between blunted physiological stress responses and the relapse risk among patients with alcohol dependence.
Efficient and low-cost anode materials for the sodium-ion battery are highly desired to enable more economic energy storage. Effects of an ultrathin carbon nitride film deposited on a copper metal electrode are presented. The combination of effects shows an unusually high capacity to store sodium metal. The g-C3N4 film is as thin as 10 nm and can be fabricated by an efficient, facile, and general chemical-vapor deposition method. A high reversible capacity of formally up to 51 Ah g(-1) indicates that the Na is not only stored in the carbon nitride as such, but that the carbon nitride also activates the metal for reversible Na deposition, while at the same time forming a solid electrolyte interface layer that avoids direct contact of the metallic phase with the liquid electrolyte.
Temperature changes and variations in pore fluid salinity may negatively affect the permeability of clay-bearing sandstones with implications for natural fluid flow and geotechnical applications alike. In this study these factors are investigated for a sandstone dominated by illite as the clay phase. Systematic long-term flow-through experiments were conducted and complemented with comprehensive microstructural investigations and the application of Derjaguin-Landau-Verwey-Overbeek (DLVO) theory to explain mechanistically the observed permeability changes. Initially, sample permeability was not affected by low pore fluid salinity indicating strong attraction of the illite particles to the pore walls as supported by electron microprobe analysis (EMPA). Increasing temperature up to 145 degrees C resulted in an irreversible permeability decrease by 1.5 orders of magnitude regardless of the pore fluid composition (i.e., deionized water and 2 M NaCl solution). Subsequently diluting the high salinity pore fluid to below 0.5 M yielded an additional permeability decline by 1.5 orders of magnitude, both at 145 degrees C and after cooling to room temperature. By applying scanning electron microscopy (SEM) and mercury intrusion porosimetry (MIP) thermo-mechanical pore throat closure and illite particle migration were identified as independently operating mechanisms responsible for observed permeability changes during heating and dilution, respectively. These observations indicate that permeability of illite-bearing sandstones will be impaired by heating and exposure to low salinity pore fluids. In addition, chemically induced permeability variations proved to be path dependent with respect to the applied succession of fluid salinity changes.
Fractures efficiently affect fluid flow in geological formations, and thereby determine mass and energy transport in reservoirs, which are not least exploited for economic resources. In this context, their response to mechanical and thermal changes, as well as fluid-rock interactions, is of paramount importance. In this study, a two-stage flow-through experiment was conducted on a pure quartz sandstone core of low matrix permeability, containing one single macroscopic tensile fracture. In the first short-term stage, the effects of mechanical and hydraulic aperture on pressure and temperature cycles were investigated. The purpose of the subsequent intermittent-flow long-term (140 days) stage was to constrain the evolution of the geometrical and hydraulic fracture properties resulting from pressure solution. Deionized water was used as the pore fluid, and permeability, as well as the effluent Si concentrations, were systematically measured. Overall, hydraulic aperture was shown to be significantly less affected by pressure, temperature and time, in comparison to mechanical aperture. During the long-term part of the experiment at 140 degrees C, the effluent Si concentrations likely reached a chemical equilibrium state within less than 8 days of stagnant flow, and exceeded the corresponding hydrostatic quartz solubility at this temperature. This implies that the pressure solution was active at the contacting fracture asperities, both at 140 degrees C and after cooling to 33 degrees C. The higher temperature yielded a higher dissolution rate and, consequently, a faster attainment of chemical equilibrium within the contact fluid. X-ray mu CT observations evidenced a noticeable increase in fracture contact area ratio, which, in combination with theoretical considerations, implies a significant decrease in mechanical aperture. In contrast, the sample permeability, and thus the hydraulic fracture aperture, virtually did not vary. 
In conclusion, pressure solution-induced fracture aperture changes are affected by the degree of time-dependent variations in pore fluid composition. In contrast to the present case of a quasi-closed system with mostly stagnant flow, in an open system with continuous once-through fluid flow, the activity of the pressure solution may be amplified due to the persistent fluid-chemical nonequilibrium state, thus possibly enhancing aperture and fracture permeability changes.
The higher education structure in Malaysia has experienced significant changes since the implementation of the Private Higher Educational Institutions Act of 1996. The unprecedented expansion of the higher education sector and the increasing autonomy conferred on universities have created a huge demand for competent university leadership that supports the development of higher education in Malaysia. This article discusses the very first national multiplication training in Malaysia in 2014 and analyses such outcomes as the identification of good practices for future initiatives and applications in university leadership training.
We investigate the initiation and early evolution of 12 solar eruptions, including six active-region hot channel and six quiescent filament eruptions, which were well observed by the Solar Dynamics Observatory, as well as by the Solar Terrestrial Relations Observatory for the latter. The sample includes one failed eruption and 11 coronal mass ejections, with velocities ranging from 493 to 2140 km s(-1). A detailed analysis of the eruption kinematics yields the following main results. (1) The early evolution of all events consists of a slow-rise phase followed by a main-acceleration phase, the height-time profiles of which differ markedly and can be best fit, respectively, by a linear and an exponential function. This indicates that different physical processes dominate in these phases, which is at variance with models that involve a single process. (2) The kinematic evolution of the eruptions tends to be synchronized with the flare light curve in both phases. The synchronization is often but not always close. A delayed onset of the impulsive flare phase is found in the majority of the filament eruptions (five out of six). This delay and its trend to be larger for slower eruptions favor ideal MHD instability models. (3) The average decay index at the onset heights of the main acceleration is close to the threshold of the torus instability for both groups of events (although it is based on a tentative coronal field model for the hot channels), suggesting that this instability initiates and possibly drives the main acceleration.
Five known compounds (1-5) were isolated from the extract of Mundulea sericea leaves. Similar investigation of the roots of this plant afforded an additional three known compounds (6-8). The structures were elucidated using NMR spectroscopic and mass spectrometric analyses. The absolute configuration of 1 was established using ECD spectroscopy. In an antiplasmodial activity assay, compound 1 showed good activity, with an IC50 of 2.0 mu M against the chloroquine-resistant W2 and 6.6 mu M against the chloroquine-sensitive 3D7 strains of Plasmodium falciparum. Some of the compounds were also tested for antileishmanial activity. Dehydrolupinifolinol (2) and sericetin (5) were active against drug-sensitive Leishmania donovani (MHOM/IN/83/AG83) with IC50 values of 9.0 and 5.0 mu M, respectively. In a cytotoxicity assay, lupinifolin (3) showed significant activity on BEAS-2B (IC50 4.9 mu M) and HepG2 (IC50 10.8 mu M) human cell lines. All the other compounds showed low cytotoxicity (IC50 > 30 mu M) against human lung adenocarcinoma cells (A549), human liver cancer cells (HepG2), lung/bronchus epithelial cells (virus-transformed) (BEAS-2B) and immortal human hepatocytes (LO2).
We consider several examples of dynamical systems demonstrating overlapping attractor and repeller. These systems are constructed via introducing controllable dissipation to prototypic models with chaotic dynamics (Anosov cat map, Chirikov standard map, and incompressible three-dimensional flow of the ABC-type on a three-torus) and ergodic non-chaotic behavior (skew-shift map). We employ the Kantorovich-Rubinstein-Wasserstein distance to characterize the difference between the attractor and the repeller, in dependence on the dissipation level.
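One of the prototypic systems named above, the Chirikov standard map with controllable dissipation, can be iterated as follows; the parameter values are illustrative, and the damping form (1 - gamma)p is one common way to introduce dissipation, not necessarily the authors' exact construction:

```python
import math

def dissipative_standard_map(x, p, k=6.0, gamma=0.1):
    """One step of the Chirikov standard map with a dissipation term:
    the factor (1 - gamma) damps the momentum; gamma = 0 recovers the
    conservative chaotic map."""
    p_new = (1.0 - gamma) * p + k * math.sin(x)
    x_new = (x + p_new) % (2.0 * math.pi)
    return x_new, p_new

# Iterate past a transient so that the orbit settles onto the attractor;
# reversing time (or the sign of gamma) would instead trace the repeller.
x, p = 0.5, 0.2
orbit = []
for i in range(2000):
    x, p = dissipative_standard_map(x, p)
    if i >= 1000:
        orbit.append((x, p))
```

Comparing the empirical measures of such forward and time-reversed orbits (e.g., via the Kantorovich-Rubinstein-Wasserstein distance) is the kind of attractor-repeller comparison the abstract describes.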
What Colin Reynolds could tell us about nutrient limitation, N:P ratios and eutrophication control
Colin Reynolds exquisitely consolidated our understanding of driving forces shaping phytoplankton communities and those setting the upper limit to biomass yield, with limitation typically shifting from light in winter to phosphorus in spring. Nonetheless, co-limitation is frequently postulated from enhanced growth responses to enrichments with both N and P or from N:P ranging around the Redfield ratio, concluding a need to reduce both N and P in order to mitigate eutrophication. Here, we review the current understanding of limitation through N and P and of co-limitation. We conclude that Reynolds is still correct: (i) Liebig's law of the minimum holds and reducing P is sufficient, provided concentrations achieved are low enough; (ii) analyses of nutrient limitation need to exclude evidently non-limiting situations, i.e. where soluble P exceeds 3-10 mu g/l, dissolved N exceeds 100-130 mu g/l and total P and N support high biomass levels with self-shading causing light limitation; (iii) additionally decreasing N to limiting concentrations may be useful in specific situations (e.g. shallow waterbodies with high internal P and pronounced denitrification); (iv) management decisions require local, situation-specific assessments. The value of research on stoichiometry and co-limitation lies in promoting our understanding of phytoplankton ecophysiology and community ecology.
Stellar atmosphere modeling and chemical abundance determinations require knowledge of spectral line shapes. Spectral lines of chromium in various ionization stages are common in stellar spectra, but detailed data on Stark broadening for them are scarce. Recently we reported the first calculations of Stark widths for several 4s-4p transitions of doubly ionized chromium, employing the Modified Semi-Empirical (MSE) approach. In this work we present applications of the data to spectrum synthesis of Cr III lines in the ultraviolet region of hot stars. The Atlas9 model atmosphere code and the line-formation code Surface were used under the assumption of local thermodynamic equilibrium. The improvements from adopting the MSE broadening tables instead of approximate Stark broadening coefficients are investigated for a total of 56 Cr III lines visible in HST/STIS spectra of the B3 subgiant star Iota Herculis and the subdwarf B star Feige 66.
Abdominal and general adiposity are independently associated with mortality, but there is no consensus on how best to assess abdominal adiposity. We compared the ability of alternative waist indices to complement body mass index (BMI) when assessing all-cause mortality. We used data from 352,985 participants in the European Prospective Investigation into Cancer and Nutrition (EPIC) and Cox proportional hazards models adjusted for other risk factors. During a mean follow-up of 16.1 years, 38,178 participants died. Combining in one model BMI and a strongly correlated waist index altered the association patterns with mortality, to a predominantly negative association for BMI and a stronger positive association for the waist index, while combining BMI with the uncorrelated A Body Shape Index (ABSI) preserved the association patterns. Sex-specific cohort-wide quartiles of waist indices correlated with BMI could not separate high-risk from low-risk individuals within the underweight (BMI < 18.5 kg/m²) or obese (BMI ≥ 30 kg/m²) categories, while the highest quartile of ABSI separated 18-39% of the individuals within each BMI category, who had a 22-55% higher risk of death. In conclusion, only a waist index independent of BMI by design, such as ABSI, complements BMI and enables efficient risk stratification, which could facilitate personalisation of screening, treatment and monitoring.
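The abstract contrasts BMI-correlated waist indices with ABSI but does not spell the index out. As defined by Krakauer & Krakauer (2012), ABSI is waist circumference divided by BMI^(2/3)·height^(1/2), with lengths in metres. A minimal illustrative sketch of that formula (not the EPIC analysis code):

```python
def absi(waist_m: float, height_m: float, weight_kg: float) -> float:
    """A Body Shape Index: ABSI = WC / (BMI^(2/3) * height^(1/2)).

    Waist circumference and height in metres, weight in kg. By
    construction, ABSI is approximately uncorrelated with BMI."""
    bmi = weight_kg / height_m ** 2
    return waist_m / (bmi ** (2 / 3) * height_m ** 0.5)

# Example: waist 0.90 m, height 1.75 m, weight 75 kg -> ABSI ~ 0.081
value = absi(0.90, 1.75, 75.0)
```

Because the BMI term is raised to 2/3 rather than 1, scaling a person's weight up or down moves BMI strongly but ABSI only weakly, which is what makes the two indices complementary in a joint model.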
Obesity is a risk factor for several major cancers. Associations of weight change in middle adulthood with cancer risk, however, are less clear. We examined the association of change in weight and body mass index (BMI) category during middle adulthood with 42 cancers, using multivariable Cox proportional hazards models in the European Prospective Investigation into Cancer and Nutrition cohort. Of 241 323 participants (31% men), 20% lost and 32% gained weight (>0.4 to 5.0 kg/year) during 6.9 years (average). During 8.0 years of follow-up after the second weight assessment, 20 960 incident cancers were ascertained. Independent of baseline BMI, weight gain (per one kg/year increment) was positively associated with cancer of the corpus uteri (hazard ratio [HR] = 1.14; 95% confidence interval: 1.05-1.23). Compared to stable weight (±0.4 kg/year), weight gain (>0.4 to 5.0 kg/year) was positively associated with cancers of the gallbladder and bile ducts (HR = 1.41; 1.01-1.96), postmenopausal breast (HR = 1.08; 1.00-1.16) and thyroid (HR = 1.40; 1.04-1.90). Compared to maintaining normal weight, maintaining overweight or obese BMI (World Health Organisation categories) was positively associated with most obesity-related cancers. Compared to maintaining the baseline BMI category, weight gain to a higher BMI category was positively associated with cancers of the postmenopausal breast (HR = 1.19; 1.06-1.33), ovary (HR = 1.40; 1.04-1.91), corpus uteri (HR = 1.42; 1.06-1.91), kidney (HR = 1.80; 1.20-2.68) and pancreas in men (HR = 1.81; 1.11-2.95). Losing weight to a lower BMI category, however, was inversely associated with cancers of the corpus uteri (HR = 0.40; 0.23-0.69) and colon (HR = 0.69; 0.52-0.92). Our findings support avoiding weight gain and encouraging weight loss in middle adulthood.
The 2020 European Bioinformatics Community for Mass Spectrometry (EuBIC-MS) Developers’ meeting was held from January 13th to January 17th 2020 in Nyborg, Denmark. Among the participants were scientists as well as developers working in the field of computational mass spectrometry (MS) and proteomics. The 4-day program was split between introductory keynote lectures and parallel hackathon sessions. During the latter, the participants developed bioinformatics tools and resources addressing outstanding needs in the community. The hackathons allowed less experienced participants to learn from more advanced computational MS experts, and to actively contribute to highly relevant research projects. We successfully produced several new tools that will be useful to the proteomics community by improving data analysis as well as facilitating future research. All keynote recordings are available on https://doi.org/10.5281/zenodo.3890181.
Although a relatively large number of studies on acquired language impairments have tested the case of derivational morphology, none of these have specifically investigated whether there are differences in how prefixed and suffixed derived words are impaired. Based on linguistic and psycholinguistic considerations on prefixed and suffixed derived words, differences in how these two types of derivations are processed, and consequently impaired, are predicted. In the present study, we investigated the errors produced in reading aloud simple, prefixed, and suffixed words by three German individuals with agrammatic aphasia (NN, LG, SA). We found that, while NN and LG produced similar numbers of errors with prefixed and suffixed words, SA showed a selective impairment for prefixed words. Furthermore, NN and SA produced more errors specifically involving the affix with prefixed words than with suffixed words. We discuss our findings in terms of relative position of stem and affix in prefixed and suffixed words, as well as in terms of specific properties of prefixes and suffixes.
African languages have rarely been the subject of psycholinguistic experimentation. The current study employs a masked visual priming experiment to investigate morphological processing in a Bantu language, Setswana. Our study takes advantage of the rich system of prefixes in Bantu languages, which offers the opportunity of testing morphological priming effects from prefixed inflected words and directly comparing them to priming effects from prefixed derived words on the same targets. We found significant priming effects of similar magnitude for both prefixed inflected and derived word forms, which were clearly dissociable from prime-target relatedness in both meaning and (orthographic) form. These findings provide support for a (possibly universal) mechanism of morphological decomposition applied during early visual word recognition that segments both (prefixed) inflected and derived word forms into their morphological constituents.
Droughts in tropical South America have an imminent and severe impact on the Amazon rainforest and affect the livelihoods of millions of people. Extremely dry conditions in Amazonia have been previously linked to sea surface temperature (SST) anomalies in the adjacent tropical oceans. Although the sources and impacts of such droughts have been widely studied, establishing reliable multi-year lead statistical forecasts of their occurrence is still an ongoing challenge. Here, we further investigate the relationship between SST and rainfall anomalies using a complex network approach. We identify four ocean regions which exhibit the strongest overall SST correlations with central Amazon rainfall, including two particularly prominent regions in the northern and southern tropical Atlantic. Based on the time-dependent correlation between SST anomalies in these two regions alone, we establish a new early-warning method for droughts in the central Amazon basin and demonstrate its robustness in hindcasting past major drought events with lead-times up to 18 months.
This study examines the processing of morphologically complex words focusing on how morphological (in addition to orthographic and semantic) factors affect bilingual word recognition. We report findings from a large experimental study with groups of bilingual (Turkish/German) speakers using the visual masked-priming technique. We found morphologically mediated effects on the response speed and the inter-individual variability within the bilingual participant group. We conclude that the grammar (qua morphological parsing) not only enhances speed of processing in bilingual language processing but also yields more uniform performance and thereby constrains variability within a group of otherwise heterogeneous individuals.
The purpose of this paper is to build an algebraic framework suited to regularize branched structures emanating from rooted forests and which encodes the locality principle. This is achieved by means of the universal properties in the locality framework of properly decorated rooted forests. These universal properties are then applied to derive the multivariate regularization of integrals indexed by rooted forests. We study their renormalization, along the lines of Kreimer's toy model for Feynman integrals.
Arborified zeta values are defined as iterated series and integrals using the universal properties of rooted trees. This approach makes it possible to study their convergence domain and to relate them to multiple zeta values. Generalisations to rooted trees of the stuffle and shuffle products are defined and studied. It is further shown that arborified zeta values are algebra morphisms for these new products on trees.
Temporal variation of natural light sources such as airglow limits the ability of night light sensors to detect changes in small sources of artificial light (such as villages). This study presents a method for correcting for this effect globally, using the satellite radiance detected from regions without artificial light emissions. We developed a routine to define an approximate grid of locations worldwide that do not have regular light emission. We apply this method with a 5° equally spaced global grid (total of 2016 individual locations), using data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day-Night Band (DNB). This code could easily be adapted for other future global sensors. The correction reduces the standard deviation of data in the Earth Observation Group monthly DNB composites by almost a factor of two. The code and datasets presented here are available under an open license by GFZ Data Services, and are implemented in the Radiance Light Trends web application.
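The correction described above can be pictured as subtracting, per composite, a background radiance estimated from the emission-free grid locations. A toy sketch of that idea only — the estimator, names and units here are assumptions, not the published routine (which is available from GFZ Data Services):

```python
import numpy as np

def background_correction(composite: np.ndarray, dark_mask: np.ndarray) -> np.ndarray:
    """Subtract the median radiance over known emission-free locations
    (dark_mask, boolean) from a monthly DNB composite, removing the
    shared natural-light component (e.g. airglow variation)."""
    background = np.median(composite[dark_mask])
    return composite - background

# Toy composite: uniform natural background of 0.4 (arbitrary radiance
# units) plus one artificially lit pixel in the centre.
img = np.full((3, 3), 0.4)
img[1, 1] = 25.0
mask = np.ones((3, 3), dtype=bool)
mask[1, 1] = False          # exclude the lit pixel from the dark set
corrected = background_correction(img, mask)
```

After correction the dark pixels sit at zero regardless of how the natural background fluctuates from month to month, so any residual temporal change at a village-sized source reflects artificial light.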
We compare the toxicity of microplastics, microfibres and nanoplastics in mussels. Mussels (Mytilus spp.) were exposed to 500 ng mL⁻¹ of 20 µm polystyrene microplastics, 10 × 30 µm polyamide microfibres or 50 nm polystyrene nanoplastics for 24 h or 7 days. Biomarkers of immune response, oxidative stress response, lysosomal destabilisation and genotoxic damage were measured in haemolymph, digestive gland and gills. Microplastics and microfibres were observed in the digestive glands, with significantly higher plastic concentrations after 7 days of exposure (ANOVA, P < 0.05). Nanoplastics had a significant effect on hyalinocyte-granulocyte ratios (ANOVA, P < 0.05), indicative of a heightened immune response. SOD activity was significantly increased following 24 h of exposure to plastics (two-way ANOVA, P < 0.05), but returned to normal levels after 7 days of exposure. No evidence of lysosomal destabilisation or genotoxic damage was observed for any form of plastic. The study highlights how particle size is a key factor in plastic particulate toxicity.
Width control on event-scale deposition and evacuation of sediment in bedrock-confined channels
(2020)
In mixed bedrock-alluvial rivers, the response of the system to a flood event can be affected by a number of factors, including coarse sediment availability in the channel, sediment supply from the hillslopes and upstream, flood sequencing and coarse sediment grain size distribution. However, the impact of along-stream changes in channel width on bedload transport dynamics remains largely unexplored. We combine field data, theory and numerical modelling to address this gap. First, we present observations from the Daan River gorge in western Taiwan, where the river flows through a 1 km long, 20-50 m wide bedrock gorge bounded upstream and downstream by wide braidplains. We documented two flood events during which coarse sediment evacuation and redeposition appear to cause changes of up to several metres in channel bed elevation. Motivated by this case study, we examined the relationships between discharge, channel width and bedload transport capacity, and show that for a given slope narrow channels transport bedload more efficiently than wide ones at low discharges, whereas wider channels are more efficient at high discharges. We used the model sedFlow to explore this effect, running a random sequence of floods through a channel with a narrow gorge section bounded upstream and downstream by wider reaches. Channel response to imposed floods is complex, as high and low discharges drive different spatial patterns of erosion and deposition, and the channel may experience both of these regimes during the peak and recession periods of each flood. Our modelling suggests that width differences alone can drive substantial variations in sediment flux and bed response, without the need for variations in sediment supply or mobility. The fluctuations in sediment transport rates that result from width variations can lead to intermittent bed exposure, driving incision in different segments of the channel during different portions of the hydrograph.
In precipitation nowcasting, it is common to track the motion of precipitation in a sequence of weather radar images and to extrapolate this motion into the future. The total error of such a prediction consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed isolating the extent of location errors, making it difficult to specifically improve nowcast models with regard to location prediction. In this paper, we introduce a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time Δt ahead of the forecast time t corresponds to the Euclidean distance between the observed and the predicted feature locations at t + Δt. Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites from the German Weather Service. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion from t - 1 to t (LK-Lin1) and from t - 4 to t (LK-Lin4), while the other two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear (DIS-Lin1) and semi-Lagrangian (DIS-Rot1) extrapolation. Of these four models, DIS-Lin1 and LK-Lin4 turned out to be the most skilful with regard to the prediction of feature location, while we also found that model skill depends strongly on the sinuosity of the observed tracks. The dataset of 376,125 detected feature tracks in 2016 is openly available to foster the improvement of location prediction in extrapolation-based nowcasting models.
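The location-error definition in this abstract is straightforward to compute. A minimal sketch with a hypothetical one-step linear extrapolator in the spirit of LK-Lin1 — the array shapes and function names are assumptions, not the authors' implementation:

```python
import numpy as np

def location_error(observed: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Euclidean distance between observed and predicted feature
    locations; both arrays have shape (n_leads, 2) holding (x, y)."""
    return np.linalg.norm(observed - predicted, axis=1)

def linear_extrapolation(track: np.ndarray, n_leads: int) -> np.ndarray:
    """Continue the last observed displacement (t-1 -> t) linearly for
    n_leads steps, LK-Lin1-style; track has shape (n_obs, 2)."""
    step = track[-1] - track[-2]
    return track[-1] + step * np.arange(1, n_leads + 1)[:, None]

track = np.array([[0.0, 0.0], [1.0, 1.0]])     # observed corner positions
pred = linear_extrapolation(track, n_leads=2)  # predicts [[2, 2], [3, 3]]
obs = np.array([[2.0, 2.0], [3.0, 4.0]])       # what actually happened
errors = location_error(obs, pred)             # errors per lead time
```

In this toy case the feature keeps its course for the first lead time (error 0) and then deviates (error 1), which is exactly the kind of sinuosity-dependent degradation the study reports.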
In this paper we examine the relationship between the default risk of banks and sovereigns, i.e. the 'doom-loop'. Specifically, we try to assess the effectiveness of the implementation of the new recovery and resolution framework in the European Union. We use a panel with daily data on European banks and sovereigns ranging from 2012 to 2016 in order to test the effects of the Bank Recovery and Resolution Directive on the two-way feedback process. We find that there was a pronounced feedback loop between banks and sovereigns from 2012 to 2014. However, after the implementation of the European Banking Union, in 2015/2016, the magnitude of the doom-loop decreased and the spillovers were no longer statistically significant. Furthermore, our results suggest that the implementation of the new resolution framework is a suitable candidate to explain this finding. Overall, the results are robust across several specifications.
Adverse environmental conditions are detrimental to plant growth and development. Acclimation to abiotic stress conditions involves activation of signaling pathways which often results in changes in gene expression via networks of transcription factors (TFs). Mediator is a highly conserved co-regulator complex and an essential component of the transcriptional machinery in eukaryotes. Some Mediator subunits have been implicated in stress-responsive signaling pathways; however, much remains unknown regarding the role of plant Mediator in abiotic stress responses. Here, we use RNA-seq to analyze the transcriptional response of Arabidopsis thaliana to heat, cold and salt stress conditions. We identify a set of common abiotic stress regulons and describe the sequential and combinatorial nature of TFs involved in their transcriptional regulation. Furthermore, we identify stress-specific roles for the Mediator subunits MED9, MED16, MED18 and CDK8, and putative TFs connecting them to different stress signaling pathways. Our data also indicate different modes of action for subunits or modules of Mediator at the same gene loci, including a co-repressor function for MED16 prior to stress. These results illuminate a poorly understood but important player in the transcriptional response of plants to abiotic stress and identify target genes and mechanisms as a prelude to further biochemical characterization.
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage. The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
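Per SR overpass, the quality-weighted bias estimate described above reduces to a weighted average of matched reflectivity differences. A minimal sketch of that idea — the function name, inputs and sign convention are assumptions, and the actual procedure of Crisologo et al. (2018) also involves matching SR and GR sample volumes:

```python
import numpy as np

def calibration_bias(gr_dbz, sr_dbz, quality):
    """Quality-weighted mean difference between matched ground-radar and
    spaceborne-radar reflectivities (dBZ). Weights in [0, 1] may encode
    beam blockage and path-integrated attenuation; a weight of 0
    discards a match entirely."""
    gr, sr, w = map(np.asarray, (gr_dbz, sr_dbz, quality))
    return float(np.average(gr - sr, weights=w))

# A fully blocked sample (weight 0) does not influence the estimate:
biased = calibration_bias([30.0, 25.0], [28.0, 28.0], [1.0, 0.0])   # 2.0 dB
plain = calibration_bias([30.0, 32.0], [28.0, 28.0], [1.0, 1.0])    # 3.0 dB
```

Down-weighting attenuated or blocked matches keeps the bias estimate from being dragged toward artificially low GR reflectivities, which is why the extension to PIA matters at C and X band.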
CuI has been recently rediscovered as a p-type transparent conductor with a high figure of merit. Even though many metal iodides are hygroscopic, the effect of moisture on the electrical properties of CuI has not been clarified. In this work, we observe a 2-fold increase in the conductivity of CuI after exposure to ambient humidity for 5 h, followed by slight long-term degradation. Simultaneously, the work function of CuI decreases by almost 1 eV, which can explain the large spread in the previously reported work function values. The conductivity increase is partially reversible and is maximized at intermediate humidity levels. On the basis of the large intragrain mobility measured by THz spectroscopy, we suggest that hydration of grain boundaries may be beneficial for the overall hole mobility.
In the stable marriage problem, a set of men and a set of women are given, each of whom has a strictly ordered preference list over the acceptable agents in the opposite class. A matching is called stable if it is not blocked by any pair of agents, who mutually prefer each other to their respective partner. Ties in the preferences allow for three different definitions for a stable matching: weak, strong and super-stability. Besides this, acceptable pairs in the instance can be restricted in their ability of blocking a matching or being part of it, which again generates three categories of restrictions on acceptable pairs. Forced pairs must be in a stable matching, forbidden pairs must not appear in it, and lastly, free pairs cannot block any matching.
Our computational complexity study targets the existence of a stable solution for each of the three stability definitions, in the presence of each of the three types of restricted pairs. We solve all cases that were still open. As a byproduct, we also derive that the maximum size weakly stable matching problem is hard even in very dense graphs, which may be of independent interest.
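For background, the base problem — strict, complete preferences, no ties and no restricted pairs — is solved in polynomial time by the classic Gale-Shapley deferred-acceptance algorithm; it is the variants with ties and forced, forbidden or free pairs studied here that make the complexity landscape interesting. A minimal sketch of the base algorithm only (not the constructions from this paper):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance for the base stable marriage problem with
    strict, complete preference lists. Returns a man -> woman matching
    with no pair that mutually prefers each other to their partners."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}   # next woman each man proposes to
    free = list(men_prefs)                 # currently unmatched men
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m                 # w was free: tentatively accept
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up, jilting her partner
            engaged[w] = m
        else:
            free.append(m)                 # w rejects the proposal
    return {m: w for w, m in engaged.items()}

# 'x' prefers 'b', so 'a' is rejected there and ends up with 'y'.
matching = gale_shapley({'a': ['x', 'y'], 'b': ['x', 'y']},
                        {'x': ['b', 'a'], 'y': ['a', 'b']})
```

With ties, this simple proposal loop no longer distinguishes weak, strong and super-stability, which is precisely where the hardness results of the abstract come into play.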
An unceasing problem of our prevailing society is the fair division of goods. The problem of proportional cake cutting focuses on dividing a heterogeneous and divisible resource, the cake, among n players who value pieces according to their own measure function. The goal is to assign each player a not necessarily connected part of the cake that the player evaluates at least as much as her proportional share. In this article, we investigate the problem of proportional division with unequal shares, where each player is entitled to receive a predetermined portion of the cake. Our main contribution is threefold. First we present a protocol for integer demands, which delivers a proportional solution in fewer queries than all known protocols. By giving a matching lower bound, we then show that our protocol is asymptotically the fastest possible. Finally, we turn to irrational demands and solve the proportional cake cutting problem by reducing it to the same problem with integer demands only. All results remain valid in a highly general cake cutting model, which can be of independent interest.
Matching participants (as suggested by Hope, 2015) may be one promising option for research on a potential bilingual advantage in executive functions (EF). In this study we first compared performances in three EF-tasks of a naturally heterogeneous sample of monolingual (n = 69, age = 9.0 y) and multilingual children (n = 57, age = 9.3 y). Secondly, we meticulously matched participants pairwise to obtain two highly homogeneous groups to rerun our analysis and investigate a potential bilingual advantage. The initially disadvantaged multilinguals (regarding socioeconomic status and German lexicon size) performed worse in updating and response inhibition, but similarly in interference inhibition. This indicates that superior EF compensate for the detrimental effects of the background variables. After matching children pairwise on age, gender, intelligence, socioeconomic status and German lexicon size, performances became similar except for interference inhibition. Here, an advantage for multilinguals in the form of globally reduced reaction times emerged, indicating a bilingual executive processing advantage.
Number processing induces spatial attention shifts to the left or right side for small or large numbers, respectively. This spatial-numerical association (SNA) extends to mental calculation, such that subtractions and additions induce left or right biases, respectively. However, the time course of activating SNAs during mental calculation is unclear. Here, we addressed this issue by measuring visual position discrimination during auditory calculation. Thirty-four healthy adults listened in each trial to five successive elements of arithmetic facts (first operand, operator, second operand, equals sign and result) and verbally classified their correctness. After each element (except for the result), a fixation dot moved equally often to either the left or right side and participants pressed left or right buttons to discriminate its movement direction (four times per trial). First and second operand magnitude (small/large), operation (addition/subtraction), result correctness (right/wrong) and movement direction (left/right) were balanced across 128 trials. Manual reaction times of dot movement discriminations were considered in relation to previous arithmetic elements. We found no evidence of early attentional shifts after first operand and operator presentation. Discrimination performance was modulated consistent with SNAs after the second operand, suggesting that attentional shifts occur once there is access to all elements necessary to complete an arithmetic operation. Such late-occurring attention shifts may reflect a combination of multiple element-specific biases and confirm their functional role in mental calculation.
The Quaternary volcanic fields of the Eifel (Rhineland-Palatinate, Germany) had their last eruptions less than 13,000 years ago. Recently, deep low-frequency (DLF) earthquakes were detected beneath one of the volcanic fields showing evidence of ongoing magmatic activity in the lower crust and upper mantle. In this work, seismic wide- and steep-angle experiments from 1978/1979 and 1987/1988 are compiled, partially reprocessed and interpreted, together with other data, to better determine the location, size, shape, and state of magmatic reservoirs in the Eifel region near the crust-mantle boundary. We discuss seismic evidence for a low-velocity gradient layer from 30-36 km depth, which has developed over a large region under all Quaternary volcanic fields of the Rhenish Massif and can be explained by the presence of partial melts. We show that the DLF earthquakes connect the postulated upper mantle reservoir with the upper crust at a depth of about 8 km, directly below one of the youngest phonolitic volcanic centers in the Eifel, where CO₂ originating from the mantle is massively outgassing. A bright spot in the West Eifel between 6 and 10 km depth represents a Tertiary magma reservoir and is seen as a model for a differentiated reservoir beneath the young phonolitic center today. We find that the distribution of volcanic fields is controlled by the Variscan lithospheric structures and terrane boundaries as a whole, which is reflected by an offset of the Moho depth, a wedge-shaped transparent zone in the lower crust and the system of thrusts over about 120 km length.
Of city and village mice
(2020)
A fundamental question of current ecological research concerns the drivers and limits of species responses to human-induced rapid environmental change (HIREC). Behavioural responses to HIREC are a key component because behaviour links individual responses to population and community changes. Ongoing fast urbanisation provides an ideal setting to test the functional role of behaviour for responses to HIREC. Consistent behavioural differences between conspecifics (animal personality) may be important determinants or constraints of animals’ adaptation to urban habitats. We tested whether urban and rural populations of small mammals differ in mean trait expression, flexibility and repeatability of behaviours associated with risk-taking and exploratory tendencies. Using a standardized behavioural test in the field, we quantified spatial exploration and boldness of striped field mice (Apodemus agrarius, n = 96) from nine sub-populations, presenting different levels of urbanisation and anthropogenic disturbance. The level of urbanisation positively correlated with boldness, spatial exploration and behavioural flexibility, with urban dwellers being bolder, more explorative and more flexible in some traits than rural conspecifics. Thus, individuals seem to distribute in a non-random way in response to human disturbance based on their behavioural characteristics. Animal personality might therefore play a key role in successful coping with the challenges of HIREC.
Focusing on the phase-coexistence region in Langmuir films of poly(L-lactide), we investigated changes in nonequilibrated morphologies and the corresponding features of the isotherms induced by different experimental pathways of lateral compression and expansion. In this coexistence region, the surface pressure Π was larger than the expected equilibrium value and was found to increase upon compression, i.e., it exhibited a nonhorizontal plateau. As shown earlier by using microscopic techniques [Langmuir 2019, 35, 6129-6136], in this plateau region well-ordered mesoscopic clusters coexisted with a surrounding matrix phase. We succeeded in reducing Π either by slowing down the rate of compression or by increasing the waiting time after stopping the movement of the barriers, which allowed for relaxations in the coexistence region. Intriguingly, the most significant pressure reduction was observed when recompressing a film that had already been compressed and expanded, provided the recompression was started from an area value smaller than the one anticipated for the onset of the coexistence region. This observation suggests a "self-seeding" behavior, i.e., pre-existing nuclei allowed the nucleation step to be circumvented. The decrease in Π was accompanied by a transformation of the initially formed metastable mesoscopic clusters into a thermodynamically favored filamentary morphology. Our results demonstrate that it is practically impossible to obtain fully equilibrated coexisting phases in a Langmuir polymer film, neither under conditions of extremely slow continuous compression nor for long waiting times at a constant area in the coexistence region which allow for reorganization.
Contemporary drought impact assessments have been constrained by data availability, leading to an incomplete representation of impact trends. To address this, we present a novel method for the comprehensive and near-real-time monitoring of socio-economic drought impacts based on media reports. We tested its application using the case of the exceptional 2018/19 German drought. By employing text mining techniques, 4839 impact statements were identified, relating to the livestock, agriculture, forestry, fires, recreation, energy and transport sectors. An accuracy of 95.6% was obtained for their automatic classification. Furthermore, high levels of spatial and temporal precision were found when validating our results against independent data (e.g. soil moisture, average precipitation, population interest in droughts, crop yield and forest fire statistics). The findings highlight the applicability of media data for rapidly and accurately monitoring the propagation of drought consequences over time and space. We anticipate our method to serve as a starting point for an impact-based early warning system.
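A minimal sketch of the kind of supervised text classification used for such impact statements: a plain Naive Bayes classifier in pure Python. The sector labels and example sentences are invented for illustration; the study's actual pipeline and its 95.6% accuracy are not reproduced here.

```python
# Toy Naive Bayes classifier for drought impact statements.
# Labels and training sentences are hypothetical examples, not study data.
from collections import Counter, defaultdict
import math, re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter()
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.class_counts[label] += 1
            for w in tokenize(doc):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, doc):
        best, best_lp = None, -math.inf
        total = sum(self.class_counts.values())
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(doc):  # Laplace-smoothed word likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train = ["harvest losses in wheat fields", "low river levels halt cargo ships",
         "forest fire risk warning issued", "irrigation banned for farms"]
labels = ["agriculture", "transport", "fires", "agriculture"]
clf = NaiveBayes()
clf.fit(train, labels)
print(clf.predict("cargo barge stuck due to low water"))  # -> transport
```

In practice such a classifier would be trained on thousands of labeled media snippets and evaluated with held-out data, as in the study's validation against independent statistics.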
The European river lamprey Lampetra fluviatilis and the European brook lamprey Lampetra planeri (Bloch, 1784) are classified as a paired species, characterized by notably different life histories but morphological similarities. Previous work has further shown limited genetic differentiation between these two species at the mitochondrial DNA level. Here, we expand on this previous work, which focused on lamprey species from the Iberian Peninsula in the south and mainland Europe in the north, by sequencing three mitochondrial marker regions of Lampetra individuals from five river systems in Ireland and five in southern Italy. Our results corroborate the previously identified pattern of genetic diversity for the species pair. We also show significant genetic differentiation between Irish and mainland European lamprey populations, suggesting another ichthyogeographic district distinct from those previously defined. Finally, our results stress the importance of southern Italian L. planeri populations, which maintain several private alleles and notable genetic diversity.
This article merges theoretical literature on non-controlling minority shareholdings (NCMS) into a coherent model to study the effects of NCMS on competition and collusion. The model encompasses both the case of a common owner holding shares of rival firms and the case of cross-ownership among rivals. We find that, by softening competition, NCMS weaken the sustainability of collusion in a greater variety of situations than earlier literature indicated. Such effects exist, in particular, in the presence of an effective competition authority.
Several numerical tools designed to overcome the challenges of smoothing in a non-linear and non-Gaussian setting are investigated for a class of particle smoothers. The considered family of smoothers is induced by the class of linear ensemble transform filters, which contains classical filters such as the stochastic ensemble Kalman filter, the ensemble square root filter, and the recently introduced nonlinear ensemble transform filter. Furthermore, the ensemble transform particle smoother is introduced and highlighted in particular, as it is consistent in the particle limit and does not require assumptions about the family of the posterior distribution. The linear update pattern of the considered class of linear ensemble transform smoothers allows one to implement important supplementary techniques such as adaptive spread corrections, hybrid formulations, and localization in order to facilitate their application to complex estimation problems. These additional features are derived and numerically investigated for a sequence of increasingly challenging test problems.
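The linear update pattern of this filter family can be illustrated with its simplest classical member, the stochastic ensemble Kalman filter. Below is a toy sketch of its analysis step for a scalar state; the setup and all numbers are illustrative, not from the paper.

```python
# Stochastic ensemble Kalman filter analysis step, scalar state (toy example).
import random

def enkf_update(ensemble, y, obs_var, rng):
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    gain = var / (var + obs_var)                            # Kalman gain
    # each member assimilates its own perturbed observation
    return [x + gain * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]  # forecast ensemble ~ N(0,1)
posterior = enkf_update(prior, 2.0, 1.0, rng)
post_mean = sum(posterior) / len(posterior)
# with equal prior and observation variance, the posterior mean moves
# roughly halfway from the prior mean (~0) toward the observation y = 2
print(post_mean)
```

The smoothers studied in the paper generalize this linear transform of the ensemble to the smoothing problem and to non-Gaussian settings.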
The objective of the study is to develop a better understanding of the capillary circulation in contracting muscles. Ten subjects were measured during a submaximal fatiguing isometric muscle action by use of the O2C spectrophotometer. In all measurements the capillary-venous oxygen saturation of hemoglobin (SvO2) decreases immediately after the start of loading and levels off into a steady state. However, two different patterns (type I and type II) emerged. They differ in the extent of deoxygenation (–10.37 ±2.59 percentage points (pp) vs. –33.86 ±17.35 pp, P = .008) and in the behavior of the relative hemoglobin amount (rHb). Type I reveals a positive rank correlation of SvO2 and rHb (ρ = 0.735, P < .001), whereas a negative rank correlation (ρ = –0.522, P < .001) occurred in type II, since rHb decreases until a reversal point, then increases to on average 13% above the baseline value and levels off into a steady state. The results reveal that a homeostasis of oxygen delivery and consumption during isometric muscle actions is possible. A rough distinction between two types of regulation is suggested.
Digital transformation (DT) is a major challenge for traditional companies. Although the term DT is relatively new, its substance is not: a whole stream of research has examined the relationship between DT and firm performance, with contradictory findings. Most of these studies have chosen a linear correlational approach and have not analyzed the holistic interplay of DT dimensions leading to firm performance. This applies especially to the mature financial services industry and the future perspectives of traditional financial service providers (FSP). Hence, it remains an open question for both research and practice which DT configurations have a positive impact on firm performance. Against this background, the aim of this exploratory study is to examine how DT dimensions are systemically connected to the firm performance of incumbent FSP. Drawing on a qualitative-empirical research approach with case data from 83 FSP, we identify digital configurations along different levels of firm performance. Our findings suggest an evolution of digital configurations of FSP, leading to five empirical standard types, of which only one has managed to establish a profound basis of DT.
Partial clones
(2020)
A set C of operations defined on a nonempty set A is said to be a clone if C is closed under composition of operations and contains all projection mappings. The concept of a clone is one of the main concepts of algebra and has important applications in computer science. A clone can also be regarded as a many-sorted algebra where the sorts are the n-ary operations defined on the set A for all natural numbers n >= 1 and the operations are the so-called superposition operations S^n_m for natural numbers m, n >= 1 together with the projection operations as nullary operations. Clones generalize monoids of transformations defined on the set A and satisfy three clone axioms. The most important axiom is the superassociative law, a generalization of the associative law. If the superposition operations are partial, i.e. not everywhere defined, one obtains, instead of the many-sorted clone algebra, partial many-sorted algebras, the partial clones. Linear terms, linear tree languages and linear formulas form partial clones. In this paper, we give a survey of partial clones and their properties.
Recruitment of mesenchymal stem cells (MSCs) to damaged tissue is a crucial step in modulating tissue regeneration. Here, the migration of human adipose-derived stem cells (hADSCs) responding to thermal and mechanical stimuli was investigated using programmable shape-memory polymer actuator (SMPA) sheets. When the temperature is changed repetitively between 10 and 37 °C, the SMPA sheets reversibly switch between two different pre-defined shapes, like an artificial muscle. Compared to non-actuating sheets, cells cultured on the programmed actuating sheets showed a higher migration velocity (0.32 ± 0.1 vs. 0.57 ± 0.2 µm/min). These results could motivate the next scientific steps, for example, investigating the migration potential of MSCs pre-loaded in organoids.
Stem cells are capable of sensing and processing environmental inputs, converting this information to output a specific cell lineage through signaling cascades. Despite the combinatorial nature of mechanical, thermal, and biochemical signals, these stimuli have typically been decoupled and applied independently, requiring continuous regulation by controlling units. We employ a programmable polymer actuator sheet to autonomously synchronize thermal and mechanical signals applied to mesenchymal stem cells (MSCs). Using a grid on its underside, the shape change of the polymer sheet, as well as cell morphology, calcium (Ca2+) influx, and focal adhesion assembly, could be visualized and quantified. This paper gives compelling evidence that the temperature sensing and mechanosensing of MSCs are interconnected via intracellular Ca2+. Up-regulated Ca2+ levels lead to a remarkable alteration of histone H3K9 acetylation and activation of osteogenesis-related genes. The interplay of physical, thermal, and biochemical signaling was utilized to accelerate cell differentiation toward the osteogenic lineage. The approach of programmable bioinstructivity provides a fundamental principle for functional biomaterials exerting multifaceted stimuli on differentiation programs. Technological impact is expected in the tissue engineering of periosteum for treating bone defects.
Background
Parasitoid wasps have fascinating life cycles and play an important role in trophic networks, yet little is known about their genome content and function. Parasitoids that infect aphids are an important group with the potential for biological control. Their success depends on adapting to develop inside aphids and overcoming both host aphid defenses and their protective endosymbionts.
Results
We present the de novo genome assemblies, detailed annotation, and comparative analysis of two closely related parasitoid wasps that target pest aphids: Aphidius ervi and Lysiphlebus fabarum (Hymenoptera: Braconidae: Aphidiinae). The genomes are small (139 and 141 Mbp) and the most AT-rich reported thus far for any arthropod (GC content: 25.8 and 23.8%). This nucleotide bias is accompanied by skewed codon usage and is stronger in genes with adult-biased expression. AT-richness may be the consequence of reduced genome size, a near absence of DNA methylation, and energy efficiency. We identify missing desaturase genes, whose absence may underlie mimicry in the cuticular hydrocarbon profile of L. fabarum. We highlight key gene groups including those underlying venom composition, chemosensory perception, and sex determination, as well as potential losses in immune pathway genes.
Conclusions
These findings are of fundamental interest for insect evolution and biological control applications. They provide a strong foundation for further functional studies into coevolution between parasitoids and their hosts. Both genomes are available at https://bipaa.genouest.org.
The transfer of Microcystis aeruginosa from freshwater to estuaries has been described worldwide, and salinity is reported as the main factor controlling the expansion of M. aeruginosa to coastal environments. Analyzing the expression levels of targeted genes and employing both targeted and non-targeted metabolomic approaches, this study investigated the effect of a sudden salt increase on the physiological and metabolic responses of two toxic M. aeruginosa strains, PCC 7820 and PCC 7806, isolated from fresh and brackish waters, respectively. Supported by differences in gene expression and metabolic profiles, salt tolerance was found to be strain specific. An increase in salinity decreased the growth of M. aeruginosa, with a lesser impact on the brackish strain. The production of intracellular microcystin variants in response to salt stress correlated well with the growth rate for both strains. Furthermore, the release of microcystins into the surrounding medium only occurred at the highest salinity treatment, when cell lysis occurred. This study suggests that the physiological responses of M. aeruginosa involve the accumulation of common metabolites but that the intraspecific salt tolerance is based on the accumulation of specific metabolites. While one of these was determined to be sucrose, many others remain to be identified. Taken together, these results provide evidence that M. aeruginosa is relatively salt tolerant in the mesohaline zone and that microcystin (MC) release only occurs when the capacity of the cells to deal with the salt increase is exceeded.
Background: The enzyme-linked immunosorbent assay (ELISA) is an indispensable tool for clinical diagnostics to identify or differentiate diseases such as autoimmune illnesses, but also to monitor their progression or control the efficacy of drugs. One use case of ELISA is to differentiate between different states (e.g. healthy vs. diseased). Another goal is to quantitatively assess the biomarker in question, like autoantibodies. Thus, the ELISA technology is used for the discovery and verification of new autoantibodies, too. Of key interest, however, is the development of immunoassays for the sensitive and specific detection of such biomarkers at early disease stages. Therefore, users have to deal with many parameters, such as buffer systems or antigen-autoantibody interactions, to successfully establish an ELISA. Often, fine-tuning like testing of several blocking substances is performed to yield high signal-to-noise ratios.
Methods: We developed an ELISA to detect IgA and IgG autoantibodies against chitinase-3-like protein 1 (CHI3L1), a newly identified autoantigen in inflammatory bowel disease (IBD), in the serum of control and disease groups (n = 23, respectively). Microwell plates with different surface modifications (PolySorp and MaxiSorp coating) were tested to detect reproducibility problems.
Results: We found a significant impact of the surface properties of the microwell plates. IgA antibody reactivity was significantly lower, since it was in the range of background noise, when measured on MaxiSorp coated plates (p < 0.0001). The IgG antibody reactivity did not differ on the diverse plates, but the plate surface had a significant influence on the test result (p = 0.0005).
Conclusion: With this report, we want to draw readers' attention to the properties of solid phases and their effects on the detection of autoantibodies by ELISA. We want to sensitize the reader to the fact that the choice of the wrong plate can lead to a false negative test result, which in turn has serious consequences for the discovery of autoantibodies.
A comprehensive photometric and spectroscopic analysis of the variable TYC 5532-1333-1 (TYC), along with an investigation of its orbital period variation, is presented for the first time. The B- and V-band photometric study indicates that TYC is an intermediate contact binary with a degree of contact of 34 per cent and a mass ratio of ~0.24. The equivalent widths derived from the spectroscopic study of the Hα and Na I lines reveal phase-dependent variation and mutual correlation. Using the available times of minimum light, an investigation of the orbital period variation shows a long-term decrease at a rate of 3.98 × 10⁻⁶ d yr⁻¹. Possible causes of this decline in the orbital period are angular momentum loss and a quasi-sinusoidal variation due to the light-time effect, probably caused by a third-body companion. The minimum mass of the third body (M₃) was derived to be 0.65 M⊙. The presented study is an attempt to evaluate and understand the evolutionary state of this hitherto neglected contact binary.
Landscapes in high northern latitudes are assumed to be highly sensitive to future global change, but the rates and long-term trajectories of changes are rather uncertain. In the boreal zone, fires are an important factor in climate-vegetation interactions and biogeochemical cycles. Fire regimes are characterized by small, frequent, low-intensity fires within summergreen boreal forests dominated by larch, whereas evergreen boreal forests dominated by spruce and pine burn large areas less frequently but at higher intensities. Here, we explore the potential of the monosaccharide anhydrides (MA) levoglucosan, mannosan and galactosan to serve as proxies of low-intensity biomass burning in glacial-to-interglacial lake sediments from the high northern latitudes. We use sediments from Lake El'gygytgyn (cores PG 1351 and ICDP 5011-1), located in the far north-east of Russia, and study glacial and interglacial samples of the last 430 kyr (marine isotope stages 5e, 6, 7e, 8, 11c and 12) that had different climate and biome configurations. Combined with pollen and non-pollen palynomorph records from the same samples, we assess how far the modern relationships between fire, climate and vegetation persisted during the past, on orbital to centennial timescales. We find that MAs attached to particulates were well-preserved in up to 430 kyr old sediments with higher influxes from low-intensity biomass burning in interglacials compared to glacials. MA influxes significantly increase when summergreen boreal forest spreads closer to the lake, whereas they decrease when tundra-steppe environments and, especially, Sphagnum peatlands spread. This suggests that low-temperature fires are a typical characteristic of Siberian larch forests also on long timescales. The results also suggest that low-intensity fires would be reduced by vegetation shifts towards very dry environments due to reduced biomass availability, as well as by shifts towards peatlands, which limits fuel dryness. 
In addition, we observed very low MA ratios, which we interpret as high contributions of galactosan and mannosan from biomass sources other than those currently monitored, such as the moss-lichen mats in the understorey of the summergreen boreal forest. Overall, sedimentary MAs can provide a powerful proxy for fire regime reconstructions and extend our knowledge of long-term natural fire-climate-vegetation feedbacks in the high northern latitudes.
Cloud model inversions of strong chromospheric absorption lines using principal component analysis
(2020)
High-resolution spectroscopy of strong chromospheric absorption lines nowadays delivers several million spectra per observing day when fast scanning devices are used to cover large regions on the solar surface. Therefore, fast and robust inversion schemes are needed to explore the large data volume. Cloud model (CM) inversions of the chromospheric Hα line are commonly employed to investigate various solar features including filaments, prominences, surges, jets, mottles, and (macro-)spicules. The choice of the CM was governed by its intuitive description of complex chromospheric structures as clouds suspended above the solar surface by magnetic fields. This study is based on observations of active region NOAA 11126 in Hα, which were obtained November 18-23, 2010 with the echelle spectrograph of the Vacuum Tower Telescope at the Observatorio del Teide, Spain. Principal component analysis reduces the dimensionality of the spectra and conditions noise-stripped spectra for CM inversions. Modeled Hα intensity and contrast profiles as well as CM parameters are collected in a database, which facilitates efficient processing of the observed spectra. Physical maps are computed representing, among others, the line-core and continuum intensity, absolute contrast, equivalent width, and Doppler velocities. Noise-free spectra expedite the analysis of bisectors. The data processing is evaluated in the context of "big data", in particular with respect to automatic classification of spectra.
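The dimensionality-reduction step can be illustrated with a generic principal component analysis via power iteration on the data covariance. The synthetic "spectra" below are a stand-in for observed Hα profiles and are not from the data set.

```python
# Leading principal component by power iteration (stdlib-only sketch).
import math, random

def leading_pc(data, iters=200):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # apply C = (1/n) X^T X to v without forming C explicitly
        proj = [sum(c * vi for c, vi in zip(row, v)) for row in centered]
        w = [sum(p * row[j] for p, row in zip(proj, centered)) / n
             for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

rng = random.Random(0)
# synthetic "spectra": one dominant line-depth mode plus small noise
base = [0.0, -0.2, -0.8, -0.2, 0.0]
spectra = [[a * b + rng.gauss(0, 0.01) for b in base]
           for a in (rng.uniform(0.5, 1.5) for _ in range(50))]
pc1 = leading_pc(spectra)
print(pc1)  # aligns with the line-depth pattern, up to sign
```

Projecting each observed spectrum onto the first few such components yields the compressed, noise-stripped representation used before the CM inversion.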
Climate change heavily impacts smallholder farming worldwide. Cross-scale vulnerability assessment has a high potential to identify nested measures for reducing vulnerability of smallholder farmers. Despite their high practical value, there are currently only limited examples of cross-scale assessments. The presented study aims at assessing the vulnerability of smallholder farmers in the Northeast of Brazil across three scales: regional, farm and field scale. In doing so, it builds on existing vulnerability indices and compares results between indices at the same scale and across scales. In total, six independent indices are tested, two at each scale. The calculated indices include social, economic and ecological indicators, based on municipal statistics, meteorological data, farm interviews and soil analyses. Subsequently, indices and overlapping indicators are normalized for intra- and cross-scale comparison. The results show considerable differences between indices across and within scales. They indicate different activities to reduce vulnerability of smallholder farmers. Major shortcomings arise from the conceptual differences between the indices. We therefore recommend the development of hierarchical indices, which are adapted to local conditions and contain more overlapping indicators for a better understanding of the nested vulnerabilities of smallholder farmers.
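The normalization step mentioned above can be sketched generically as a min-max rescaling of indicator values onto [0, 1]; the indicator name and values below are invented for illustration.

```python
# Min-max normalization of a vulnerability indicator (illustrative values).
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

rainfall_deficit = [120, 300, 180, 260]  # hypothetical raw indicator (mm)
normalized = min_max(rainfall_deficit)
print([round(v, 2) for v in normalized])  # -> [0.0, 1.0, 0.33, 0.78]
```

Rescaling overlapping indicators onto a common range is what makes intra- and cross-scale comparison of the six indices possible.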
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics that create a probabilistic model of the solution space, which is updated iteratively based on the quality of the solutions sampled according to the model. As previous works show, this iteration-based perspective can lead to erratic updates of the model, in particular, to bit frequencies approaching a random boundary value. In order to overcome this problem, we propose a new EDA based on the classic compact genetic algorithm (cGA) that takes into account a longer history of samples and updates its model only with respect to information it classifies as statistically significant. We prove that this significance-based cGA (sig-cGA) optimizes the commonly regarded benchmark functions OneMax (OM), LeadingOnes, and BinVal, all in quasilinear time, a result shown for no other EDA or evolutionary algorithm so far. For the recently proposed stable compact genetic algorithm, an EDA that tries to prevent erratic model updates by imposing a bias towards the uniformly distributed model, we prove that it optimizes OM only in time exponential in its hypothetical population size. Similarly, we show that the convex search algorithm cannot optimize OM in polynomial time.
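The baseline the sig-cGA builds on can be sketched as follows: a plain compact genetic algorithm on OneMax, with the usual pairwise sampling and 1/K frequency updates. The paper's history-based significance test is not reproduced here, and the parameters are illustrative.

```python
# Classic compact genetic algorithm (cGA) on OneMax (toy parameters).
import random

def cga_onemax(n, K, iters, rng):
    p = [0.5] * n  # frequency vector of the probabilistic model
    for _ in range(iters):
        # sample two solutions from the model
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if sum(y) > sum(x):        # OneMax fitness: number of ones
            x, y = y, x            # x is now the fitter sample
        for i in range(n):
            if x[i] != y[i]:       # update only where the samples differ
                p[i] += (1 / K) if x[i] == 1 else -(1 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # boundary capping
    return p

rng = random.Random(1)
p = cga_onemax(n=20, K=100, iters=10000, rng=rng)
print(sum(p) / len(p))  # frequencies drift toward the upper boundary
```

The sig-cGA replaces the unconditional 1/K step with an update that fires only when the observed history of samples at a bit position deviates significantly from its current frequency.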
Multiplicative Up-Drift
(2020)
Drift analysis aims at translating the expected progress of an evolutionary algorithm (or more generally, a random process) into a probabilistic guarantee on its run time (hitting time). So far, drift arguments have been successfully employed in the rigorous analysis of evolutionary algorithms, however, only for the situation that the progress is constant or becomes weaker when approaching the target. Motivated by questions like how fast fit individuals take over a population, we analyze random processes exhibiting a (1+delta)-multiplicative growth in expectation. We prove a drift theorem translating this expected progress into a hitting time. This drift theorem gives a simple and insightful proof of the level-based theorem first proposed by Lehre (2011). Our version of this theorem has, for the first time, the best-possible near-linear dependence on 1/delta (the previous results had an at least near-quadratic dependence), and it only requires a population size near-linear in delta (this was super-quadratic in previous results). These improvements immediately lead to stronger run time guarantees for a number of applications. We also discuss the case of large delta and show stronger results for this setting.
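A toy simulation shows the claimed behavior qualitatively: a process with (1+delta)-multiplicative growth in expectation reaches a target n in roughly log(n)/delta steps. This sketch keeps only the growth part (the paper's setting also allows the process to decrease), and the constants are illustrative, not those of the drift theorem.

```python
# Hitting time of a (1+delta)-multiplicatively growing process (toy model).
import random

def hitting_time(n, delta, rng):
    x, t = 1, 0
    while x < n:
        # each of the x individuals gains a descendant with probability delta,
        # so E[X_{t+1}] = (1 + delta) * X_t
        x += sum(1 for _ in range(x) if rng.random() < delta)
        t += 1
    return t

rng = random.Random(7)
times = [hitting_time(1000, 0.1, rng) for _ in range(20)]
avg = sum(times) / len(times)
print(avg)  # on the order of log(1000)/log(1.1), i.e. a few dozen steps
```

The interesting (and technically hard) regime in the paper is exactly the early phase, where x is small and the multiplicative drift is easily destroyed by random fluctuations.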
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations. A naturally occurring problem is that of bloat, that is, the unnecessary growth of solution lengths, which may slow down the optimization process. So far, the mathematical runtime analysis could not deal well with bloat and required explicit assumptions limiting bloat.
In this paper, we provide the first mathematical runtime analysis of a GP algorithm that does not require any assumptions on the bloat. Previous performance guarantees were only proven conditionally for runs in which no strong bloat occurs. Together with improved analyses for the case with bloat restrictions our results show that such assumptions on the bloat are not necessary and that the algorithm is efficient without explicit bloat control mechanism.
More specifically, we analyzed the performance of the (1 + 1) GP on the two benchmark functions ORDER and MAJORITY. When using lexicographic parsimony pressure as bloat control, we show a tight runtime estimate of O(T_init + n log n) iterations both for ORDER and MAJORITY. For the case without bloat control, the bounds O(T_init log T_init + n (log n)^3) and Omega(T_init + n log n) (and Omega(T_init log T_init) for n = 1) hold for MAJORITY.
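The MAJORITY benchmark and lexicographic parsimony pressure can be sketched as follows, using a toy encoding of expressions as lists of signed variable indices (+i for variable x_i, -i for its negated copy). This illustrates only the fitness and the tie-breaking rule, not the (1 + 1) GP mutation operator.

```python
# MAJORITY fitness and lexicographic parsimony-pressure comparison (sketch).
def majority_fitness(expr, n):
    value = 0
    for i in range(1, n + 1):
        pos = expr.count(i)
        neg = expr.count(-i)
        if pos >= neg and pos > 0:  # variable i counts iff its positive
            value += 1              # copies are in the (weak) majority
    return value

def parsimony_better(expr_a, expr_b, n):
    """True if expr_a is preferred: higher fitness, shorter on ties."""
    fa, fb = majority_fitness(expr_a, n), majority_fitness(expr_b, n)
    return (fa, -len(expr_a)) > (fb, -len(expr_b))

a = [1, 2, 3]            # compact expression: x1, x2, x3 all expressed
b = [1, 1, -1, 2, 3, 3]  # bloated expression with the same fitness
print(majority_fitness(a, 3), majority_fitness(b, 3))  # -> 3 3
print(parsimony_better(a, b, 3))  # -> True: equal fitness, a is shorter
```

Under this comparison, redundant copies never pay off, which is why parsimony pressure keeps the solution length (and thus the runtime) under control.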
In his seminal works, Endel Tulving argued that functionally distinct memory systems give rise to subjective experiences of remembering and knowing (i.e., recollection- vs. familiarity-based memory, respectively). Evidence shows that emotion specifically enhances recollection, and this effect is subserved by a synergistic mechanism involving the amygdala (AMY) and hippocampus (HC). In extreme circumstances, however, uncontrolled recollection of highly distressing memories may lead to symptoms of affective disorders. Therefore, it is important to understand the factors that can diminish such detrimental effects. Here, we investigated the effects of Focused Attention (FA) on emotional recollection. FA is an emotion regulation strategy that has been proven quite effective in reducing the impact of emotional responses associated with the recollection of distressing autobiographical memories, but its impact during emotional memory encoding is not known. Functional MRI and eye-tracking data were recorded while participants viewed a series of composite negative and neutral images with distinguishable foreground (FG) and background (BG) areas. Participants were instructed to focus either on the FG or BG content of the images and to rate their emotional responses. About 4 days later, participants' memory was assessed using the R/K procedure, to indicate whether they Recollected specific contextual details about the encoded images or the images were just familiar to them - i.e., participants only Knew that they saw the pictures without being able to remember specific contextual details. First, results revealed that FA was successful in decreasing memory for emotional pictures viewed in BG Focus condition, and this effect was driven by recollection-based retrieval. Second, the BG Focus condition was associated with decreased activity in the AMY, HC, and anterior parahippocampal gyrus for subsequently recollected emotional items. 
Moreover, correlation analyses also showed that reduced activity in these regions predicted greater reduction in emotional recollection following FA. These results demonstrate the effectiveness of FA in mitigating emotional experiences and emotional recollection associated with unpleasant emotional events.
Comparing mitogenomic timetrees for two African savannah primate genera (Chlorocebus and Papio)
(2020)
Sedimentary ancient DNA has been proposed as a key methodology for reconstructing biodiversity over time. Yet, despite the concentration of Earth’s biodiversity in the tropics, this method has rarely been applied in this region. Moreover, the taphonomy of sedimentary DNA, especially in tropical environments, is poorly understood. This study elucidates challenges and opportunities of sedimentary ancient DNA approaches for reconstructing tropical biodiversity. We present shotgun-sequenced metagenomic profiles and DNA degradation patterns from multiple sediment cores from Mubwindi Swamp, located in Bwindi Impenetrable Forest (Uganda), one of the most diverse forests in Africa. We describe the taxonomic composition of the sediments covering the past 2200 years and compare the sedimentary DNA data with a comprehensive set of environmental and sedimentological parameters to unravel the conditions of DNA degradation. Consistent with the preservation of authentic ancient DNA in tropical swamp sediments, DNA concentration and mean fragment length declined exponentially with age and depth, while terminal deamination increased with age. DNA preservation patterns cannot be explained by any environmental parameter alone, but age seems to be the primary driver of DNA degradation in the swamp. Besides degradation, the presence of living microbial communities in the sediment also affects DNA quantity. Critically, 92.3% of our metagenomic data of a total 81.8 million unique, merged reads cannot be taxonomically identified due to the absence of genomic references in public databases. Of the remaining 7.7%, most of the data (93.0%) derive from Bacteria and Archaea, whereas only 0–5.8% are from Metazoa and 0–6.9% from Viridiplantae, in part due to unbalanced taxa representation in the reference data. The plant DNA record at ordinal level agrees well with local pollen data but resolves less diversity. 
Our animal DNA record reveals the presence of 41 native taxa (16 orders) including Afrotheria, Carnivora, and Ruminantia at Bwindi during the past 2200 years. Overall, we observe no decline in taxonomic richness with increasing age suggesting that several-thousand-year-old information on past biodiversity can be retrieved from tropical sediments. However, comprehensive genomic surveys of tropical biota need prioritization for sedimentary DNA to be a viable methodology for future tropical biodiversity studies.
Cell-free protein synthesis
(2020)
Proteins are the main source of drug targets and some of them possess therapeutic potential themselves. Among them, membrane proteins constitute approximately 50% of the major drug targets. In the drug discovery pipeline, rapid methods for producing different classes of proteins in a simple manner with high quality are important for structural and functional analysis. Cell-free systems are emerging as an attractive alternative for the production of proteins due to their flexible nature without any cell membrane constraints. In a bioproduction context, open systems based on cell lysates derived from different sources, and with batch-to-batch consistency, have acted as a catalyst for cell-free synthesis of target proteins. Most importantly, proteins can be processed for downstream applications like purification and functional analysis without the necessity of transfection, selection, and expansion of clones. In the last 5 years, there has been an increased availability of new cell-free lysates derived from multiple organisms, and their use for the synthesis of a diverse range of proteins. Despite this progress, major challenges still exist in terms of scalability, cost effectiveness, protein folding, and functionality. In this review, we present an overview of different cell-free systems derived from diverse sources and their application in the production of a wide spectrum of proteins. Further, this article discusses some recent progress in cell-free systems derived from Chinese hamster ovary and Sf21 lysates containing endogenous translocationally active microsomes for the synthesis of membrane proteins. We particularly highlight the usage of internal ribosomal entry site sequences for more efficient protein production, and also the significance of site-specific incorporation of non-canonical amino acids for labeling applications and creation of antibody drug conjugates using cell-free systems. 
We also discuss strategies to overcome the major challenges involved in commercializing cell-free platforms from a laboratory level for future drug development.
Technological advancements are giving rise to the fourth industrial revolution - Industry 4.0 - characterized by the mass employment of smart objects in highly reconfigurable and thoroughly connected industrial product-service systems. The purpose of this paper is to propose a theory-based knowledge dynamics model in the smart grid scenario that would provide a holistic view on the knowledge-based interactions among smart objects, humans, and other actors as an underlying mechanism of value co-creation in Industry 4.0. A multi-loop and three-layer - physical, virtual, and interface - model of knowledge dynamics is developed by building on the concept of ba - an enabling space for interactions and the emergence of knowledge. The model depicts how big data analytics are just one component in unlocking the value of big data, whereas the tacit engagement of humans-in-the-loop - their sense-making and decision-making - is needed for insights to be evoked from analytics reports and customer needs to be met.
In recent years, increasing concerns have been raised about the environmental risk of microplastics in freshwater ecosystems. Small microplastics enter the water either directly or accumulate through disintegration of larger plastic particles. These particles might then be ingested by filter-feeding zooplankton, such as rotifers. Particles released into the water may also interact with the biota through the formation of aggregates, which might alter the uptake by zooplankton. In this study, we tested for size-specific aggregation of polystyrene microspheres and their ingestion by the common freshwater rotifer Brachionus calyciflorus. The ingestion of three sizes of polystyrene microspheres (MS; 1, 3, and 6 µm) was investigated. Each MS size was tested in combination with three different treatments: MS as the sole food source, MS in association with food algae, and MS aggregated with biogenic matter. After 72 h of incubation in pre-filtered natural river water, the majority of the 1 µm spheres occurred as aggregates. The larger the particles, the higher the relative number of single particles and the larger the aggregates. All particles were ingested by the rotifer following a Type-II functional response. The presence of algae did not influence the ingestion of the MS for any of the three sizes. The biogenic aggregation of microspheres led to a significant size-dependent alteration in their ingestion. Rotifers ingested more microspheres when exposed to aggregated 1 and 3 µm MS as compared to single spheres, whereas fewer aggregated 6 µm spheres were ingested. This indicates that the small particles, when aggregated, were in an effective size range for Brachionus, whereas the aggregated larger spheres became too large to be efficiently ingested. These observations provide the first evidence of a size- and aggregation-dependent feeding interaction between microplastics and rotifers.
Microplastics, when aggregated with biogenic particles in a natural environment, can rapidly change their size-dependent availability. The aggregation properties of microplastics should be taken into account when performing experiments mimicking the natural environment.
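The Type-II functional response referred to in this abstract is the classic Holling (1959) saturating curve: ingestion rate rises with particle concentration but plateaus once handling time limits intake. A minimal sketch, with illustrative attack-rate and handling-time values that are assumptions, not parameters measured in the study:

```python
def type_ii_ingestion(n, attack_rate, handling_time):
    """Holling Type-II functional response: ingestion rate I(N) = aN / (1 + ahN).

    Rises roughly linearly at low particle concentration N and saturates
    at 1/h when handling time dominates."""
    return attack_rate * n / (1.0 + attack_rate * handling_time * n)

# Illustrative values only (hypothetical): attack rate a and handling time h.
a, h = 0.5, 0.2
for n in (1, 10, 100, 1000):
    print(n, round(type_ii_ingestion(n, a, h), 2))
```

At high concentrations the rate approaches 1/h (here 5 particles per unit time), which is the saturation plateau that distinguishes a Type-II from a linear Type-I response.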
TPC-H continues to be the most widely used benchmark for relational OLAP systems. It poses a number of challenges, also known as "choke points", which database systems have to solve in order to achieve good benchmark results. Examples include joins across multiple tables, correlated subqueries, and correlations within the TPC-H data set. Knowing the impact of such optimizations helps in developing optimizers as well as in interpreting TPC-H results across database systems.
This paper provides a systematic analysis of choke points and their optimizations. It complements previous work on TPC-H choke points by providing a quantitative discussion of their relevance. It focuses on eleven choke points where the optimizations are beneficial independently of the database system. Of these, the flattening of subqueries and the placement of predicates have the biggest impact. Three queries (Q2, Q17, and Q21) are strongly influenced by the choice of an efficient query plan; three others (Q1, Q13, and Q18) are less influenced by plan optimizations and more dependent on an efficient execution engine.
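The subquery-flattening choke point can be illustrated outside SQL. A correlated subquery (as in the Q17 pattern, which compares each lineitem's quantity to the average quantity of its part) naively re-evaluates the inner query per outer row, which is O(n·m); flattening it into a pre-computed aggregate joined back by key is O(n). A minimal sketch on hypothetical data, not the actual TPC-H schema:

```python
from collections import defaultdict

# Hypothetical rows of (part_key, quantity), standing in for lineitem.
lineitems = [("p1", 5), ("p1", 15), ("p2", 8), ("p2", 2), ("p1", 10)]

def correlated(items):
    # Correlated form: the per-part average is recomputed for every row.
    out = []
    for part, qty in items:
        group = [q for p, q in items if p == part]
        if qty < sum(group) / len(group):
            out.append((part, qty))
    return out

def flattened(items):
    # Flattened form: aggregate once, then one hash lookup per row
    # (the shape of an aggregate-then-join plan).
    totals = defaultdict(lambda: [0, 0])
    for part, qty in items:
        totals[part][0] += qty
        totals[part][1] += 1
    return [(p, q) for p, q in items if q < totals[p][0] / totals[p][1]]

print(flattened(lineitems))
```

Both functions return the same rows; only the evaluation strategy differs, which is exactly why such a rewrite is a pure optimizer win regardless of the execution engine.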
Natural earthquakes often have very few observable foreshocks which significantly complicates tracking potential preparatory processes. To better characterize expected preparatory processes before failures, we study stick-slip events in a series of triaxial compression tests on faulted Westerly granite samples. We focus on the influence of fault roughness on the duration and magnitude of recordable precursors before large stick-slip failure. Rupture preparation in the experiments is detectable over long time scales and involves acoustic emission (AE) and aseismic deformation events. Preparatory fault slip is found to be accelerating during the entire pre-failure loading period, and is accompanied by increasing AE rates punctuated by distinct activity spikes associated with large slip events. Damage evolution across the fault zones and surrounding wall rocks is manifested by precursory decrease of seismic b-values and spatial correlation dimensions. Peaks in spatial event correlation suggest that large slip initiation occurs by failure of multiple asperities. Shear strain estimated from AE data represents only a small fraction (< 1%) of total shear strain accumulated during the preparation phase, implying that most precursory deformation is aseismic. The relative contribution of aseismic deformation is amplified by larger fault roughness. Similarly, seismic coupling is larger for smooth saw-cut faults compared to rough faults. The laboratory observations point towards a long-lasting and continuous preparation process leading to failure and large seismic events. The strain partitioning between aseismic and observable seismic signatures depends on fault structure and instrument resolution.
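The precursory decrease of seismic b-values mentioned above refers to the slope of the Gutenberg-Richter frequency-magnitude relation, commonly estimated from an event catalog with the Aki-Utsu maximum-likelihood formula b = log10(e) / (mean(M) - Mc). A minimal sketch with a hypothetical AE magnitude catalog and completeness threshold (not data from these experiments):

```python
import math

def b_value(magnitudes, m_c):
    """Aki-Utsu maximum-likelihood b-value for events at or above
    the magnitude of completeness m_c."""
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - m_c)

# Hypothetical acoustic-emission magnitudes and completeness threshold.
mags = [1.0, 1.2, 1.1, 1.5, 1.3, 2.0, 1.1, 1.4, 1.6, 1.2]
print(round(b_value(mags, 1.0), 2))
```

A b-value dropping across successive time windows means a growing proportion of large events relative to small ones, which is the precursory signal tracked in the stick-slip experiments.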
Formate dehydrogenase (FDH) enzymes are versatile catalysts for CO2 conversion. The FDH from Rhodobacter capsulatus contains a molybdenum cofactor in which the dithiolene functions of two pyranopterin guanine dinucleotide molecules, a conserved cysteine, and a sulfido group are bound at Mo(VI). In this study, we focused on metal oxidation state and coordination changes in response to exposure to O2, inhibitory anions, and redox agents using X-ray absorption spectroscopy (XAS) at the Mo K-edge. Differences in the oxidative modification of the bis-molybdopterin guanine dinucleotide (bis-MGD) cofactor relative to samples prepared aerobically without inhibitor, such as variations in the relative numbers of sulfido (Mo=S) and oxo (Mo=O) bonds, were observed in the presence of azide (N3-) or cyanate (OCN-). Azide provided the best protection against O2, resulting in a quantitatively sulfurated cofactor with a displaced cysteine ligand and optimized formate oxidation activity. Replacement of the cysteine ligand by a formate (HCO2-) ligand at the molybdenum in active enzyme is compatible with our XAS data. Cyanide (CN-) inactivated the enzyme by replacing the sulfido ligand at Mo(VI) with an oxo ligand. Evidence that the sulfido group may become protonated upon molybdenum reduction was obtained. Our results emphasize the role of coordination flexibility at the molybdenum center during inhibitory and catalytic processes of FDH enzymes.