In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
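As a point of reference for the extension described above: the classical Lefschetz number of an endomorphism f = (f_i) of a complex with finite-dimensional cohomology is the alternating sum of traces on cohomology,

```latex
L(f) \;=\; \sum_{i} (-1)^{i}\, \operatorname{tr}\!\left( H^{i}(f) \colon H^{i} \to H^{i} \right).
```

Roughly speaking, the extension discussed in the abstract must give such traces a regularized meaning, since the endomorphisms are only defined modulo trace class operators.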
Restoration of semi-natural grassland communities involves a combination of (1) sward disturbance to create a temporal window for establishment, and (2) target species introduction, the latter usually by seed sowing. With great regularity, particular species establish only poorly. More reliable establishment could improve the outcome of restoration projects and increase cost-effectiveness. We investigated the abiotic germination niche of ten poorly establishing calcareous grassland species by simultaneously exploring the effects of moisture and light availability and temperature fluctuation on percentage germination and speed of germination. We also investigated the effects of three different pre-treatments used to enhance seed germination – cold-stratification, osmotic priming and priming in combination with gibberellic acid (GA3) – and how these affected abiotic germination niches. Species varied markedly in the width of their abiotic germination niche, ranging from Carex flacca, with very strict abiotic requirements, to several species reliably germinating across the whole range of abiotic conditions. Our results suggest pronounced differences between species in gap requirements for establishment. Germination was improved in most species by at least one pre-treatment. Evidence for positive effects of adding GA3 to seed priming solutions was limited. In several species, pre-treated seeds germinated under a wider range of abiotic conditions than untreated seeds. Improved knowledge of species-specific germination niches and the effects of seed pre-treatments may help to improve species establishment by sowing, and to identify species for which sowing at a later stage of restoration or introduction as small plants may represent a more viable strategy.
Lake Towuti is a tectonic basin surrounded by ultramafic rocks. Lateritic soils form through weathering and deliver abundant iron (oxy)hydroxides but very little sulfate to the lake and its sediment. To characterize the sediment biogeochemistry, we collected cores at three sites with increasing water depth and decreasing bottom-water oxygen concentrations. Microbial cell densities were highest at the shallow site, a feature we attribute to the availability of labile organic matter (OM) and the higher abundance of electron acceptors under oxic bottom-water conditions. At the two other sites, OM degradation and reduction processes below the oxycline led to partial electron acceptor depletion. Genetic information preserved in the sediment as extracellular DNA (eDNA) provided information on aerobic and anaerobic heterotrophs related to Nitrospirae, Chloroflexi, and Thermoplasmatales. These taxa apparently played a significant role in the degradation of sinking OM. However, eDNA concentrations rapidly decreased with core depth. Despite very low sulfate concentrations, sulfate-reducing bacteria were present and viable in sediments at all three sites, as confirmed by measurements of potential sulfate reduction rates. Microbial community fingerprinting supported the presence of taxa related to Deltaproteobacteria and Firmicutes with demonstrated capacity for iron and sulfate reduction. Concomitantly, sequences of Ruminococcaceae, Clostridiales, and Methanomicrobiales indicated potential for fermentative hydrogen and methane production. These first insights into ferruginous sediments show that microbial populations perform successive metabolisms related to sulfur, iron, and methane. In theory, iron reduction could reoxidize reduced sulfur compounds and desorb OM from iron minerals, allowing remineralization to methane.
Overall, we found that biogeochemical processes in the sediments can be linked to redox differences in the bottom waters of the three sites, such as oxidant concentrations and the supply of labile OM. At the scale of the lacustrine record, our geomicrobiological study should provide a means to link the extant subsurface biosphere to past environments.
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGP) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods, and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general model performance loss due to the transfer of calibrated parameters to independent validation periods of −5 to −17%, on average. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that optimized parameter sets for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn) and vice versa.
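The split-sample logic described above can be sketched in a few lines. The snippet below is a toy illustration, not the HBV model itself; the Nash-Sutcliffe efficiency is assumed as the performance measure (the abstract does not name the metric), and `simulate` stands in for any rainfall-runoff model.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0
    mean the model performs worse than the mean of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def transfer_loss(obs, forcing, simulate, params, cal_idx, val_idx):
    """Performance change (%) when parameters calibrated on one window
    are transferred to an independent validation window; negative values
    correspond to the performance losses reported in the text."""
    nse_cal = nse(obs[cal_idx], simulate(forcing[cal_idx], params))
    nse_val = nse(obs[val_idx], simulate(forcing[val_idx], params))
    return 100.0 * (nse_val - nse_cal) / abs(nse_cal)
```

Running the test over a moving 6-year window, and separately over spring-flood and autumn-flood subsets, gives the generalized (differential) split-sample design of the paper.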
Introduction
Genes involved in body weight regulation that had previously been investigated in genome-wide association studies (GWAS) and in animal models were target-enriched and subjected to massively parallel next-generation sequencing.
Methods
We enriched and re-sequenced continuous genomic regions comprising FTO, MC4R, TMEM18, SDCCAG8, TNKS, MSRA and TBC1D1 in a screening sample of 196 extremely obese children and adolescents with age- and sex-specific body mass index (BMI) >= 99th percentile and 176 lean adults (BMI <= 15th percentile). Twenty-two variants were confirmed by Sanger sequencing. Genotyping was performed in up to 705 independent obesity trios (extremely obese child and both parents), 243 extremely obese cases and 261 lean adults.
Results and Conclusion
We detected 20 different non-synonymous variants, one frameshift mutation and one nonsense mutation in the 7 continuous genomic regions in study groups of different weight extremes. For the SNP Arg695Cys (rs58983546) in TBC1D1 we detected nominal association with obesity (p(TDT) = 0.03 in 705 trios). Eleven of the variants were rare, i.e. detected heterozygously in at most ten individuals of the complete screening sample of 372 individuals. Two of them (in FTO and MSRA) were found in lean individuals, nine in extremely obese individuals. In silico analyses of the 11 variants did not reveal functional implications for the mutations. Concordant with our hypothesis, we detected a rare variant that potentially leads to loss of FTO function in a lean individual. For TBC1D1, contrary to our hypothesis, the loss-of-function variant (Arg443Stop) was found in an obese individual. Functional in vitro studies are warranted.
Setting the PAS: the role of circadian PAS domain proteins during environmental adaptation in plants
(2015)
The Per-ARNT-Sim (PAS) domain represents an ancient protein module that can be found across all kingdoms of life. The domain functions as a sensing unit for a diverse array of signals, including molecular oxygen, small metabolites, and light. In plants, several PAS domain-containing proteins form an integral part of the circadian clock and regulate responses to environmental change. Moreover, these proteins function in pathways that control development and plant stress adaptation responses. Here, we discuss the role of PAS domain-containing proteins in the anticipation of, and adaptation to, environmental change in plants.
Recent studies have claimed the existence of very massive stars (VMS) up to 300 M⊙ in the local Universe. As this finding may represent a paradigm shift for the canonical stellar upper-mass limit of 150 M⊙, it is timely to discuss the status of the data, as well as the far-reaching implications of such objects. We held a Joint Discussion at the General Assembly in Beijing to discuss (i) the determination of the current masses of the most massive stars, (ii) the formation of VMS, (iii) their mass loss, and (iv) their evolution and final fate. The prime aim was to reach broad consensus between observers and theorists on how to identify and quantify the dominant physical processes.
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
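The rigid-rotation idea can be illustrated with a minimal Kabsch-style alignment. ShapeRotator itself is an R tool; the sketch below is an independent Python rendering of the underlying geometry, where `anchor_idx` (a hypothetical parameter name) selects the landmarks of one module that define the alignment.

```python
import numpy as np

def rigid_align(coords, ref, anchor_idx):
    """Remove random translation and rotation from a 3D landmark
    configuration by rigidly aligning a set of anchor landmarks onto a
    reference configuration (Kabsch-style solution via SVD).
    coords and ref are (n, 3) arrays of Cartesian coordinates."""
    a = coords[anchor_idx] - coords[anchor_idx].mean(axis=0)
    b = ref[anchor_idx] - ref[anchor_idx].mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(u @ vt))           # guard against reflection
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    centred = coords - coords[anchor_idx].mean(axis=0)
    return centred @ rot + ref[anchor_idx].mean(axis=0)
```

Applying the recovered rotation to the whole configuration (not just the anchors) is what carries the articulated module into a common orientation, after which standard geometric-morphometric analyses can proceed.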
In this combined theoretical and experimental study we report a full analysis of the resonant inelastic X-ray scattering (RIXS) spectra of H2O, D2O and HDO. We demonstrate that electronically-elastic RIXS has an inherent capability to map the potential energy surface and to perform vibrational analysis of the electronic ground state in multimode systems. We show that the control and selection of vibrational excitation can be performed by tuning the X-ray frequency across core-excited molecular bands and that this is clearly reflected in the RIXS spectra. Using high level ab initio electronic structure and quantum nuclear wave packet calculations together with high resolution RIXS measurements, we discuss in detail the mode coupling, mode localization and anharmonicity in the studied systems.
Reversed predator
(2018)
Ecoevolutionary feedbacks in predator–prey systems have been shown to qualitatively alter predator–prey dynamics. As a striking example, defense–offense coevolution can reverse predator–prey cycles, so predator peaks precede prey peaks rather than vice versa. However, this has only rarely been shown in either model studies or empirical systems. Here, we investigate whether this rarity is a fundamental feature of reversed cycles by exploring under which conditions they should be found. For this, we first identify potential conditions and parameter ranges most likely to result in reversed cycles by developing a new measure, the effective prey biomass, which combines prey biomass with prey and predator traits, and represents the prey biomass as perceived by the predator. We show that predator dynamics always follow the dynamics of the effective prey biomass with a classic ¼‐phase lag. From this key insight, it follows that in reversed cycles (i.e., ¾‐lag), the dynamics of the actual and the effective prey biomass must be in antiphase with each other, that is, the effective prey biomass must be highest when actual prey biomass is lowest, and vice versa. Based on this, we predict that reversed cycles should be found mainly when oscillations in actual prey biomass are small and thus have limited impact on the dynamics of the effective prey biomass, which are mainly driven by trait changes. We then confirm this prediction using numerical simulations of a coevolutionary predator–prey system, varying the amplitude of the oscillations in prey biomass: Reversed cycles are consistently associated with regions of parameter space leading to small‐amplitude prey oscillations, offering a specific and highly testable prediction for conditions under which reversed cycles should occur in natural systems.
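The quarter-lag versus three-quarter-lag distinction can be read off simulated time series directly. The helper below is an illustrative sketch (not from the paper): it locates the lag fraction via a circular cross-correlation, assuming the cycle period is known.

```python
import numpy as np

def cycle_lag(prey, predator, period):
    """Estimate by what fraction of a cycle the predator series trails
    the prey series, via the argmax of their circular cross-correlation.
    ~0.25 is the classic quarter-lag; ~0.75 indicates reversed cycles
    (predator peaks preceding prey peaks)."""
    x = prey - np.mean(prey)
    y = predator - np.mean(predator)
    cc = [np.dot(x, np.roll(y, -k)) for k in range(len(x))]
    return (int(np.argmax(cc)) % period) / period
```

Applied to the predator and the effective prey biomass, such a measure should always return roughly 0.25, whereas applied to the predator and the actual prey biomass it discriminates classic from reversed cycles.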
Many organisms have developed defences to avoid predation by species at higher trophic levels. The capability of primary producers to defend themselves against herbivores affects their own survival, can modulate the strength of trophic cascades and changes rates of competitive exclusion in aquatic communities. Algal species are highly flexible in their morphology, growth form, biochemical composition and production of toxic and deterrent compounds. Several of these variable traits in phytoplankton have been interpreted as defence mechanisms against grazing. Zooplankton feed with differing success on various phytoplankton species, depending primarily on size, shape, cell wall structure and the production of toxins and deterrents. Chemical cues associated with (i) mechanical damage, (ii) herbivore presence and (iii) grazing are the main factors triggering induced defences in both marine and freshwater phytoplankton, but most studies have failed to disentangle the exact mechanism(s) governing defence induction in any particular species. Induced defences in phytoplankton include changes in morphology (e.g. the formation of spines, colonies and thicker cell walls), biochemistry (such as production of toxins, repellents) and in life history characteristics (formation of cysts, reduced recruitment rate). Our categorization of inducible defences in terms of the responsible induction mechanism provides guidance for future work, as hardly any of the available studies on marine or freshwater plankton have performed all the treatments that are required to pinpoint the actual cue(s) for induction. We discuss the ecology of inducible defences in marine and freshwater phytoplankton with a special focus on the mechanisms of induction, the types of defences, their costs and benefits, and their consequences at the community level.
TRAPID
(2013)
Transcriptome analysis through next-generation sequencing technologies allows the generation of detailed gene catalogs for non-model species, at the cost of new challenges with regard to computational requirements and bioinformatics expertise. Here, we present TRAPID, an online tool for the fast and efficient processing of assembled RNA-Seq transcriptome data, developed to mitigate these challenges. TRAPID offers high-throughput open reading frame detection and frameshift correction, and includes a functional, comparative and phylogenetic toolbox, making use of 175 reference proteomes. Benchmarking and comparison against state-of-the-art transcript analysis tools reveal the efficiency and unique features of the TRAPID system. TRAPID is freely available at http://bioinformatics.psb.ugent.be/webtools/trapid/.
Hot localised charge carriers on the Si(111)-7×7 surface are modelled by small charged clusters. Such resonances induce non-local desorption of chlorobenzene, i.e. more than 10 nm away from the injection site, in scanning tunnelling microscope experiments. We recently used such a cluster model to characterise resonance localisation and vibrational activation for positive and negative resonances. In this work, we investigate to what extent the model depends on details of the cluster or of the quantum chemistry methods used, and try to identify the smallest possible cluster suitable for describing the neutral surface and the ion resonances. Furthermore, a detailed analysis for different chemisorption orientations is performed. While some properties, such as estimates of the resonance energy or absolute values of atomic changes, show such a dependency, the main findings are very robust with respect to changes in the model and/or the chemisorption geometry.
A new micro/mesoporous hybrid clay nanocomposite prepared from kaolinite clay, Carica papaya seeds, and ZnCl2 via calcination in an inert atmosphere is presented. Regardless of the synthesis temperature, the specific surface area of the nanocomposite material is between ≈150 and 300 m2/g. The material contains both micro- and mesopores in roughly equal amounts. X-ray diffraction, infrared spectroscopy, and solid-state nuclear magnetic resonance spectroscopy suggest the formation of several new bonds in the materials upon reaction of the precursors, thus confirming the formation of a new hybrid material. Thermogravimetric analysis/differential thermal analysis and elemental analysis confirm the presence of carbonaceous matter. The new composite is stable up to 900 °C and is an efficient adsorbent for the removal of a water micropollutant, 4-nitrophenol, and a pathogen, E. coli, from an aqueous medium, suggesting applications in water remediation are feasible.
Climate change, along with socio-economic development, will increase the economic impacts of floods. While the factors that influence flood risk to private property have been extensively studied, the risk that natural disasters pose to public infrastructure, and the resulting implications for public sector budgets, have received less attention. We address this gap by developing a two-staged model framework, which first assesses the flood risk to public infrastructure in Austria. Combining exposure and vulnerability information at the building level with inundation maps, we project an increase in riverine flood damage, which progressively burdens public budgets. Second, the risk estimates are integrated into an insurance model, which analyzes three different compensation arrangements in terms of the monetary burden they place on future governments' budgets and the respective volatility of payments. Formalized insurance compensation arrangements offer incentives for risk reduction measures, which lower the burden on public budgets by reducing the vulnerability of buildings that are exposed to flooding. They also significantly reduce the volatility of payments and thereby improve the predictability of flood damage expenditures. These features indicate that more formalized insurance arrangements are an improvement over the purely public compensation arrangement currently in place in Austria.
The complementary advantages of high-rate Global Positioning System (GPS) and accelerometer observations for measuring seismic ground motion have been recognised in previous research. Here we propose an approach for the tight integration of GPS and accelerometer measurements. The baseline shifts of the accelerometer are introduced as unknown parameters and estimated as a random walk process in the Precise Point Positioning (PPP) solution. To demonstrate the performance of the new strategy, we carried out several experiments using collocated GPS receivers and accelerometers. The experimental results show that the baseline shifts of the accelerometer are automatically corrected and that high-precision coseismic information on strong ground motion can be obtained in real time. Additionally, the convergence and precision of the PPP solution are improved by the combined solution.
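The core idea of modelling the accelerometer baseline shift as a random-walk state can be illustrated with a toy one-dimensional Kalman filter. This is a schematic stand-in for the PPP solution, not the paper's estimator; all parameter values and names are illustrative.

```python
import numpy as np

def fuse_gps_accel(gps_pos, accel, dt, q_bias=1e-4, r_gps=1e-4):
    """Toy 1-D Kalman filter: state x = [position, velocity, baseline].
    Measured acceleration = true acceleration + baseline, so the
    measured value enters as a control input while the baseline,
    modelled as a random walk, is subtracted inside the transition."""
    x = np.zeros(3)
    P = np.eye(3)
    F = np.array([[1.0, dt, -0.5 * dt**2],
                  [0.0, 1.0, -dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt, 0.0])
    Q = np.diag([1e-9, 1e-9, q_bias * dt])     # random-walk noise on the baseline
    H = np.array([[1.0, 0.0, 0.0]])            # GPS observes position only
    est_pos, est_bias = [], []
    for z_gps, a in zip(gps_pos, accel):
        x = F @ x + B * a                      # predict with measured acceleration
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r_gps                # update with the GPS position
        K = (P @ H.T) / S
        x = x + (K * (z_gps - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
        est_pos.append(x[0])
        est_bias.append(x[2])
    return np.array(est_pos), np.array(est_bias)
```

With a constant artificial baseline and zero true motion, the filter recovers the baseline because it is the only state consistent with the GPS positions, which is the mechanism behind the automatic baseline correction reported above.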
Flooding is an imminent natural hazard threatening most river deltas, e.g. the Mekong Delta. Appropriate flood management is thus required for the sustainable development of these often densely populated regions. Recently, traditional event-based hazard control has shifted towards a risk management approach in many regions, driven by intensive research leading to new legal regulations on flood management. However, a large-scale flood risk assessment does not exist for the Mekong Delta. In particular, the flood risk to paddy rice cultivation, the most important economic activity in the delta, has not been assessed yet. The present study was therefore developed to provide a first insight into delta-scale flood damages and risks to rice cultivation. The flood hazard was quantified by probabilistic flood hazard maps of the whole delta using bivariate extreme value statistics, synthetic flood hydrographs, and a large-scale hydraulic model. The flood risk to paddy rice was then quantified considering cropping calendars, rice phenology, and harvest times based on a time series of the enhanced vegetation index (EVI) derived from MODIS satellite data, and a published rice flood damage function. The proposed concept provided flood risk maps to paddy rice for the Mekong Delta in terms of expected annual damage. Due to its generic approach, the concept can serve as a blueprint for regions facing similar problems. Furthermore, we estimated the changes in flood risk to paddy rice caused by the land-use changes currently under discussion in the Mekong Delta. Two land-use scenarios, either intensifying or reducing rice cropping, were considered, and the changes in risk were presented in spatially explicit flood risk maps. The basic risk maps could serve as guidance for the authorities in developing spatially explicit flood management and mitigation plans for the delta. The land-use change risk maps could further be used for adaptive risk management plans and as a basis for a cost-benefit analysis of the discussed land-use change scenarios. Additionally, the damage and risk maps may support the recently initiated agricultural insurance programme in Vietnam.
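Expected annual damage, the risk metric used for the maps above, is conventionally the integral of damage over annual exceedance probability p = 1/T. A minimal sketch (with illustrative numbers, not values from the study) is:

```python
import numpy as np

def expected_annual_damage(return_periods, damages):
    """Expected annual damage (EAD): integral of damage over annual
    exceedance probability p = 1/T, approximated by the trapezoidal
    rule. return_periods in years; damages in a currency unit."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    d = np.asarray(damages, dtype=float)
    order = np.argsort(p)                  # integrate over increasing p
    p, d = p[order], d[order]
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p)))
```

For example, damages of 0, 50 and 200 units at the 2-, 10- and 100-year floods give an EAD of 21.25 units per year; evaluating this per grid cell yields a spatially explicit risk map.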
In two-dimensional reaction-diffusion systems, local curvature perturbations on traveling waves are typically damped out and vanish. However, if the inhibitor diffuses much faster than the activator, transversal instabilities can arise, leading from flat to folded, spatio-temporally modulated waves and to spreading spiral turbulence. Here, we propose a scheme to induce or inhibit these instabilities via a spatio-temporal feedback loop. In a piecewise-linear version of the FitzHugh-Nagumo model, transversal instabilities and spiral turbulence in the uncontrolled system are shown to be suppressed in the presence of control, thereby stabilizing plane wave propagation. Conversely, in numerical simulations with the modified Oregonator model for the photosensitive Belousov-Zhabotinsky reaction, which does not exhibit transversal instabilities on its own, we demonstrate the feasibility of inducing transversal instabilities and study the emerging wave patterns in a well-controlled manner.
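A minimal numerical sketch of such an activator-inhibitor medium with a global feedback term is given below. It uses the cubic FitzHugh-Nagumo nonlinearity rather than the paper's piecewise-linear variant, and the feedback form k*(mean(u) - a) is an illustrative stand-in for the spatio-temporal feedback loop, not the scheme actually proposed.

```python
import numpy as np

def fhn_step(u, v, dt=0.01, du=1.0, dv=5.0, eps=0.1, a=0.5, k=0.0):
    """One explicit-Euler step of a FitzHugh-Nagumo-type
    activator (u) / inhibitor (v) medium on a 2-D periodic grid.
    dv >> du (fast-diffusing inhibitor) is the regime in which
    transversal wave instabilities can arise; k switches on a simple
    global feedback acting on the inhibitor."""
    lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    feedback = k * (u.mean() - a)
    un = u + dt * (u - u**3 - v + du * lap(u))
    vn = v + dt * (eps * (u - a * v) + feedback + dv * lap(v))
    return un, vn
```

Iterating this step from a perturbed plane-wave initial condition, and tuning k, is the kind of numerical experiment in which the suppression or induction of transversal instabilities can be observed.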
Hazards and accessibility
(2018)
The assessment of natural hazards and risk has traditionally been built upon threat maps, which depict the potential danger posed by a particular hazard throughout a given area. But when a hazard event strikes, infrastructure is a significant factor in determining whether the situation becomes a disaster. The vulnerability of the population of a region depends not only on the local threat, but also on the geographical accessibility of the area. This makes threat maps by themselves insufficient for supporting real-time decision-making, especially for tasks that involve the use of the road network, such as the management of relief operations, aid distribution, or the planning of evacuation routes. To overcome this problem, this paper proposes a multidisciplinary approach divided into two parts. First, we fuse satellite-based threat data with open infrastructure data from OpenStreetMap, introducing a threat-based routing service. Second, we visualize these data through cartographic generalization and schematization. This emphasizes critical areas along roads in a simple way and allows users to visually evaluate the impact natural hazards may have on infrastructure. We develop and illustrate this methodology with a case study of landslide threat for an area in Colombia.
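A threat-based routing service of this kind can be sketched as a shortest-path search whose edge costs blend road length with the fused threat value. The cost function length * (1 + alpha * threat) is an illustrative assumption, not the paper's formula.

```python
import heapq

def safest_route(graph, source, target, alpha=4.0):
    """Dijkstra over a road graph whose edges carry (length, threat)
    pairs, e.g. OpenStreetMap ways annotated with a satellite-derived
    landslide threat in [0, 1]; alpha sets how strongly threat is
    penalised. graph: {node: [(neighbor, length, threat), ...]}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for nbr, length, threat in graph.get(node, []):
            nd = d + length * (1.0 + alpha * threat)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]
```

With a sufficiently large alpha, the router detours around high-threat road segments even when the detour is physically longer, which is the behaviour a relief-operation planner would want.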
Objective: Several different measures of heart rate variability, and particularly of respiratory sinus arrhythmia, are widely used in research and clinical applications. For many purposes it is important to know which features of heart rate variability are directly related to respiration and which are caused by other aspects of cardiac dynamics. Approach: Inspired by ideas from the theory of coupled oscillators, we use simultaneous measurements of respiratory and cardiac activity to perform a nonlinear disentanglement of the heart rate variability into the respiratory-related component and the rest. Main results: The theoretical consideration is illustrated by the analysis of 25 data sets from healthy subjects. In all cases we show how the disentanglement is manifested in the different measures of heart rate variability. Significance: The suggested technique can be exploited as a universal preprocessing tool, both for the analysis of respiratory influence on the heart rate and in cases when effects of other factors on the heart rate variability are in focus.
In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, the temporal evolution proceeds through repeated epochs of intensive global activity separated by intervals with a low activity level. This behavior mimics the "up" and "down" states experimentally observed in cortical tissues in the absence of external stimuli. We interpret the global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the crucial role in both the interruption and the resumption of global activity is played by the distribution of the membrane recovery variable within the network. We also demonstrate that the behavior of neurons is influenced more by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
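A minimal network of Izhikevich-type neurons, whose second state variable is precisely a membrane recovery variable of the kind discussed above, can be sketched as follows. A uniform regular-spiking parameter set is an illustrative simplification; the network studied in the paper mixes electrophysiological types.

```python
import numpy as np

def izhikevich_network(W, I_ext, steps, dt=0.5, seed=0):
    """Euler-integrated Izhikevich network. v: membrane potential,
    u: membrane recovery variable; a spike is emitted when v >= 30 mV,
    after which v is reset to c and u is incremented by d.
    W[i, j]: synaptic weight from neuron j to neuron i
    (positive = excitatory, negative = inhibitory)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    a, b, c, d = 0.02, 0.2, -65.0, 8.0          # regular-spiking parameters
    v = -65.0 + rng.uniform(0.0, 5.0, n)
    u = b * v
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        fired = v >= 30.0
        spikes[t] = fired
        v[fired] = c
        u[fired] += d
        I = I_ext + W @ fired.astype(float)     # external + synaptic input
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes
```

Tracking the distribution of u across the population during such a simulation is the kind of diagnostic the paper uses to explain the interruption and resumption of global activity.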
To understand the evolution and morphology of planetary nebulae, detailed knowledge of their central stars is required. Central stars that exhibit emission lines in their spectra, indicating stellar mass loss, allow us to study the evolution of planetary nebulae in action. Emission-line central stars constitute about 10% of all central stars. Half of them are practically hydrogen-free Wolf-Rayet-type central stars of the carbon sequence, [WC], which show strong emission lines of carbon and oxygen in their spectra. In this contribution we address the weak emission-line central stars (wels). These stars are poorly analyzed and their hydrogen content is mostly unknown. We obtained optical spectra, including the important Balmer lines of hydrogen, for four weak emission-line central stars. We present the results of our analysis, provide a spectral classification and discuss possible explanations for their formation and evolution.
In the field of disk-based parallel database management systems, a great variety of solutions exists, based on either a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Modern network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing local main memory and the main memory of a remote server to, and even below, a single order of magnitude. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage and the in-memory database AnalyticsDB, which we designed and developed, as the relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface that allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion of site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, the optimal site selection may vary per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace vs. Distributed Block Nested Loop vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the allowed resource allocation.
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast crash-recovery feature of RAMCloud can be leveraged to provide high availability, e.g. a server crash during query execution only delays the query response by about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.
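The data pull vs. operator push trade-off can be illustrated with a toy cost model. The function name, the bandwidth figures, and the cost terms below are illustrative assumptions, not the thesis's actual system model:

```python
def choose_site(bytes_in, selectivity, net_bw, local_scan_bw, remote_scan_bw):
    """Pick an execution site for a scan by comparing estimated wall times
    (all bandwidths in bytes/s):
      data pull     - ship the whole relation, scan in local main memory
      operator push - scan inside the storage node, ship only the result"""
    bytes_out = bytes_in * selectivity                 # result after the scan
    pull = bytes_in / net_bw + bytes_in / local_scan_bw
    push = bytes_in / remote_scan_bw + bytes_out / net_bw
    return ("data pull", pull) if pull < push else ("operator push", push)

# A highly selective scan over 1 GB ships only ~1 MB when pushed, so the
# push strategy wins despite a slower scan in the storage node.
site, _ = choose_site(1e9, 0.001, net_bw=5e9, local_scan_bw=8e9, remote_scan_bw=6e9)
```

As the abstract notes, the optimal site may flip per operator: with selectivity near 1, shipping the result costs as much as shipping the input, and data pull wins again.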
Injection of fluids into deep saline aquifers causes a pore pressure increase in the storage formation, and thus displacement of resident brine. Via hydraulically conductive faults, brine may migrate upwards into shallower aquifers and lead to unwanted salinisation of potable groundwater resources. In the present study, we investigated different scenarios for a potential storage site in the Northeast German Basin using a three-dimensional (3-D) regional-scale model that includes four major fault zones. The focus was on assessing the impact of fault length and the effect of a secondary reservoir above the storage formation, as well as model boundary conditions and initial salinity distribution, on the potential salinisation of shallow groundwater resources. We employed numerical simulations of the injection, using brine as a representative fluid.
Our simulation results demonstrate that the lateral model boundary settings and the effective fault damage zone volume have the greatest influence on pressure build-up and development within the reservoir, and thus on the intensity and duration of fluid flow through the faults. Higher vertical pressure gradients for short fault segments or a small effective fault damage zone volume result in the highest salinisation potential, because a larger vertical fault height is affected by fluid displacement. Consequently, whether a salinity gradient exists, or whether the saltwater-freshwater interface lies below the fluid displacement depth in the faults, has a strong impact on the degree of shallow aquifer salinisation. A small effective fault damage zone volume or low fault permeability further extends the duration of fluid flow, which can persist for several tens to hundreds of years if the reservoir is laterally confined. Laterally open reservoir boundaries, large effective fault damage zone volumes and intermediate reservoirs significantly reduce vertical brine migration and the potential for freshwater salinisation, because the origin depth of displaced brine is at most a few decametres below the shallow aquifer.
The present study demonstrates that the existence of hydraulically conductive faults is not necessarily an exclusion criterion for potential injection sites, because salinisation of shallower aquifers strongly depends on initial salinity distribution, location of hydraulically conductive faults and their effective damage zone volumes as well as geological boundary conditions.
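The pore pressure increase that drives brine displacement can be illustrated, under strong simplifications, with the classical Cooper-Jacob approximation of the Theis solution for a single injection well in a confined aquifer. This analytical stand-in ignores the faults, density effects and 3-D geometry of the study's regional-scale model, and all parameter values are illustrative:

```python
import math

def pressure_buildup(Q, T, S, r, t):
    """Head increase (m) at radius r (m) and time t (s) for injection at rate
    Q (m^3/s) into a confined aquifer of transmissivity T (m^2/s) and
    storativity S: Cooper-Jacob approximation of the Theis solution."""
    u = r * r * S / (4 * T * t)
    if u > 0.01:
        raise ValueError("approximation requires u << 1 (late time / small r)")
    well_fn = -0.5772 - math.log(u)        # leading terms of the well function W(u)
    return Q / (4 * math.pi * T) * well_fn

# Illustrative values: 50 L/s injected for roughly one year, observed 100 m away.
dh = pressure_buildup(Q=0.05, T=1e-3, S=1e-4, r=100.0, t=3.15e7)
```

The buildup decays logarithmically with distance, which is why the lateral boundary settings (open vs. confined) matter so much for how far and how long the pressure perturbation drives flow through the faults.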
This study examines the course and driving forces of recent vegetation change in the Mongolian steppe. A sediment core covering the last 55 years from a small closed-basin lake in central Mongolia was analyzed for its multi-proxy record at annual resolution. Pollen analysis shows that the highest abundances of planted Poaceae and the highest vegetation diversity occurred during 1977-1992, reflecting agricultural development in the lake area. A decrease in diversity and an increase in Artemisia abundance after 1992 indicate enhanced vegetation degradation in recent times, most probably because of overgrazing and farmland abandonment. Human impact is the main factor for the vegetation degradation within the past decades, as revealed by a series of redundancy analyses, while climate change and soil erosion play subordinate roles. High Pediastrum (a green alga) influx, high atomic total organic carbon/total nitrogen (TOC/TN) ratios, abundant coarse detrital grains, and the decrease of C-13(org) and N-15 since about 1977, but particularly after 1992, indicate that abundant terrestrial organic matter and nutrients were transported into the lake and caused lake eutrophication, presumably because of intensified land use. Thus, we infer that the transition to a market economy in Mongolia since the early 1990s not only caused dramatic vegetation degradation but also affected the lake ecosystem through anthropogenic changes in the catchment area.
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is however well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulations, which require extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
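As a minimal illustration of the stochastic simulations used to confirm such results, the sketch below estimates the Fano factor of a birth-death (mRNA production and first-order degradation) process with a Gillespie simulation. For this purely first-order network the LNA is exact and the Fano factor is 1; the deviations the abstract describes arise only once bimolecular reactions enter. The function name and rate values are illustrative, and this is not the iNA implementation:

```python
import random

def ssa_birth_death(k, g, t_end, seed=1):
    """Gillespie (SSA) simulation of mRNA birth (rate k) and first-order
    degradation (rate g*n); returns time-weighted mean and variance."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    s0 = s1 = s2 = 0.0                     # total time, integral of n, of n^2
    while t < t_end:
        a_birth, a_total = k, k + g * n    # reaction propensities
        dt = min(rng.expovariate(a_total), t_end - t)
        s0 += dt; s1 += n * dt; s2 += n * n * dt
        t += dt
        if t < t_end:                      # fire one reaction
            n += 1 if rng.random() * a_total < a_birth else -1
    mean = s1 / s0
    return mean, s2 / s0 - mean * mean

mean, var = ssa_birth_death(k=50.0, g=1.0, t_end=500.0)
fano = var / mean      # close to the LNA value of 1 for this linear network
```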
Water deficit (drought stress) massively restricts plant growth and the yield of crops; reducing the deleterious effects of drought is therefore of high agricultural relevance. Drought triggers diverse cellular processes including, among others, the inhibition of photosynthesis, the accumulation of cell‐damaging reactive oxygen species and gene expression reprogramming. Transcription factors (TF) are central regulators of transcriptional reprogramming, and expression of many TF genes is affected by drought, including members of the NAC family. Here, we identify the NAC factor JUNGBRUNNEN1 (JUB1) as a regulator of drought tolerance in tomato (Solanum lycopersicum). Expression of tomato JUB1 (SlJUB1) is enhanced by various abiotic stresses, including drought. Inhibiting SlJUB1 by virus‐induced gene silencing drastically lowers drought tolerance concomitant with an increase in ion leakage, an elevation of hydrogen peroxide (H2O2) levels and a decrease in the expression of various drought‐responsive genes. In contrast, overexpression of AtJUB1 from Arabidopsis thaliana increases drought tolerance in tomato, along with a higher relative leaf water content during drought and reduced H2O2 levels. AtJUB1 was previously shown to stimulate expression of DREB2A, a TF involved in drought responses, and of the DELLA genes GAI and RGL1. We show here that SlJUB1 similarly controls the expression of the tomato orthologs SlDREB1, SlDREB2 and SlDELLA. Furthermore, AtJUB1 directly binds to the promoters of SlDREB1, SlDREB2 and SlDELLA in tomato. Our study highlights JUB1 as a transcriptional regulator of drought tolerance and suggests considerable conservation of the abiotic stress‐related gene regulatory networks controlled by this NAC factor between Arabidopsis and tomato.
Flood damage has increased significantly and is expected to rise further in many parts of the world. For assessing potential changes in flood risk, this paper presents an integrated model chain quantifying flood hazards and losses while considering climate and land use changes. In the case study region, risk estimates for the present and the near future illustrate that changes in flood risk by 2030 are relatively low compared to historic periods. While the impact of climate change on the flood hazard and risk by 2030 is slight or negligible, strong urbanisation associated with economic growth contributes to a remarkable increase in flood risk. Therefore, it is recommended to frequently consider land use scenarios and economic developments when assessing future flood risks. Further, an adapted and sustainable risk management is necessary to counter rising flood losses, in which non-structural measures are becoming more and more important. The case study demonstrates that adaptation by non-structural measures such as stricter land use regulations or enhancement of private precaution is capable of reducing flood risk by around 30 %. Ignoring flood risks, in contrast, always leads to further increasing losses - with our assumptions, by 17 %. These findings underline that private precaution and land use regulation could be taken into account as low-cost adaptation strategies to global climate change in many flood-prone areas. Since such measures reduce flood risk regardless of climate or land use changes, they can also be recommended as no-regret measures.
Introduction
The transition from cross-fertilisation (outcrossing) to self-fertilisation (selfing) frequently coincides with changes towards a floral morphology that optimises self-pollination, the selfing syndrome. Population genetic studies have reported the existence of both outcrossing and selfing populations in Arabis alpina (Brassicaceae), which is an emerging model species for studying the molecular basis of perenniality and local adaptation. It is unknown whether its selfing populations have evolved a selfing syndrome.
Methods
Using macro-photography, microscopy and automated cell counting, we compared floral syndromes (size, herkogamy, pollen and ovule numbers) between three outcrossing populations from the Apuan Alps and three selfing populations from the Western and Central Alps (Maritime Alps and Dolomites). In addition, we genotyped the plants for 12 microsatellite loci to confirm previous measures of diversity and inbreeding coefficients based on allozymes, and performed Bayesian clustering.
Results and Discussion
Plants from the three selfing populations had markedly smaller flowers, less herkogamy and lower pollen production than plants from the three outcrossing populations, whereas pistil length and ovule number have remained constant. Compared to allozymes, microsatellite variation was higher, but revealed similar patterns of low diversity and high Fis in selfing populations. Bayesian clustering revealed two clusters. The first cluster contained the three outcrossing populations from the Apuan Alps, the second contained the three selfing populations from the Maritime Alps and Dolomites.
Conclusion
We conclude that in comparison to three outcrossing populations, three populations with high selfing rates are characterised by a flower morphology that is closer to the selfing syndrome. The presence of outcrossing and selfing floral syndromes within a single species will facilitate unravelling the genetic basis of the selfing syndrome, and addressing which selective forces drive its evolution.
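The inbreeding coefficient Fis reported for the selfing populations can be estimated per microsatellite locus as 1 − Hobs/Hexp. The sketch below uses this simple estimator with made-up genotype data; unbiased estimators (e.g. Weir and Cockerham's) differ in the details:

```python
def inbreeding_coefficient(genotypes):
    """Per-locus F_IS = 1 - H_obs / H_exp for diploid genotypes given as
    (allele, allele) tuples. H_exp is the simple expected heterozygosity;
    bias-corrected estimators differ slightly."""
    n = len(genotypes)
    h_obs = sum(a != b for a, b in genotypes) / n          # observed heterozygosity
    counts = {}
    for a, b in genotypes:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    h_exp = 1 - sum((c / (2 * n)) ** 2 for c in counts.values())
    return 1 - h_obs / h_exp

# An all-homozygote (selfing-like) sample yields F_IS = 1.
selfing_like = [(1, 1), (1, 1), (2, 2), (2, 2), (1, 1), (2, 2)]
```

High Fis values like those found in the Maritime Alps and Dolomites populations thus reflect a deficit of heterozygotes relative to Hardy-Weinberg expectations.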
Although hydrologic models provide hypothesis testing of complex dynamics occurring at catchments, freshwater quality modeling is still incipient at many subtropical headwaters. In Brazil, few modeling studies assess freshwater nutrients, limiting policies on hydrologic ecosystem services. This paper aims to compare freshwater quality scenarios under different land-use and land-cover (LULC) changes, one of them related to ecosystem-based adaptation (EbA), in Brazilian headwaters. Using the spatially semi-distributed Soil and Water Assessment Tool (SWAT) model, nitrate, total phosphorous (TP) and sediment were modeled in catchments ranging from 7.2 to 1037 km(2). These headwaters were eligible areas for the Brazilian payment for ecosystem services (PES) projects in the Cantareira water supply system, which has supplied water to 9 million people in the Sao Paulo metropolitan region (SPMR). We considered SWAT modeling of three LULC scenarios: (i) recent past scenario (S1), with historical LULC in 1990; (ii) current land-use scenario (S2), with LULC for the period 2010-2015 with field validation; and (iii) future land-use scenario with PES (S2 + EbA). This latter scenario proposed forest cover restoration through EbA following the river basin plan by 2035. These three LULC scenarios were tested with a selected record of rainfall and evapotranspiration observed in 2006-2014, a period that included extreme droughts. To assess hydrologic services, we proposed the hydrologic service index (HSI), a new composite metric comparing water pollution levels (WPL) for reference catchments, related to the grey water footprint (greyWF) and water yield. On the one hand, water quality simulations allowed for the regionalization of greyWF at spatial scales under LULC scenarios. According to the critical threshold, HSI identified areas as less or more sustainable catchments. 
On the other hand, conservation practices simulated through the S2 + EbA scenario envisaged not only additional and viable best management practices (BMP), but also preventive decision-making at the headwaters of water supply systems.
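The grey water footprint and water pollution level underlying the proposed HSI follow standard definitions: greyWF = L/(c_max − c_nat) and WPL = greyWF/runoff. The sketch below implements these with illustrative numbers, not values from the study, and does not reproduce the authors' composite HSI itself:

```python
def grey_water_footprint(load, c_max, c_nat):
    """Freshwater volume (m^3/yr) required to assimilate a pollutant load
    (kg/yr), given the ambient quality standard c_max and the natural
    background concentration c_nat (both in kg/m^3)."""
    return load / (c_max - c_nat)

def water_pollution_level(grey_wf, runoff):
    """WPL = greyWF / runoff; above 1, the catchment's runoff can no longer
    dilute the load to the standard (an unsustainable catchment)."""
    return grey_wf / runoff

# Illustrative, not from the study: 1200 kg/yr of total phosphorus against a
# 0.03 mg/L standard and 0.01 mg/L background, with 5e7 m^3/yr of runoff.
gwf = grey_water_footprint(1200.0, 0.03e-3, 0.01e-3)
wpl = water_pollution_level(gwf, 5e7)
```

Comparing WPL against a critical threshold is what lets such an index flag catchments as more or less sustainable.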
Information on extreme precipitation for future climate is needed to assess the changes in the frequency and intensity of flooding. The primary source of information in climate change impact studies is climate model projections. However, due to the coarse resolution and biases of these models, they cannot be directly used in hydrological models. Hence, statistical downscaling is necessary to address climate change impacts at the catchment scale.
This study compares eight statistical downscaling methods (SDMs) often used in climate change impact studies. Four methods are based on change factors (CFs), three are bias correction (BC) methods, and one is a perfect prognosis method. The eight methods are used to downscale precipitation output from 15 regional climate models (RCMs) from the ENSEMBLES project for 11 catchments in Europe. The overall results point to an increase in extreme precipitation in most catchments in both winter and summer. For individual catchments, the downscaled time series tend to agree on the direction of the change but differ in the magnitude. Differences between the SDMs vary between the catchments and depend on the season analysed. Similarly, general conclusions cannot be drawn regarding the differences between CFs and BC methods. The performance of the BC methods during the control period also depends on the catchment, but in most cases they represent an improvement compared to RCM outputs. Analysis of the variance in the ensemble of RCMs and SDMs indicates that at least 30% and up to approximately half of the total variance is derived from the SDMs. This study illustrates the large variability in the expected changes in extreme precipitation and highlights the need for considering an ensemble of both SDMs and climate models. Recommendations are provided for the selection of the most suitable SDMs to include in the analysis.
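Two of the SDM families compared above can be sketched in their simplest form: a multiplicative change factor applied to observations, and linear-scaling bias correction applied to RCM output. Both sketches use catchment-mean ratios only; the quantile-based variants used in such studies apply a separate factor per quantile, and the function names are illustrative:

```python
def change_factor_downscale(obs, rcm_control, rcm_future):
    """Multiplicative (delta-change) CF method for precipitation: perturb the
    observed series by the ratio of future to control RCM means."""
    cf = (sum(rcm_future) / len(rcm_future)) / (sum(rcm_control) / len(rcm_control))
    return [p * cf for p in obs]

def linear_scaling_bc(rcm_series, obs_mean, rcm_control_mean):
    """Simplest bias correction: rescale RCM precipitation so that its
    control-period mean matches observations; quantile mapping generalises
    this correction to the whole distribution."""
    return [p * obs_mean / rcm_control_mean for p in rcm_series]
```

The structural difference is visible even at this level: CF methods keep the observed sequence and import only the climate-change signal, while BC methods keep the RCM sequence and import only the observed statistics.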
Losses due to floods have dramatically increased over the past decades, and losses of companies, comprising direct and indirect losses, have a large share of the total economic losses. Thus, there is an urgent need to gain more quantitative knowledge about flood losses, particularly losses caused by business interruption, in order to mitigate the economic loss of companies. However, business interruption caused by floods is rarely assessed because of a lack of sufficiently detailed data. A survey was undertaken to explore processes influencing business interruption, which collected information on 557 companies affected by the severe flood in June 2013 in Germany. Based on this data set, the study aims to assess the business interruption of directly affected companies by means of a Random Forests model. Variables that influence the duration and costs of business interruption were identified by the variable importance measures of Random Forests. Additionally, Random Forest-based models were developed and tested for their capacity to estimate business interruption duration and associated costs. The water level was found to be the most important variable influencing the duration of business interruption. Other important variables, relating to the estimation of business interruption duration, are the warning time, perceived danger of flood recurrence and inundation duration. In contrast, the amount of business interruption costs is strongly influenced by the size of the company, as assessed by the number of employees, emergency measures undertaken by the company and the fraction of customers within a 50 km radius. These results provide useful information and methods for companies to mitigate their losses from business interruption. However, the heterogeneity of companies is relatively high, and sector-specific analyses were not possible due to the small sample size. 
Therefore, further sector-specific analyses on the basis of more flood loss data of companies are recommended.
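Random Forests' variable importance is commonly computed by permutation: shuffle one predictor's values and measure how much the prediction error grows. The sketch below demonstrates that idea on synthetic data with a stand-in oracle model rather than an actual forest; the feature roles ("water level", "warning time") and all coefficients are made up for illustration:

```python
import random

def permutation_importance(model, X, y, feature, rng):
    """MSE increase after shuffling one feature's column across samples --
    the idea behind Random Forests' permutation variable importance."""
    def mse(rows):
        return sum((model(r) - yi) ** 2 for r, yi in zip(rows, y)) / len(y)
    col = [r[feature] for r in X]
    rng.shuffle(col)
    X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return mse(X_perm) - mse(X)

rng = random.Random(0)
# Made-up data: feature 0 ("water level") drives the target strongly,
# feature 1 ("warning time") only weakly.
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]
y = [3 * x0 + 0.5 * x1 + rng.gauss(0, 0.1) for x0, x1 in X]
model = lambda r: 3 * r[0] + 0.5 * r[1]   # oracle stand-in for a fitted forest
imp_water = permutation_importance(model, X, y, 0, rng)
imp_warning = permutation_importance(model, X, y, 1, rng)
```

Ranking features by this score is how the study identifies water level as the dominant driver of business interruption duration.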
We present 3D zero-beta ideal MHD simulations of the solar flare/CME event that occurred in Active Region 11060 on 2010 April 8. The initial magnetic configurations of the two simulations are stable nonlinear force-free field and unstable magnetic field models constructed by Su et al. (2011) using the flux rope insertion method. The MHD simulations confirm that the stable model relaxes to a stable equilibrium, while the unstable model erupts as a CME. Comparisons between observations and MHD simulations of the CME are also presented.
Plasma carotenoids, tocopherols, and retinol in the age-stratified (35–74 years) general population
(2016)
Blood micronutrient status may change with age. We analyzed plasma carotenoids, α-/γ-tocopherol, and retinol and their associations with age, demographic characteristics, and dietary habits (assessed by a short food frequency questionnaire) in a cross-sectional study of 2118 women and men (age-stratified from 35 to 74 years) of the general population from six European countries. Higher age was associated with lower lycopene and α-/β-carotene and higher β-cryptoxanthin, lutein, zeaxanthin, α-/γ-tocopherol, and retinol levels. Significant correlations with age were observed for lycopene (r = −0.248), α-tocopherol (r = 0.208), α-carotene (r = −0.112), and β-cryptoxanthin (r = 0.125; all p < 0.001). Age was inversely associated with lycopene (−6.5% per five-year age increase) and this association remained in the multiple regression model with the significant predictors (covariables) being country, season, cholesterol, gender, smoking status, body mass index (BMI (kg/m2)), and dietary habits. The positive association of α-tocopherol with age remained when all covariates including cholesterol and use of vitamin supplements were included (1.7% vs. 2.4% per five-year age increase). The association of higher β-cryptoxanthin with higher age was no longer statistically significant after adjustment for fruit consumption, whereas the inverse association of α-carotene with age remained in the fully adjusted multivariable model (−4.8% vs. −3.8% per five-year age increase). We conclude from our study that age is an independent predictor of plasma lycopene, α-tocopherol, and α-carotene.
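The percent changes per five-year age increase quoted above are the usual back-transformation of a slope from a regression on log-transformed concentrations. Assuming such a log-linear model (the paper's exact specification may differ), the conversion is:

```python
import math

def percent_change_per_5yr(beta_per_year):
    """Back-transform a regression slope on ln(concentration) per year of
    age into the percent change per five-year age increase."""
    return (math.exp(5 * beta_per_year) - 1) * 100

# A slope of ln(0.935)/5 per year corresponds to -6.5 % per five years,
# the magnitude reported for lycopene.
pct = percent_change_per_5yr(math.log(0.935) / 5)
```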
The organic-carbon (OC) pool accumulated in Arctic permafrost (perennially frozen ground) equals the carbon stored in the modern atmosphere. To give an idea of how Yedoma region permafrost could respond under future climatic warming, we conducted a study to quantify the organic-matter quality (here defined as the intrinsic potential to be further transformed, decomposed, and mineralized) of late Pleistocene (Yedoma) and Holocene (thermokarst) deposits on the Buor-Khaya Peninsula, northeast Siberia. The objective of this study was to develop a stratigraphic classified organic-matter quality characterization. For this purpose the degree of organic-matter decomposition was estimated by using a multiproxy approach. We applied sedimentological (grain-size analyses, bulk density, ice content) and geochemical parameters (total OC, stable carbon isotopes (delta C-13), total organic carbon : nitrogen (C / N) ratios) as well as lipid biomarkers (n-alkanes, n-fatty acids, hopanes, triterpenoids, and biomarker indices, i.e., average chain length, carbon preference index (CPI), and higher-plant fatty-acid index (HPFA)). Our results show that the Yedoma and thermokarst organic-matter qualities for further decomposition exhibit no obvious degradation-depth trend. Relatively, the C / N and delta C-13 values and the HPFA index show a significantly better preservation of the organic matter stored in thermokarst deposits compared to Yedoma deposits. The CPI data suggest less degradation of the organic matter from both deposits, with a higher value for Yedoma organic matter. As the interquartile ranges of the proxies mostly overlap, we interpret this as indicating comparable quality for further decomposition for both kinds of deposits with likely better thermokarst organic-matter quality. Supported by principal component analyses, the sediment parameters and quality proxies of Yedoma and thermokarst deposits could not be unambiguously separated from each other. 
This revealed that the organic-matter vulnerability is heterogeneous and depends on different decomposition trajectories and the previous decomposition and preservation history. Elucidating this was one of the major new contributions of our multiproxy study. With the addition of biomarker data, it was possible to show that permafrost organic-matter degradation likely occurs via a combination of (uncompleted) degradation cycles or a cascade of degradation steps rather than as a linear function of age or sediment facies. We conclude that the amount of organic matter in the studied sediments is high for mineral soils and of good quality and therefore susceptible to future decomposition. The lack of depth trends shows that permafrost acts like a giant freezer, preserving the constant quality of ancient organic matter. When undecomposed Yedoma organic matter is mobilized via thermokarst processes, the fate of this carbon depends largely on the environmental conditions; the carbon could be preserved in an undecomposed state till refreezing occurs. If modern input has occurred, thermokarst organic matter could be of a better quality for future microbial decomposition than that found in Yedoma deposits.
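The carbon preference index used above is conventionally computed from odd- and even-numbered n-alkane abundances. The sketch below implements one common variant (averaging the odd-carbon sum against the lower and upper even homologue windows); the carbon-number window and formula variant are assumptions, as several CPI definitions exist:

```python
def carbon_preference_index(abundance, lo=25, hi=33):
    """CPI over odd n-alkane carbon numbers lo..hi:
    CPI = 0.5 * (sum_odd / sum_even_below + sum_odd / sum_even_above).
    Values well above 1 indicate little-degraded higher-plant waxes."""
    odd = sum(abundance.get(c, 0.0) for c in range(lo, hi + 1, 2))
    even_below = sum(abundance.get(c, 0.0) for c in range(lo - 1, hi, 2))
    even_above = sum(abundance.get(c, 0.0) for c in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_below + odd / even_above)

# Strong odd-over-even dominance (fresh plant material) gives a high CPI;
# degradation pushes CPI towards 1.
fresh = {c: (10.0 if c % 2 else 1.0) for c in range(24, 35)}
```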
We study pattern-forming instabilities in reaction-advection-diffusion systems. We develop an approach based on Lyapunov-Bloch exponents to figure out the impact of a spatially periodic mixing flow on the stability of a spatially homogeneous state. We deal with the flows periodic in space that may have arbitrary time dependence. We propose a discrete in time model, where reaction, advection, and diffusion act as successive operators, and show that a mixing advection can lead to a pattern-forming instability in a two-component system where only one of the species is advected. Physically, this can be explained as crossing a threshold of Turing instability due to effective increase of one of the diffusion constants.
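The discrete-time model described above, with reaction, advection, and diffusion acting as successive operators, can be sketched in one dimension. The periodic shift standing in for the mixing flow, the explicit Laplacian, and all parameters are illustrative simplifications of the paper's setup:

```python
def step(u, v, Du, Dv, reaction, shift):
    """One step of the discrete-time model: reaction, advection and diffusion
    act as successive operators on a 1D periodic domain. Only species u is
    advected, modelled here by a simple periodic shift of the mixing flow."""
    n = len(u)
    u, v = map(list, zip(*(reaction(ui, vi) for ui, vi in zip(u, v))))
    u = [u[(i - shift) % n] for i in range(n)]          # advection of u only
    lap = lambda w, i: w[(i - 1) % n] - 2 * w[i] + w[(i + 1) % n]
    u = [u[i] + Du * lap(u, i) for i in range(n)]       # explicit diffusion
    v = [v[i] + Dv * lap(v, i) for i in range(n)]
    return u, v

# With the identity reaction, one step transports and spreads a point of u
# while conserving its total mass.
u0, v0 = [0.0, 0.0, 1.0, 0.0, 0.0], [0.0] * 5
u1, v1 = step(u0, v0, Du=0.2, Dv=0.1, reaction=lambda a, b: (a, b), shift=1)
```

Linearising such a map about the homogeneous state and tracking the growth of spatially periodic perturbations is what the Lyapunov-Bloch exponent analysis in the paper formalises.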
Low Earth orbiting geomagnetic satellite missions, such as the Swarm satellite mission, are the only means to monitor and investigate ionospheric currents on a global scale and to make in situ measurements of F region currents. High-precision geomagnetic satellite missions are also able to detect ionospheric currents during quiet-time geomagnetic conditions that only have few nanotesla amplitudes in the magnetic field. An efficient method to isolate the ionospheric signals from satellite magnetic field measurements has been the use of residuals between the observations and predictions from empirical geomagnetic models for other geomagnetic sources, such as the core and lithospheric field or signals from the quiet-time magnetospheric currents. This study aims at highlighting the importance of high-resolution magnetic field models that are able to predict the lithospheric field and that consider the quiet-time magnetosphere for reliably isolating signatures from ionospheric currents during geomagnetically quiet times. The effects on the detection of ionospheric currents arising from neglecting the lithospheric and magnetospheric sources are discussed on the example of four Swarm orbits during very quiet times. The respective orbits show a broad range of typical scenarios, such as strong and weak ionospheric signal (during day- and nighttime, respectively) superimposed over strong and weak lithospheric signals. If predictions from the lithosphere or magnetosphere are not properly considered, the amplitude of the ionospheric currents, such as the midlatitude Sq currents or the equatorial electrojet (EEJ), is modulated by 10–15 % in the examples shown. An analysis of several orbits above the African sector, where the lithospheric field is significant, showed that the peak value of the signatures of the EEJ is in error by 5 % on average when lithospheric contributions are not considered, which is in the range of uncertainties of present empirical models of the EEJ.
Primary progressive multiple sclerosis (PPMS) shows a highly variable disease progression with poor prognosis and a characteristic accumulation of disabilities in patients. These hallmarks of PPMS make it difficult to diagnose and currently impossible to efficiently treat. This study aimed to identify plasma metabolite profiles that allow diagnosis of PPMS and its differentiation from the relapsing remitting subtype (RRMS), primary neurodegenerative disease (Parkinson’s disease, PD), and healthy controls (HCs) and that significantly change during the disease course and could serve as surrogate markers of multiple sclerosis (MS)-associated neurodegeneration over time. We applied untargeted high-resolution metabolomics to plasma samples to identify PPMS-specific signatures, validated our findings in independent sex- and age-matched PPMS and HC cohorts and built discriminatory models by partial least square discriminant analysis (PLS-DA). This signature was compared to sex- and age-matched RRMS patients, to patients with PD and HC. Finally, we investigated these metabolites in a longitudinal cohort of PPMS patients over a 24-month period. PLS-DA yielded predictive models for classification along with a set of 20 PPMS-specific informative metabolite markers. These metabolites suggest disease-specific alterations in glycerophospholipid and linoleic acid pathways. Notably, the glycerophospholipid LysoPC(20:0) significantly decreased during the observation period. These findings show potential for diagnosis and disease course monitoring, and might serve as biomarkers to assess treatment efficacy in future clinical trials for neuroprotective MS therapies.
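PLS-DA builds latent components that covary maximally with class membership. A minimal one-component sketch (the core of the method, not the multi-component models used in the study) on made-up two-feature data looks like this:

```python
def plsda_one_component(X, y):
    """One-component PLS-DA sketch: classes coded +1/-1, first PLS weight
    vector w proportional to X^T y (after centering); a sample is assigned
    to the class whose mean score its own score t = x_c . w is closest to.
    Real PLS-DA extracts several components; this shows the core idea only."""
    n, p = len(X), len(X[0])
    mx = [sum(row[j] for row in X) / n for j in range(p)]
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(p)] for row in X]
    yc = [yi - my for yi in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]
    score = lambda x: sum((xj - mj) * wj for xj, mj, wj in zip(x, mx, w))
    t = [score(row) for row in X]
    mu_pos = sum(ti for ti, yi in zip(t, y) if yi > 0) / sum(yi > 0 for yi in y)
    mu_neg = sum(ti for ti, yi in zip(t, y) if yi < 0) / sum(yi < 0 for yi in y)
    return lambda x: 1 if abs(score(x) - mu_pos) <= abs(score(x) - mu_neg) else -1

# Made-up "metabolite" measurements for two groups of samples.
X = [[1.0, 0.2], [1.2, -0.1], [0.9, 0.0], [-1.0, 0.1], [-1.1, 0.3], [-0.9, -0.2]]
y = [1, 1, 1, -1, -1, -1]
predict = plsda_one_component(X, y)
```

In practice the informative markers are read off from the weight (loading) vectors of the fitted components, which is how panels such as the 20 PPMS-specific metabolites above are selected.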
TerraSAR-X time series fill a gap in spaceborne snowmelt monitoring of small Arctic catchments
(2018)
The timing of snowmelt is an important turning point in the seasonal cycle of small Arctic catchments. The TerraSAR-X (TSX) satellite mission is a synthetic aperture radar (SAR) system with high potential to measure the high spatiotemporal variability of snow cover extent (SCE) and fractional snow cover (FSC) on the small catchment scale. We investigate the performance of multi-polarized and multi-pass TSX X-Band SAR data in monitoring SCE and FSC in small Arctic tundra catchments of Qikiqtaruk (Herschel Island) off the Yukon Coast in the Western Canadian Arctic. We applied a threshold-based segmentation on ratio images between TSX images with wet snow and a dry snow reference, and tested the performance of two different thresholds. We quantitatively compared TSX- and Landsat 8-derived SCE maps using confusion matrices and analyzed the spatiotemporal dynamics of snowmelt from 2015 to 2017 using TSX, Landsat 8 and in situ time-lapse data. Our data showed that the quality of SCE maps from TSX X-Band data is strongly influenced by polarization and to a lesser degree by incidence angle. VH-polarized TSX data performed best in deriving SCE when compared to Landsat 8. TSX-derived SCE maps from VH polarization detected late-lying snow patches that were not detected by Landsat 8. Results of a local assessment of TSX FSC against the in situ data showed that TSX FSC accurately captured the temporal dynamics of different snowmelt regimes that were related to topographic characteristics of the studied catchments. Both in situ and TSX FSC showed a longer snowmelt period in a catchment with higher contributions of steep valleys and a shorter snowmelt period in a catchment with higher contributions of upland terrain. Landsat 8 had fundamental data gaps during the snowmelt period in all 3 years due to cloud cover.
The results also revealed that choosing a positive threshold of 1 dB, which detects ice layers formed by diurnal temperature variations, resulted in a more accurate estimation of snow cover than a negative threshold that detects wet snow alone. We find that TSX X-Band data in VH polarization performs at a quality comparable to Landsat 8 in deriving SCE maps when a positive threshold is used. We conclude that VH-polarized TSX data can be used to accurately monitor snowmelt events at high temporal and spatial resolution, overcoming limitations of Landsat 8, which, due to cloud-related data gaps, generally only indicated the onset and end of snowmelt.
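The core of the threshold-based segmentation can be sketched numerically: in dB, the ratio of a wet-snow scene to a dry-snow reference is a simple difference, which is then thresholded to classify snow-covered pixels. The toy scene below is an assumption for illustration (synthetic backscatter values, a hypothetical −2 dB wet-snow threshold), not the TSX processing chain.

```python
# Sketch of ratio-image thresholding for snow cover mapping
# (assumption: synthetic toy data, not the actual TSX workflow).
# Wet snow lowers X-band backscatter relative to a dry-snow reference, so the
# ratio image (scene / reference, expressed in dB) can be thresholded.
import numpy as np

rng = np.random.default_rng(1)
ref_db = rng.normal(-12.0, 1.0, size=(100, 100))       # dry-snow reference scene (dB)
scene_db = ref_db.copy()
scene_db[:50, :] -= 4.0                                # wet snow: ~4 dB backscatter drop
scene_db += rng.normal(0, 0.5, size=scene_db.shape)    # speckle-like noise

ratio_db = scene_db - ref_db                           # ratio in dB = difference of logs
wet_snow = ratio_db < -2.0                             # negative threshold: wet snow only

fsc = wet_snow.mean()                                  # fractional snow cover of the scene
print(f"fractional snow cover: {fsc:.2f}")
```

A confusion matrix against an optical reference map (here Landsat 8) would then quantify commission and omission errors of the classified mask.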
In this paper, I review observational evidence from spectroscopy and polarimetry for the presence of small- and large-scale structure in the winds of Wolf-Rayet (WR) stars. Clumping is known to be ubiquitous in the winds of these stars, and many of its characteristics can be deduced from spectroscopic time-series and polarisation lightcurves. Conversely, a much smaller fraction of WR stars (∼1/5) have been shown to harbour larger-scale structures in their winds, while such structures are thought to be present in the winds of most of their O-star ancestors. The reason for this difference is still unknown.
An efficient electrocatalytic biosensor for sulfite detection was developed by co-immobilizing sulfite oxidase and cytochrome c with polyaniline sulfonic acid in a layer-by-layer assembly. QCM, UV-Vis spectroscopy and cyclic voltammetry revealed increasing loading of electrochemically active protein with the formation of multilayers. The sensor operates reagentlessly at a low working potential. A catalytic oxidation current was detected in the presence of sulfite at the modified gold electrode, polarized at +0.1 V (vs. Ag/AgCl, 1 M KCl). The stability of the biosensor performance was characterized and optimized. A 17-bilayer electrode has a linear range between 1 and 60 µM sulfite with a sensitivity of 2.19 mA M⁻¹ sulfite and a response time of 2 min. The electrode retained a stable response for 3 days with a serial reproducibility of 3.8% and lost 20% of its sensitivity after 5 days of operation. It is possible to store the sensor in a dry state for more than 2 months. The multilayer electrode was used for the determination of sulfite in unspiked and spiked samples of red and white wine. The recovery and the specificity of the signals were evaluated for each sample.
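Given the reported sensitivity of 2.19 mA M⁻¹, converting a measured catalytic current back into a sulfite concentration is a one-line inversion of the linear calibration, valid within the 1–60 µM linear range. The current value below is illustrative, not measured sensor data.

```python
# Back-of-envelope calibration sketch using the reported sensitivity
# (2.19 mA per mol/L). The measured current used below is a toy number.
SENSITIVITY_A_PER_M = 2.19e-3   # 2.19 mA M^-1 expressed in A per mol/L

def sulfite_concentration_uM(current_A: float) -> float:
    """Invert the linear calibration I = S * c; valid within 1-60 uM."""
    return current_A / SENSITIVITY_A_PER_M * 1e6

# A catalytic current of ~131.4 nA corresponds to the top of the linear range.
print(round(sulfite_concentration_uM(131.4e-9), 1))  # → 60.0
```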
Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker‐assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT‐PCR and GC‐MS profiling, respectively. Transcript marker candidates were selected from a published RNA‐Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions.
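The marker-selection idea described above — fit a Random Forest, estimate the prediction error, then rank features by importance to obtain a minimal marker set — can be sketched on synthetic data. Sample size, feature count and the injected "tolerance" signal below are assumptions for illustration, not the published potato trial data.

```python
# Sketch of Random Forest marker selection (assumption: synthetic data,
# not the potato transcript/metabolite data set from the study).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Hypothetical: 60 cultivar samples x 200 transcript/metabolite features
X = rng.normal(size=(60, 200))
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # "tolerance" driven by 3 features

rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"out-of-bag error: {1 - rf.oob_score_:.2f}")

# Minimal marker set: keep the top-ranked features by impurity importance
ranking = np.argsort(rf.feature_importances_)[::-1]
markers = ranking[:20]
print("informative features recovered:",
      len(set(range(3)) & set(markers.tolist())))
```

In the study the analogous minimal set of 20 combined markers was then re-tested in independent field trials; the out-of-bag error here plays the role of an internal error estimate only.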
About a quarter of anthropogenic CO2 emissions are currently taken up by the oceans, decreasing seawater pH. We performed a mesocosm experiment in the Baltic Sea in order to investigate the consequences of increasing CO2 levels on pelagic carbon fluxes. A gradient of different CO2 scenarios, ranging from ambient (∼370 µatm) to high (∼1200 µatm), was set up in mesocosm bags (∼55 m³). We determined standing stocks and temporal changes of total particulate carbon (TPC), dissolved organic carbon (DOC), dissolved inorganic carbon (DIC), and particulate organic carbon (POC) of specific plankton groups. We also measured carbon flux via CO2 exchange with the atmosphere and sedimentation (export), and biological rate measurements of primary production, bacterial production, and total respiration. The experiment lasted for 44 days and was divided into three different phases (I: t0–t16; II: t17–t30; III: t31–t43). Pools of TPC, DOC, and DIC were approximately 420, 7200, and 25 200 mmol C m⁻² at the start of the experiment, and the initial CO2 additions increased the DIC pool by ∼7% in the highest CO2 treatment. Overall, there was a decrease in TPC and an increase in DOC over the course of the experiment. The decrease in TPC was lower, and the increase in DOC higher, in treatments with added CO2. During phase I the estimated gross primary production (GPP) was ∼100 mmol C m⁻² day⁻¹, of which 75–95% was respired, ∼1% ended up in the TPC pool (including export), and 5–25% was added to the DOC pool. During phase II, the respiration loss increased to ∼100% of GPP at the ambient CO2 concentration, whereas respiration was lower (85–95% of GPP) in the highest CO2 treatment. Bacterial production was, on average, ∼30% lower at the highest CO2 concentration than in the controls during phases II and III.
This resulted in a higher accumulation of DOC and a lower reduction in the TPC pool in the elevated CO2 treatments at the end of phase II, extending throughout phase III. The "extra" organic carbon at high CO2 remained fixed in an increasing biomass of small-sized plankton and in the DOC pool, and did not transfer into large, sinking aggregates. Our results revealed a clear effect of increasing CO2 on the carbon budget and mineralization, in particular under nutrient-limited conditions. Lower carbon losses (respiration and bacterial remineralization) at elevated CO2 levels resulted in higher TPC and DOC pools than at the ambient CO2 concentration. These results highlight the importance of addressing not only net changes in carbon standing stocks but also carbon fluxes and budgets to better disentangle the effects of ocean acidification.
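The phase I budget arithmetic is straightforward: fixed fractions of GPP go to respiration and TPC, and the remainder accumulates as DOC. The mid-range fractions used below are assumptions chosen within the ranges reported in the abstract (75–95% respired, ∼1% to TPC).

```python
# Toy carbon-budget partitioning using the phase I numbers from the abstract
# (GPP ~100 mmol C m^-2 day^-1; the chosen fractions are illustrative
# mid-range assumptions, not measured values).
GPP = 100.0                 # mmol C m^-2 day^-1

def partition(respired_frac, tpc_frac):
    """Split GPP into respiration, TPC (incl. export) and the DOC residual."""
    respiration = GPP * respired_frac
    tpc = GPP * tpc_frac
    doc = GPP - respiration - tpc   # remainder accumulates as DOC
    return respiration, tpc, doc

resp, tpc, doc = partition(0.85, 0.01)       # mid-range: 85% respired, 1% TPC
print(f"{resp:.1f} {tpc:.1f} {doc:.1f}")     # → 85.0 1.0 14.0
```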
Hantavirus assembly and budding are governed by the surface glycoproteins Gn and Gc. In this study, we investigated the glycoproteins of Puumala virus, the most abundant hantavirus species in Europe, using fluorescently labeled wild-type constructs and cytoplasmic tail (CT) mutants. We analyzed their intracellular distribution, co-localization and oligomerization, applying comprehensive live, single-cell fluorescence techniques, including confocal microscopy, imaging flow cytometry, anisotropy imaging and Number & Brightness analysis. We demonstrate that Gc is significantly enriched in the Golgi apparatus in the absence of other viral components, while Gn is mainly restricted to the endoplasmic reticulum (ER). Importantly, upon co-expression both glycoproteins were found in the Golgi apparatus. Furthermore, we show that an intact CT of Gc is necessary for efficient Golgi localization, while the CT of Gn influences protein stability. Finally, we found that Gn assembles into higher-order homo-oligomers, mainly dimers and tetramers, in the ER, while Gc was present as a mixture of monomers and dimers within the Golgi apparatus. Our findings suggest that PUUV Gc is the driving factor in the targeting of Gc and Gn to the Golgi region, while Gn possesses a significantly stronger self-association potential.
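For readers unfamiliar with Number & Brightness analysis: the apparent brightness B = variance/mean of an intensity trace scales with oligomer size, while the apparent number N = mean²/variance counts independent diffusers. The minimal simulation below (assumed parameters; molecule-number fluctuations only, no shot noise or detector terms) shows how dimers of the same total intensity appear twice as bright and half as numerous as monomers.

```python
# Minimal Number & Brightness (N&B) sketch (assumption: simulated intensity
# traces, not the imaging data from the study; shot noise and detector
# calibration terms are omitted for clarity).
import numpy as np

rng = np.random.default_rng(3)

def number_brightness(trace):
    m, v = trace.mean(), trace.var()
    return m * m / v, v / m        # apparent number N, apparent brightness B

eps, n_mol, frames = 4.0, 100, 50_000
# Intensity = (counts per molecule) x (Poisson-fluctuating molecule number):
monomer = eps * rng.poisson(n_mol, frames)           # 100 monomers, brightness eps
dimer = 2 * eps * rng.poisson(n_mol // 2, frames)    # 50 dimers, brightness 2*eps

N_m, B_m = number_brightness(monomer.astype(float))
N_d, B_d = number_brightness(dimer.astype(float))
print(f"monomer: N ≈ {N_m:.0f}, B ≈ {B_m:.1f}")   # ≈ 100 and 4.0
print(f"dimer:   N ≈ {N_d:.0f}, B ≈ {B_d:.1f}")   # ≈ 50 and 8.0
```

Both traces have the same mean intensity, yet N&B separates them; this is the principle by which the study distinguishes Gn oligomers from Gc monomers/dimers.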