We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early stage of seismicity evolution, even though only a small number of large earthquakes has occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and can therefore not be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield we find that the aperiodicity, or the coefficient of variation, is clearly higher than in studies which are solely based on the large earthquakes.
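The Bayesian updating of the aperiodicity described above can be illustrated with a toy grid posterior. The sketch below is an assumption-laden simplification, not the paper's renewal model: it uses a plain Brownian passage time (inverse Gaussian) likelihood for a handful of hypothetical recurrence intervals, with the mean fixed and a flat prior on the aperiodicity grid.

```python
import numpy as np

def bpt_logpdf(t, mu, alpha):
    # Log-density of the Brownian passage time (inverse Gaussian)
    # distribution with mean mu and aperiodicity (coefficient of
    # variation) alpha.
    return (0.5 * np.log(mu / (2.0 * np.pi * alpha**2 * t**3))
            - (t - mu)**2 / (2.0 * alpha**2 * mu * t))

def aperiodicity_posterior(intervals, alphas, mu):
    # Unnormalized grid posterior over alpha for a fixed mean mu,
    # assuming a flat prior on the grid; normalized before returning.
    intervals = np.asarray(intervals, dtype=float)
    loglik = np.array([bpt_logpdf(intervals, mu, a).sum() for a in alphas])
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

# hypothetical recurrence intervals (years) -- illustrative only
intervals = [18.0, 25.0, 22.0, 31.0, 20.0]
alphas = np.linspace(0.05, 1.5, 300)
post = aperiodicity_posterior(intervals, alphas, mu=np.mean(intervals))
alpha_map = alphas[np.argmax(post)]
```

As more intervals arrive, the posterior is simply recomputed over the same grid, which is the "continuous updating" idea in miniature; the real method additionally propagates magnitude uncertainties, which this sketch omits.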
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded at the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address with the long-simulated seismicity questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
Kijko et al. (2016) present various methods to estimate parameters that are relevant for probabilistic seismic-hazard assessment. One of these parameters, although not the most influential, is the maximum possible earthquake magnitude m(max). I show that the proposed estimation of m(max) is based on an erroneous equation related to a misuse of the estimator in Cooke (1979) and leads to unstable results. So far, reported finite estimations of m(max) arise from data selection, because the estimator in Kijko et al. (2016) diverges with finite probability. This finding is independent of the assumed distribution of earthquake magnitudes. For the specific choice of the doubly truncated Gutenberg-Richter distribution, I illustrate the problems by deriving explicit equations. Finally, I conclude that point estimators are generally not a suitable approach to constrain m(max).
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data allows one to quantify this bias in detail. The implicit assumption that the true M-max is close to the maximum observed magnitude M-max,observed restricts the class of potential models a priori to those with M-max = M-max,observed + ΔM with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009) and labeled "M-max equals M-obs plus an increment." The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large so far unobserved earthquake.
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well-known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling and analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes large earthquakes to occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity like the frequency-size distribution and the temporal clustering of earthquakes depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying regions in parameter space which are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity, in terms of a critical state which precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, the growth of the spatial correlation length and the acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited.
Instead of forecasting the time, location, and magnitude of individual events, contributing to a broad multiparameter approach appears more promising.
The present work deals with the characterization of seismicity on the basis of earthquake catalogs. New data-analysis methods are developed that are intended to reveal whether the seismic dynamics is governed by a stochastic or a deterministic process, and what this implies for the predictability of strong earthquakes. It is shown that seismically active regions are frequently characterized by nonlinear determinism. This at least opens up the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursor phenomenon of strong earthquakes. A new method is presented that enables a systematic spatiotemporal mapping of periods of seismic quiescence. The statistical significance is determined using the concept of surrogate data. As a result, one obtains clear correlations between periods of seismic quiescence and strong earthquakes. Nevertheless, their significance is not high enough to permit a prediction in the sense of a statement about the location, time, and magnitude of an expected mainshock.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be tied to accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and large dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates in a simple way paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
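The b value mentioned above, which the method links to the aperiodicity, is conventionally estimated from instrumental catalogs with the Aki (1965) maximum-likelihood formula. The following minimal sketch applies it to a synthetic Gutenberg-Richter catalog; all parameter values are illustrative, not taken from the study.

```python
import numpy as np

def aki_b_value(mags, m_min):
    # Maximum-likelihood (Aki, 1965) estimate of the Gutenberg-Richter
    # b value from magnitudes m >= m_min; continuous-magnitude form,
    # without a binning correction.
    mags = np.asarray(mags, dtype=float)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - m_min)

# synthetic catalog: under Gutenberg-Richter scaling with b = 1,
# magnitude excesses above m_min are exponential with rate b*ln(10)
rng = np.random.default_rng(42)
m_min, b_true = 2.0, 1.0
mags = m_min + rng.exponential(1.0 / (b_true * np.log(10)), 5000)
b_hat = aki_b_value(mags, m_min)
```

With a few thousand events above the completeness threshold the estimate is tight, which is why tying the Brownian passage time parameters to the b value can stabilize inference from short paleoearthquake sequences.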
Convergence of the frequency-magnitude distribution of global earthquakes - maybe in 200 years
(2013)
I study the ability to estimate the tail of the frequency-magnitude distribution of global earthquakes. While power-law scaling for small earthquakes is accepted by support of data, the tail remains speculative. In a recent study, Bell et al. (2013) claim that the frequency-magnitude distribution of global earthquakes converges to a tapered Pareto distribution. I show that this finding results from data fitting errors, namely from the biased maximum likelihood estimation of the corner magnitude theta in strongly undersampled models. In particular, the estimation of theta depends solely on the few largest events in the catalog. Taking this into account, I compare various state-of-the-art models for the global frequency-magnitude distribution. After discarding undersampled models, the remaining ones, including the unbounded Gutenberg-Richter distribution, all perform equally well and are, therefore, indistinguishable. Convergence to a specific distribution, if it ever takes place, requires at least about 200 years of homogeneous recording of global seismicity.
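The undersampling effect described above is easy to reproduce. A tapered Pareto survival function factors into a Pareto part and an exponential taper, so a sample can be drawn as the minimum of two independent draws. The sketch below uses illustrative parameters (not the study's catalog) and shows that in a modest catalog essentially no events probe the corner region, leaving the corner parameter effectively unconstrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tapered_pareto(n, beta, m_t, m_c):
    # Draw values from a tapered Pareto distribution with survival
    # S(m) = (m_t/m)**beta * exp((m_t - m)/m_c).  That survival is the
    # product of a Pareto survival and a shifted-exponential survival,
    # so a draw is the minimum of two independent draws.
    u = rng.random(n)
    pareto = m_t * (1.0 - u) ** (-1.0 / beta)
    expo = m_t + rng.exponential(m_c, n)
    return np.minimum(pareto, expo)

# an undersampled catalog: the corner m_c sits far above the largest
# observed values (illustrative numbers in arbitrary moment units)
beta, m_t, m_c = 0.65, 1.0, 1e6
moments = sample_tapered_pareto(1000, beta, m_t, m_c)
frac_near_corner = np.mean(moments > 0.1 * m_c)
```

Because almost no samples approach the corner, a maximum-likelihood fit of m_c on such a catalog is driven entirely by the few largest events, which is the bias mechanism the abstract points to.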
Downscaling of microfluidic cell culture and detection devices for electrochemical monitoring has mostly focused on miniaturization of the microfluidic chips, which are often designed for specific applications and therefore lack functional flexibility. We present a compact microfluidic cell culture and electrochemical analysis platform with in-built fluid handling and detection, enabling complete cell-based assays comprising on-line electrode cleaning, sterilization, surface functionalization, cell seeding, cultivation and electrochemical real-time monitoring of cellular dynamics. To demonstrate the versatility and multifunctionality of the platform, we explored amperometric monitoring of intracellular redox activity in yeast (Saccharomyces cerevisiae) and detection of exocytotically released dopamine from rat pheochromocytoma cells (PC12). Electrochemical impedance spectroscopy was used in both applications for monitoring cell sedimentation and adhesion as well as proliferation in the case of PC12 cells. The influence of flow rate on the signal amplitude in the detection of redox metabolism as well as the effect of mechanical stimulation on dopamine release were demonstrated using the programmable fluid handling capability. The platform presented here is aimed at applications utilizing cell-based assays, ranging from monitoring of drug effects in pharmacological studies, characterization of neural stem cell differentiation, and screening of genetically modified microorganisms to environmental monitoring.
The paper studies catalytic super-Brownian motion on the real line, where the branching rate is controlled by a catalyst. D. A. Dawson, K. Fleischmann and S. Roelly showed, for a broad class of catalysts, that, as for constant branching, the processes are absolutely continuous measures. This paper considers a class of catalysts, called moderate, which must satisfy a uniform boundedness condition and a condition controlling the degree of singularity---essentially that the mass of catalyst in small balls should (uniformly) be of order r^a, where a>0. The main result of this paper shows that for this class of catalysts there is a continuous density field for the process. Moreover the density is the unique solution (in law) of an appropriate SPDE.
The author considers the heat equation in dimension one with singular drift and inhomogeneous space-time white noise. In particular, the quadratic variation measure of the white noise is not required to be absolutely continuous w.r.t. the Lebesgue measure, neither in space nor in time. Under some assumptions the author gives statements on strong and weak existence as well as strong and weak uniqueness of continuous solutions.
The use of unilateral force under George W. Bush is not a new phenomenon in US foreign policy. As the author argues, it is merely a continuation of Bill Clinton’s foreign policy and is deeply rooted in both the foreign policy traditions of Jacksonianism and Wilsonianism. The analysis concludes that Clinton used unilateralist foreign policy with a 'smile' whereas the Bush administration uses it with an attitude.
Hysteresis in the pinning-depinning transitions of spiral waves rotating around a hole in a circular-shaped two-dimensional excitable medium is studied both by use of the continuation software AUTO and by direct numerical integration of the reaction-diffusion equations for the FitzHugh-Nagumo model. In order to clarify the role of different factors in this phenomenon, a kinematical description is applied. It is found that the hysteresis phenomenon computed for the reaction-diffusion model can be reproduced qualitatively only when a nonlinear eikonal equation (i.e. velocity-curvature relationship) is assumed. However, to obtain quantitative agreement, the dispersion relation has to be taken into account.
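For orientation, the FitzHugh-Nagumo kinetics referred to above can be integrated directly. The following sketch uses the textbook space-clamped parameter set in its oscillatory regime; these parameters are assumptions for illustration, not those of the paper, and the spiral-wave setting itself requires the full reaction-diffusion system rather than this point model.

```python
import numpy as np

def fitzhugh_nagumo(T=200.0, dt=0.01, I=0.5, a=0.7, b=0.8, eps=0.08):
    # Forward-Euler integration of the space-clamped FitzHugh-Nagumo
    # kinetics:
    #     dv/dt = v - v**3/3 - w + I
    #     dw/dt = eps * (v + a - b*w)
    # Textbook oscillatory parameter set (limit-cycle regime).
    n = int(T / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (v[i] - v[i]**3 / 3 - w[i] + I)
        w[i + 1] = w[i] + dt * eps * (v[i] + a - b * w[i])
    return v, w

v, w = fitzhugh_nagumo()
```

After an initial transient the fast variable v settles onto relaxation oscillations; adding diffusion in v on a 2D grid turns these local kinetics into the excitable medium that supports the rotating spiral waves studied in the paper.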
Sub-seasonal thaw slump mass wasting is not consistently energy limited at the landscape scale
(2018)
Predicting future thaw slump activity requires a sound understanding of the atmospheric drivers and geomorphic controls on mass wasting across a range of timescales. On sub-seasonal timescales, sparse measurements indicate that mass wasting at active slumps is often limited by the energy available for melting ground ice, but other factors such as rainfall or the formation of an insulating veneer may also be relevant. To study the sub-seasonal drivers, we derive topographic changes from single-pass radar interferometric data acquired by the TanDEM-X satellites. The estimated elevation changes at 12 m resolution complement the commonly observed planimetric retreat rates by providing information on volume losses. Their high vertical precision (around 30 cm), frequent observations (11 days) and large coverage (5000 km²) allow us to track mass wasting as drivers such as the available energy change during the summer of 2015 in two study regions. We find that thaw slumps in the Tuktoyaktuk coastlands, Canada, are not energy limited in June, as they undergo limited mass wasting (height loss of around 0 cm day⁻¹) despite the ample available energy, suggesting the widespread presence of early season insulating snow or debris veneer. Later in summer, height losses generally increase (around 3 cm day⁻¹), but they do so in distinct ways. For many slumps, mass wasting tracks the available energy, a temporal pattern that is also observed at coastal yedoma cliffs on the Bykovsky Peninsula, Russia. However, the other two common temporal trajectories are asynchronous with the available energy, as they track strong precipitation events or show a sudden speed-up in late August, respectively. The observed temporal patterns are poorly related to slump characteristics like the headwall height. The contrasting temporal behaviour of nearby thaw slumps highlights the importance of complex local and temporally varying controls on mass wasting.
Necrotrophic as well as saprophytic small-spored Alternaria (A.) species are annually responsible for major losses of agricultural products, such as cereal crops, associated with the contamination of food and feedstuff with potentially health-endangering Alternaria toxins. Knowledge of the metabolic capabilities of different species-groups to form mycotoxins is of importance for a reliable risk assessment. 93 Alternaria strains belonging to the four species-groups Alternaria tenuissima, A. arborescens, A. alternata, and A. infectoria were isolated from winter wheat kernels harvested from fields in Germany and Russia and incubated under equal conditions. Chemical analysis by means of an HPLC-MS/MS multi-Alternaria-toxin method showed that 95% of all strains were able to form at least one of the 17 targeted non-host-specific Alternaria toxins. Simultaneous production of up to 15 (modified) Alternaria toxins by members of the A. tenuissima, A. arborescens, and A. alternata species-groups and up to seven toxins by A. infectoria strains was demonstrated. Overall, tenuazonic acid was the most extensively formed mycotoxin, followed by alternariol and alternariol monomethyl ether, whereas altertoxin I was the most frequently detected toxin. Sulfoconjugated modifications of alternariol, alternariol monomethyl ether, altenuisol and altenuene were frequently determined. Unknown perylene quinone derivatives were additionally detected. Strains of the species-group A. infectoria could be segregated from strains of the other three species-groups due to significantly lower toxin levels and the specific production of infectopyrone. Apart from infectopyrone, alterperylenol was also frequently produced by 95% of the A. infectoria strains. Neither the concentration nor the composition of the targeted Alternaria toxins allowed a differentiation between the species-groups A. alternata, A. tenuissima and A. arborescens.
Spotlight on the underdogs
(2017)
Alternaria (A.) is a genus of widespread fungi capable of producing numerous, possibly health-endangering Alternaria toxins (ATs), which are usually not the focus of attention. The formation of ATs depends on the species and complex interactions of various environmental factors and is not fully understood. In this study the influence of temperature (7 °C, 25 °C), substrate (rice, wheat kernels) and incubation time (4, 7, and 14 days) on the production of thirteen ATs and three sulfoconjugated ATs by three different Alternaria isolates from the species groups A. tenuissima and A. infectoria was determined. High-performance liquid chromatography coupled with tandem mass spectrometry was used for quantification. Under nearly all conditions, tenuazonic acid was the most extensively produced toxin. At 25 °C and with increasing incubation time all toxins were formed in high amounts by the two A. tenuissima strains on both substrates with comparable mycotoxin profiles. However, for some of the toxins, stagnation or a decrease in production was observed from day 7 to 14. As opposed to the A. tenuissima strains, the A. infectoria strain only produced low amounts of ATs, but high concentrations of stemphyltoxin III. The results provide an essential insight into the quantitative in vitro AT formation under different environmental conditions, potentially transferable to different field and storage conditions.
We recently demonstrated that the sympathetic nervous system can be voluntarily activated following a training program consisting of cold exposure, breathing exercises, and meditation. This resulted in profound attenuation of the systemic inflammatory response elicited by lipopolysaccharide (LPS) administration. Herein, we assessed whether this training program affects the plasma metabolome and if these changes are linked to the immunomodulatory effects observed. A total of 224 metabolites were identified in plasma obtained from 24 healthy male volunteers at six timepoints, of which 98 were significantly altered following LPS administration. Effects of the training program were most prominent shortly after initiation of the acquired breathing exercises but prior to LPS administration, and point towards increased activation of the Cori cycle. Elevated concentrations of lactate and pyruvate in trained individuals correlated with enhanced levels of anti-inflammatory interleukin (IL)-10. In vitro validation experiments revealed that co-incubation with lactate and pyruvate enhances IL-10 production and attenuates the release of pro-inflammatory IL-1 beta and IL-6 by LPS-stimulated leukocytes. Our results demonstrate that practicing the breathing exercises acquired during the training program results in increased activity of the Cori cycle. Furthermore, this work uncovers an important role of lactate and pyruvate in the anti-inflammatory phenotype observed in trained subjects.
During hopping, an early burst can be observed in the EMG from the soleus muscle starting about 45 ms after touch-down. It may be speculated that this early EMG burst is a stretch reflex response superimposed on activity from a supra-spinal origin. We hypothesised that if a stretch reflex indeed contributes to the early EMG burst, then advancing or delaying the touch-down without the subject's knowledge should similarly advance or delay the burst. This was indeed the case when touch-down was advanced or delayed by shifting the height of a programmable platform up or down between two hops, and this resulted in a corresponding shift of the early EMG burst. Our second hypothesis was that the motor cortex contributes to the first EMG burst during hopping. If so, inhibition of the motor cortex would reduce the magnitude of the burst. By applying a low-intensity magnetic stimulus it was possible to inhibit the motor cortex, and this resulted in a suppression of the early EMG burst. These results suggest that sensory feedback and descending drive from the motor cortex are integrated to drive the motor neuron pool during the early EMG burst in hopping. Thus, simple reflexes work in concert with higher order structures to produce this repetitive movement.
Unlike for other retroviruses, only a few host cell factors that aid the replication of foamy viruses (FVs) via interaction with viral structural components are known. Using a yeast-two-hybrid (Y2H) screen with prototype FV (PFV) Gag protein as bait we identified human polo-like kinase 2 (hPLK2), a member of cell cycle regulatory kinases, as a new interactor of PFV capsids. Further Y2H studies confirmed interaction of PFV Gag with several PLKs of both human and rat origin. A consensus Ser-Thr/Ser-Pro (S-T/S-P) motif in Gag, which is conserved among primate FVs and phosphorylated in PFV virions, was essential for recognition by PLKs. In the case of rat PLK2, functional kinase and polo-box domains were required for interaction with PFV Gag. Fluorescently-tagged PFV Gag, through its chromatin tethering function, selectively relocalized ectopically expressed eGFP-tagged PLK proteins to mitotic chromosomes in a Gag STP motif-dependent manner, confirming a specific and dominant nature of the Gag-PLK interaction in mammalian cells. The functional relevance of the Gag-PLK interaction was examined in the context of replication-competent FVs and single-round PFV vectors. Although STP motif mutated viruses displayed wild type (wt) particle release, RNA packaging and intra-particle reverse transcription, their replication capacity was decreased 3-fold in single-cycle infections, and up to 20-fold in spreading infections over an extended time period. Strikingly similar defects were observed when cells infected with single-round wt Gag PFV vectors were treated with a pan PLK inhibitor. Analysis of entry kinetics of the mutant viruses indicated a post-fusion defect resulting in delayed and reduced integration, which was accompanied with an enhanced preference to integrate into heterochromatin. We conclude that interaction between PFV Gag and cellular PLK proteins is important for early replication steps of PFV within host cells.
We present a theoretical framework for the analysis of the statistical properties of thermal fluctuations on a lossy transmission line. A quantization scheme for the electrical signals in the transmission line is formulated. We discuss two applications in detail. Noise spectra at finite temperature for voltage and current are shown to deviate significantly from the Johnson-Nyquist limit, and they depend on the position on the transmission line. We analyze the spontaneous emission, at low temperature, of a Rydberg atom and its resonant enhancement due to vacuum fluctuations in a capacitively coupled transmission line. The theory can also be applied to study the performance of microscale and nanoscale devices, including high-resolution sensors and quantum information processors.
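For reference, the Johnson-Nyquist limit against which these spectra deviate is the standard textbook result for a resistor R; in symmetrized form the voltage noise spectral density reads (a textbook expression quoted here for context, not a formula taken from the paper):

```latex
S_V(\omega) \;=\; 2 R\,\hbar\omega \coth\!\left(\frac{\hbar\omega}{2 k_B T}\right)
\;\xrightarrow[\;\hbar\omega \ll k_B T\;]{}\; 4 k_B T R .
```

The position dependence reported in the abstract is precisely a deviation from this position-independent limit.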
We present a momentum transfer mechanism mediated by electromagnetic fields that originates in a system of two nearby molecules: one excited (donor D*) and the other in the ground state (acceptor A). An intermolecular force related to Förster resonance energy transfer (FRET) arises in the unstable D*A molecular system, which differs from the equilibrium van der Waals interaction. Due to its finite lifetime, a mechanical impulse is imparted to the relative motion in the system. We analyze the FRET impulse when the molecules are embedded in free space and find that its magnitude can be much greater than the single recoil photon momentum, becoming comparable to the thermal momentum (Maxwell-Boltzmann distribution) at room temperature. In addition, we propose that this FRET impulse can be exploited in the generation of acoustic waves inside a film containing layers of donor and acceptor molecules, when a picosecond laser pulse excites the donors. This acoustic transient is distinguishable from that produced by thermal stress due to laser absorption, and may therefore play a role in photoacoustic spectroscopy. The effect can be seen as exciting a vibrating system like a string or organ pipe with light; it may be used as an opto-mechanical transducer.
Home range size and resource use of breeding and non-breeding white storks along a land use gradient
(2018)
Biotelemetry is increasingly used to study animal movement at high spatial and temporal resolution and to guide conservation and resource management. Yet, limited sample sizes and variation in space and habitat use across regions and life stages may compromise the robustness of behavioral analyses and subsequent conservation plans. Here, we assessed variation in (i) home range sizes, (ii) home range selection, and (iii) fine-scale resource selection of white storks across breeding status and regions, and tested model transferability. Three study areas were chosen within the Central German breeding grounds, ranging from agricultural to fluvial and marshland. We monitored GPS-locations of 62 adult white storks equipped with solar-charged GPS/3D-acceleration (ACC) transmitters in 2013-2014. Home range sizes were estimated using minimum convex polygons. Generalized linear mixed models were used to assess home range selection and fine-scale resource selection by relating the home ranges and foraging sites to Corine habitat variables and the normalized difference vegetation index in a presence/pseudo-absence design. We found strong variation in home range sizes across breeding stages, with significantly larger home ranges in non-breeding compared to breeding white storks, but no variation between regions. Home range selection models had high explanatory power and predicted the overall density of Central German white stork breeding pairs well. They also showed good transferability across regions and breeding status, although variable importance varied considerably. Fine-scale resource selection models showed low explanatory power. Resource preferences differed both across breeding status and across regions, and model transferability was poor. Our results indicate that habitat selection of wild animals may vary considerably within and between populations, and is highly scale dependent.
Home-range-scale analyses thus show higher robustness, whereas fine-scale resource selection is not easily predictable and does not transfer across life stages and regions. Such variation may compromise management decisions when these are based on data of limited sample size or limited regional coverage. We thus recommend home-range-scale analyses and sampling designs that cover diverse regional landscapes and ensure robust estimates of habitat suitability to conserve wild animal populations.
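The minimum convex polygon (MCP) estimate mentioned above is the convex hull of an animal's GPS fixes. A minimal pure-Python sketch is given below; the study itself will have used dedicated movement-ecology software, so the function names and the 100%-MCP convention here are illustrative assumptions.

```python
# Illustrative minimum convex polygon (MCP) home range area from GPS
# fixes: convex hull (Andrew's monotone chain) plus shoelace area.

def _hull(points):
    """Convex hull of 2-D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Shoelace area of the convex hull of all fixes (100% MCP)."""
    h = _hull(points)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % len(h)][1]
                         - h[(i + 1) % len(h)][0] * h[i][1]
                         for i in range(len(h))))

fixes = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
print(mcp_area(fixes))  # fixes on a unit square -> area 1.0
```

In practice, percentage MCPs (e.g. 95%) that trim peripheral fixes are common, and geographic coordinates must first be projected to a metric system.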
Models are useful tools for understanding and predicting ecological patterns and processes. Under ongoing climate and biodiversity change, they can greatly facilitate decision-making in conservation and restoration and help design adequate management strategies for an uncertain future. Here, we review the use of spatially explicit models for decision support and identify key gaps in current modelling in conservation and restoration. Of 650 reviewed publications, 217 had a clear management application and were included in our quantitative analyses. Overall, modelling studies were biased towards static models (79%), towards the species and population level (80%) and towards conservation (rather than restoration) applications (71%). Correlative niche models were the most widely used model type. Dynamic models as well as the gene-to-individual level and the community-to-ecosystem level were underrepresented, and explicit cost optimisation approaches were only used in 10% of the studies. We present a new model typology for selecting models for animal conservation and restoration, characterising model types according to organisational levels, biological processes of interest and desired management applications. This typology will help to more closely link models to management goals. Additionally, future efforts need to overcome important challenges related to data integration, model integration and decision-making. We conclude with five key recommendations, suggesting that wider usage of spatially explicit models for decision support can be achieved by 1) developing a toolbox with multiple, easier-to-use methods, 2) improving calibration and validation of dynamic modelling approaches and 3) developing best-practice guidelines for applying these models. Further, more robust decision-making can be achieved by 4) combining multiple modelling approaches to assess uncertainty, and 5) placing models at the core of adaptive management.
These efforts must be accompanied by long-term funding for modelling and monitoring, and improved communication between research and practice to ensure optimal conservation and restoration outcomes.
SDM performance varied for different range dynamics. Prediction accuracies decreased when abrupt range shifts occurred as species were outpaced by the rate of climate change, and increased again when a new equilibrium situation was realised. When ranges contracted, prediction accuracies increased as the absences were predicted well. Far-dispersing species were faster in tracking climate change, and were predicted more accurately by SDMs than short-dispersing species. BRTs mostly outperformed GLMs. The presence of a predator, and the inclusion of its incidence as an environmental predictor, made BRTs and GLMs perform similarly. Results are discussed in light of other studies dealing with effects of ecological traits and processes on SDM performance. Perspectives are given on further advancements of SDMs and for possible interfaces with more mechanistic approaches in order to improve predictions under environmental change.
Empirical species distribution models (SDMs) often constitute the tool of choice for the assessment of rapid climate change effects on species vulnerability. Conclusions regarding extinction risks might be misleading, however, because SDMs do not explicitly incorporate dispersal or other demographic processes. Here, we supplement SDMs with a dynamic population model 1) to predict climate-induced range dynamics for black grouse in Switzerland, 2) to compare direct and indirect measures of extinction risks, and 3) to quantify uncertainty in predictions as well as the sources of that uncertainty. To this end, we linked models of habitat suitability to a spatially explicit, individual-based model. In an extensive sensitivity analysis, we quantified uncertainty in various model outputs introduced by different SDM algorithms, by different climate scenarios and by demographic model parameters. Potentially suitable habitats were predicted to shift uphill and eastwards. By the end of the 21st century, abrupt habitat losses were predicted in the western Prealps for some climate scenarios. In contrast, population size and occupied area were primarily controlled by currently negative population growth and gradually declined from the beginning of the century across all climate scenarios and SDM algorithms. However, predictions of population dynamic features were highly variable across simulations. Results indicate that inferring extinction probabilities simply from the quantity of suitable habitat may underestimate extinction risks because this may ignore important interactions between life history traits and available habitat. Also, in dynamic range predictions, uncertainty in SDM algorithms and climate scenarios can become secondary to uncertainty in dynamic model components. Our study emphasises the need for principled evaluation tools like sensitivity analysis in order to assess uncertainty and robustness in dynamic range predictions.
A more direct benefit of such robustness analysis is an improved mechanistic understanding of dynamic species responses to climate change.
Data limitations can lead to unrealistic fits of predictive species distribution models (SDMs) and spurious extrapolation to novel environments. Here, we want to draw attention to novel combinations of environmental predictors that are within the sampled range of individual predictors but are nevertheless outside the sample space. These tend to be overlooked when visualizing model behaviour. They may be a cause of differing model transferability and environmental change predictions between methods, a problem described in some studies but generally not well understood. We here use a simple simulated data example to illustrate the problem and provide new and complementary visualization techniques to explore model behaviour and predictions to novel environments. We then apply these in a more complex real-world example. Our results underscore the necessity of scrutinizing model fits, ecological theory and environmental novelty.
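The core point above is that a query environment can lie inside the sampled range of each predictor individually yet outside the sampled joint space. The sketch below illustrates one simple way to flag such combinations, using nearest-neighbour distance as a proxy for joint-space novelty; this is an illustration of the concept, not the paper's visualisation method, and all names and thresholds are assumptions.

```python
# Flagging novel predictor *combinations*: inside every univariate
# range, but far from any sampled point in the joint predictor space.
import math

def in_univariate_ranges(train, q):
    """True if q is within the sampled min-max range of each predictor."""
    return all(min(p[d] for p in train) <= q[d] <= max(p[d] for p in train)
               for d in range(len(q)))

def novel_combination(train, q, radius=0.5):
    """Inside each single-predictor range, yet no sampled point nearby."""
    nn = min(math.dist(p, q) for p in train)  # nearest-neighbour distance
    return in_univariate_ranges(train, q) and nn > radius

# Training data along the diagonal of predictor space: x1 tracks x2
train = [(0.0, 0.0), (0.5, 0.6), (1.0, 1.0), (1.5, 1.4), (2.0, 2.0)]
print(novel_combination(train, (0.1, 1.9)))   # off-diagonal: novel combination
print(novel_combination(train, (1.0, 1.05)))  # near sampled data: not novel
```

A model extrapolating to the off-diagonal point would go unnoticed by per-predictor range checks alone, which is exactly the blind spot the abstract describes.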
Density regulation influences population dynamics through its effects on demographic rates and consequently constitutes a key mechanism explaining the response of organisms to environmental changes. Yet, it is difficult to establish the exact form of density dependence from empirical data. Here, we developed an individual-based model to explore how resource limitation and behavioural processes determine the spatial structure of white stork Ciconia ciconia populations and regulate reproductive rates. We found that the form of density dependence differed considerably between landscapes with the same overall resource availability and between home range selection strategies, highlighting the importance of fine-scale resource distribution in interaction with behaviour. In accordance with theories of density dependence, breeding output generally decreased with density but this effect was highly variable and strongly affected by optimal foraging strategy, resource detection probability and colonial behaviour. Moreover, our results uncovered an overlooked consequence of density dependence by showing that high early nestling mortality in storks, assumed to be the outcome of harsh weather, may actually result from density dependent effects on food provision. Our findings emphasize that accounting for interactive effects of individual behaviour and local environmental factors is crucial for understanding density-dependent processes within spatially structured populations. Enhanced understanding of the ways animal populations are regulated in general, and how habitat conditions and behaviour may dictate spatial population structure and demographic rates is critically needed for predicting the dynamics of populations, communities and ecosystems under changing environmental conditions.
Ecologists carry a well-stocked toolbox with a great variety of sampling methods, statistical analyses and modelling tools, and new methods are constantly appearing. Evaluation and optimisation of these methods is crucial to guide methodological choices. Simulating error-free data or taking high-quality data to qualify methods is common practice. Here, we emphasise the methodology of the 'virtual ecologist' (VE) approach where simulated data and observer models are used to mimic real species and how they are 'virtually' observed. This virtual data is then subjected to statistical analyses and modelling, and the results are evaluated against the 'true' simulated data. The VE approach is an intuitive and powerful evaluation framework that allows a quality assessment of sampling protocols, analyses and modelling tools. It works under controlled conditions as well as under consideration of confounding factors such as animal movement and biased observer behaviour. In this review, we promote the approach as a rigorous research tool, and demonstrate its capabilities and practical relevance. We explore past uses of VE in different ecological research fields, where it mainly has been used to test and improve sampling regimes as well as for testing and comparing models, for example species distribution models. We discuss its benefits as well as potential limitations, and provide some practical considerations for designing VE studies. Finally, research fields are identified for which the approach could be useful in the future. We conclude that VE could foster the integration of theoretical and empirical work and stimulate work that goes far beyond sampling methods, leading to new questions, theories, and better mechanistic understanding of ecological systems.
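The virtual ecologist workflow described above (simulate truth, observe it imperfectly, analyse the virtual data, score against the known truth) can be condensed into a toy loop. Everything below is illustrative: the "ecosystem", observer model and analysis are deliberately minimal stand-ins.

```python
# Toy "virtual ecologist" loop: simulate a true occurrence pattern,
# observe it with imperfect detection, fit a naive model to the virtual
# data, and evaluate predictions against the known simulated truth.
import random

def run_virtual_ecologist(n_cells=500, detection_p=0.7, seed=42):
    rng = random.Random(seed)
    env = [rng.random() for _ in range(n_cells)]          # environmental gradient
    truth = [e > 0.5 for e in env]                        # "true" occupancy
    # observer model: occupied cells are detected with probability detection_p
    observed = [t and rng.random() < detection_p for t in truth]
    # naive "model": occupancy threshold estimated from the virtual data
    threshold = min(e for e, o in zip(env, observed) if o)
    predicted = [e > threshold for e in env]
    # evaluation against the simulated truth, the key step of the approach
    return sum(p == t for p, t in zip(predicted, truth)) / n_cells

print(run_virtual_ecologist())
```

Because the truth is known by construction, the accuracy returned here measures the analysis pipeline itself; repeating the loop while varying `detection_p` or the sampling design is how such studies compare protocols.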
Species respond to environmental change by dynamically adjusting their geographical ranges. Robust predictions of these changes are prerequisites to inform dynamic and sustainable conservation strategies. Correlative species distribution models (SDMs) relate species’ occurrence records to prevailing environmental factors to describe the environmental niche. They have been widely applied in global change context as they have comparably low data requirements and allow for rapid assessments of potential future species’ distributions. However, due to their static nature, transient responses to environmental change are essentially ignored in SDMs. Furthermore, neither dispersal nor demographic processes and biotic interactions are explicitly incorporated. Therefore, it has often been suggested to link statistical and mechanistic modelling approaches in order to make more realistic predictions of species’ distributions for scenarios of environmental change. In this thesis, I present two different ways of such linkage. (i) Mechanistic modelling can act as virtual playground for testing statistical models and allows extensive exploration of specific questions. I promote this ‘virtual ecologist’ approach as a powerful evaluation framework for testing sampling protocols, analyses and modelling tools. Also, I employ such an approach to systematically assess the effects of transient dynamics and ecological properties and processes on the prediction accuracy of SDMs for climate change projections. That way, relevant mechanisms are identified that shape the species’ response to altered environmental conditions and which should hence be considered when trying to project species’ distribution through time. (ii) I supplement SDM projections of potential future habitat for black grouse in Switzerland with an individual-based population model. 
By explicitly considering complex interactions between habitat availability and demographic processes, this allows for a more direct assessment of the expected population response to environmental change and associated extinction risks. However, predictions were highly variable across simulations, emphasising the need for principled evaluation tools like sensitivity analysis to assess uncertainty and robustness in dynamic range predictions. Furthermore, I identify data coverage of the environmental niche as a likely cause of contrasting range predictions between SDM algorithms. SDMs may fail to make reliable predictions for truncated and edge niches, meaning that portions of the niche are not represented in the data or niche edges coincide with data limits. Overall, my thesis contributes to an improved understanding of uncertainty factors in predictions of range dynamics and presents ways to deal with these. Finally, I provide preliminary guidelines for predictive modelling of dynamic species’ responses to environmental change, identify key challenges for future research and discuss emerging developments.
The ability of some plant species to dominate communities in new biogeographical ranges has been attributed to an innate higher competitive ability and release from co-evolved specialist enemies. Specifically, invasive success in the new range might be explained by release from biotic negative soil-feedbacks, which control potentially dominant species in their native range. To test this hypothesis, we grew individuals from sixteen phylogenetically paired European grassland species that became either invasive or naturalized in new ranges, in either sterilized soil or in sterilized soil with unsterilized soil inoculum from their native home range. We found that although the native members of invasive species generally performed better than those of naturalized species, these native members of invasive species also responded more negatively to native soil inoculum than did the native members of naturalized species. This supports our hypothesis that potentially invasive species in their native range are held in check by negative soil-feedbacks. However, contrary to expectation, negative soil-feedbacks in potentially invasive species were not much increased by interspecific competition. There was no significant variation among families between invasive and naturalized species regarding their feedback response (negative vs. neutral). Therefore, we conclude that the observed negative soil feedbacks in potentially invasive species may be quite widespread in European families of typical grassland species.
Bacterial molybdoenzymes are key enzymes involved in the global sulphur, nitrogen and carbon cycles. These enzymes require the insertion of the molybdenum cofactor (Moco) into their active sites and are able to catalyse a large range of redox reactions. Escherichia coli harbours nineteen different molybdoenzymes that require a tight regulation of their synthesis according to substrate availability, oxygen availability and the cellular concentration of molybdenum and iron. The synthesis and assembly of active molybdoenzymes are regulated at the level of transcription of the structural genes and of translation, in addition to the genes involved in Moco biosynthesis. In this review, we focus on what is known about the molybdenum- and iron-dependent regulation of molybdoenzyme and Moco biosynthesis genes in the model organism E. coli. The actions of global transcriptional regulators such as FNR, NarXL/QP, Fur and ArcA, and their roles in the expression of these genes, are described in detail. The gene regulation in E. coli is compared to that in two other well-studied model organisms, Rhodobacter capsulatus and Shewanella oneidensis.
Molybdenum cofactor (Moco) biosynthesis is a complex process that involves the coordinated function of several proteins. In recent years it has become obvious that the availability of iron plays an important role in the biosynthesis of Moco. First, the MoaA protein binds two [4Fe-4S] clusters per monomer. Second, the expression of the moaABCDE and moeAB operons is regulated by FNR, which senses the availability of oxygen via a functional [4Fe-4S] cluster. Finally, the conversion of cyclic pyranopterin monophosphate to molybdopterin requires the availability of the L-cysteine desulfurase IscS, which is a shared protein with a main role in the assembly of Fe-S clusters. In this report, we investigated the transcriptional regulation of the moaABCDE operon by focusing on its dependence on cellular iron availability. While the abundance of selected molybdoenzymes is largely decreased under iron-limiting conditions, our data show that the regulation of the moaABCDE operon at the level of transcription is only marginally influenced by the availability of iron. Nevertheless, intracellular levels of Moco were decreased under iron-limiting conditions, likely based on an inactive MoaA protein in addition to lower levels of the L-cysteine desulfurase IscS, which simultaneously reduces the sulfur availability for Moco production. IMPORTANCE FNR is a very important transcriptional factor that represents the master switch for the expression of target genes in response to anaerobiosis. Among the FNR-regulated operons in Escherichia coli is the moaABCDE operon, involved in Moco biosynthesis. Molybdoenzymes have essential roles in eukaryotic and prokaryotic organisms. In bacteria, molybdoenzymes are crucial for anaerobic respiration using alternative electron acceptors. This work investigates the connection of iron availability to the biosynthesis of Moco and the production of active molybdoenzymes.
The c-Fos/c-Jun complex forms the activator protein 1 transcription factor, a therapeutic target in the treatment of cancer. Various synthetic peptides have been designed to try to selectively disrupt the interaction between c-Fos and c-Jun at its leucine zipper domain. To evaluate the binding affinity between these synthetic peptides and c-Fos, polarizable and nonpolarizable molecular dynamics (MD) simulations were conducted, and the resulting conformations were analyzed using the molecular mechanics generalized Born surface area (MM/GBSA) method to compute free energies of binding. In contrast to empirical and semiempirical approaches, the estimation of free energies of binding using a combination of MD simulations and the MM/GBSA approach takes into account dynamical properties such as conformational changes, as well as solvation effects and hydrophobic and hydrophilic interactions. The predicted binding affinities of the series of c-Jun-based peptides targeting the c-Fos peptide show good correlation with experimental melting temperatures. This provides the basis for the rational design of peptides based on internal, van der Waals, and electrostatic interactions.
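For context, MM/GBSA binding free energies of the kind referred to above follow the standard decomposition (quoted here in its textbook form, not reproduced from the paper), averaged over MD snapshots:

```latex
\Delta G_{\mathrm{bind}}
= \langle G_{\mathrm{complex}} \rangle
- \langle G_{\mathrm{receptor}} \rangle
- \langle G_{\mathrm{ligand}} \rangle,
\qquad
G = E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} - T S,
```

where $E_{\mathrm{MM}}$ collects the bonded, van der Waals and electrostatic molecular-mechanics terms, $G_{\mathrm{GB}}$ is the polar generalized-Born solvation term, $G_{\mathrm{SA}}$ the nonpolar surface-area term, and $TS$ the (often omitted) configurational entropy contribution.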
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data becomes an important task, since it is infeasible for humans to capture all of this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, studies on named entity recognition have a long history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and identify their types. Named entity linking recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task, which is to extract semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, evaluation methodologies, and so on. Furthermore, we present two of our approaches that focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
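The ensemble idea behind BEL, each simple classifier operating on a random subset of the context terms and their decisions being aggregated, can be sketched as follows. The voter scoring, data structures and names here are illustrative simplifications and assumptions, not the published BEL algorithm.

```python
# Toy sketch of ensemble-based entity linking: each weak "classifier"
# sees only a random subset of context terms and votes for the
# candidate entity with the largest term overlap; votes are aggregated.
import random
from collections import Counter

def vote(context_terms, candidates, n_voters=25, subset_size=3, seed=0):
    """candidates: dict mapping entity name -> set of associated terms."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_voters):
        subset = set(rng.sample(context_terms,
                                min(subset_size, len(context_terms))))
        # each voter picks the candidate with most overlap on its subset
        best = max(candidates, key=lambda e: len(candidates[e] & subset))
        votes[best] += 1
    return votes.most_common(1)[0][0]

context = ["river", "bank", "water", "flood"]
candidates = {
    "Bank_(geography)": {"river", "water", "flood", "shore"},
    "Bank_(finance)": {"money", "loan", "account"},
}
print(vote(context, candidates))  # -> Bank_(geography)
```

Aggregating many cheap decisions over random subsets is what makes this style of ensemble both robust to noisy context terms and fast, the two properties the abstract reports.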
For the task of relation extraction, we focus on extracting a specific group of difficult relation types, business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. By doing so, we also provide a solution for the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
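The general seed-based bootstrapping strategy underlying this approach can be sketched in a few lines: contexts between known seed pairs become extraction patterns, which are then matched against the corpus to find new pairs. This bare illustration omits the thesis's holistic pattern classification; the regexes, corpus and helper names are assumptions.

```python
# Minimal seed-based bootstrapping for relation extraction. Because
# patterns are learned with the seed pair in a fixed order, matching
# them preserves the direction of asymmetric relations.
import re

def learn_patterns(sentences, seeds):
    """Collect short contexts that appear between known seed pairs."""
    patterns = set()
    for s in sentences:
        for a, b in seeds:
            m = re.search(re.escape(a) + r"\s+(\w+(?:\s+\w+)?)\s+"
                          + re.escape(b), s)
            if m:
                patterns.add(m.group(1))
    return patterns

def extract_pairs(sentences, patterns):
    """Find new (left, right) pairs connected by a learned pattern."""
    pairs = set()
    for s in sentences:
        for p in patterns:
            for m in re.finditer(r"(\w+)\s+" + re.escape(p) + r"\s+(\w+)", s):
                pairs.add((m.group(1), m.group(2)))
    return pairs

corpus = [
    "Alphabet acquired DeepMind in 2014.",
    "Amazon acquired Zappos for a reported sum.",
]
pats = learn_patterns(corpus, seeds={("Alphabet", "DeepMind")})
print(pats)                        # learned connecting context(s)
print(extract_pairs(corpus, pats)) # new pair found from one seed
```

Real systems iterate this loop (new pairs yield new patterns) and must score patterns for reliability, which is where the holistic classification step comes in.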
State-of-the-art organic solar cells exhibit power conversion efficiencies of 18% and above. These devices benefit from the suppression of free charge recombination with regard to the Langevin limit of charge encounter in a homogeneous medium. It is recognized that the main cause of suppressed free charge recombination is the reformation and resplitting of charge-transfer (CT) states at the interface between donor and acceptor domains. Here, we use kinetic Monte Carlo simulations to understand the interplay between free charge motion and recombination in an energetically disordered phase-separated donor-acceptor blend. We identify conditions for encounter-dominated and resplitting-dominated recombination. In the former regime, recombination is proportional to mobility for all parameters tested and only slightly reduced with respect to the Langevin limit. In contrast, mobility is not the decisive parameter that determines the nongeminate recombination coefficient, k2, in the latter case, where k2 is a function solely of the morphology, CT and charge-separated (CS) energetics, and CT-state decay properties. Our simulations also show that free charge encounter in the phase-separated disordered blend is determined by the average mobility of all carriers, while CT reformation and resplitting involves mostly states near the transport energy. Therefore, charge encounter is more affected by increased disorder than the resplitting of the CT state. As a consequence, for a given mobility, larger energetic disorder, in combination with a higher hopping rate, is preferred. These findings have implications for the understanding of suppressed recombination in solar cells with nonfullerene acceptors, which are known to exhibit lower energetic disorder than that of fullerenes.
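The basic machinery of such a kinetic Monte Carlo simulation, Gaussian site-energy disorder, Miller-Abrahams hopping rates, and exponentially distributed waiting times, can be sketched for a single carrier on a 1-D lattice. All parameters below are illustrative; the study's simulations are 3-D, involve two carrier species, and include CT-state formation and decay.

```python
# Bare-bones kinetic Monte Carlo: one carrier hopping on a 1-D lattice
# with Gaussian energetic disorder and Miller-Abrahams rates.
import math
import random

def kmc_trajectory(n_sites=200, sigma=3.0, nu0=1.0, steps=2000, seed=1):
    """Returns (final_site, elapsed_time); energies in units of kT."""
    rng = random.Random(seed)
    E = [rng.gauss(0.0, sigma) for _ in range(n_sites)]  # site energies / kT
    site, t = n_sites // 2, 0.0
    for _ in range(steps):
        rates = []
        for nb in ((site - 1) % n_sites, (site + 1) % n_sites):  # periodic
            dE = E[nb] - E[site]
            # Miller-Abrahams: uphill hops are Boltzmann suppressed
            rates.append(nu0 * math.exp(-dE) if dE > 0 else nu0)
        total = rates[0] + rates[1]
        t += -math.log(1.0 - rng.random()) / total  # exponential waiting time
        if rng.random() < rates[0] / total:
            site = (site - 1) % n_sites
        else:
            site = (site + 1) % n_sites
    return site, t

site, t = kmc_trajectory()
print(site, t)
```

The deep tail of the Gaussian density of states traps the carrier for long waiting times, which is how disorder slows average transport while hops near the transport energy remain fast, the asymmetry the abstract exploits.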
Explicit solution of the Lindblad equation for nearly isotropic boundary driven XY spin 1/2 chain
(2010)
An explicit solution for the two-point correlation function in the non-equilibrium steady state of a nearly isotropic, boundary-driven, open XY spin-1/2 chain in the Lindblad formulation is provided. A non-equilibrium quantum phase transition from exponentially decaying correlations to long-range order is discussed analytically. In the regime of long-range order, a new phenomenon of correlation resonances is reported, in which the correlation response of the system is unusually high for certain discrete values of an external bulk parameter, e.g. the magnetic field.
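For context, the Lindblad master equation underlying the boundary-driven formulation has the standard form (reproduced from general knowledge, not from this abstract):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k} \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\,\bigl\{ L_k^\dagger L_k , \rho \bigr\} \right)
```

Here $\rho$ is the density matrix, $H$ the chain Hamiltonian, and the jump operators $L_k$ describe the coupling to the environment; for a boundary-driven chain they act only on the edge spins.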
The Arabidopsis Kinome
(2014)
Background
Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a Hidden-Markov Model of eukaryotic kinases and computed a phylogeny of 942 Arabidopsis protein kinase domains and mapped their origin by gene duplication.
Results
The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases to families was possible which extended functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding for protein kinases.
Conclusions
The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
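The classification of duplicates into tandem and segmental copies mentioned in the Results can be illustrated with a minimal sketch. The gene identifiers, positions, and the max_gap threshold below are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: classify duplicated gene pairs as tandem
# (nearby on the same chromosome) or segmental/dispersed.
# Gene records are invented examples: (chromosome, index along chromosome).
genes = {
    "geneA1": ("chr1", 12),
    "geneA2": ("chr1", 14),
    "geneB1": ("chr4", 830),
}

dup_pairs = [("geneA1", "geneA2"), ("geneA1", "geneB1")]

def classify(pair, genes, max_gap=5):
    """Tandem if both genes lie on the same chromosome within max_gap
    intervening gene positions; otherwise segmental/dispersed."""
    (c1, i1), (c2, i2) = genes[pair[0]], genes[pair[1]]
    if c1 == c2 and abs(i1 - i2) <= max_gap:
        return "tandem"
    return "segmental/dispersed"

labels = {p: classify(p, genes) for p in dup_pairs}
```

Under this toy threshold, the pair on chr1 two positions apart is labeled tandem, while the cross-chromosome pair is labeled segmental/dispersed; a real analysis would additionally use synteny information to separate segmental from transposed duplicates, as the study describes.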
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparatively stable and stands out from this long-term cyclicality. However, since the industrial revolution, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is affected by processes independent of the prevailing climatic conditions. In particular, for sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in the radiocarbon content of a box-core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty of proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design, which combines a structure-from-motion photogrammetry approach with two-dimensional isotopic data, is applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals an intermittent character of snowfall and a layer-wise snow deposition with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content in proxy data which is needed to assess the natural climate variability during the Holocene.
Decoupling the optical properties appears challenging but is vital for gaining better insight into the relationship between light and fruit attributes. In this study, nine solid phantoms capturing the ranges of the absorption (μa) and reduced scattering (μs') coefficients found in fruit were analysed non-destructively using laser light backscattering imaging (LLBI) at 1060 nm. The LLBI data analysis was carried out on the diffuse reflectance attenuation profile by means of Farrell's diffusion theory, either calculating μa [cm−1] and μs' [cm−1] in one fitting step or fitting only one optical variable while providing the other from a destructive analysis. The non-destructive approach proved feasible when calculating one unknown coefficient, whereas the method was unable to determine both μa and μs' non-destructively. Setting μs' according to destructive photon density wave (PDW) spectroscopy and fitting μa resulted in a root mean square error (rmse) of 18.7%, compared with an rmse of 2.6% when fitting μs', pointing to a decreased measuring uncertainty when the highly variable μa was known.
The approach was tested on European pear, utilizing destructive PDW spectroscopy for setting one variable, while LLBI was applied to calculate the remaining coefficient. Results indicated that the optical properties of pear obtained from PDW spectroscopy as well as from LLBI changed concurrently, corresponding mainly to the water content. A destructive batch-wise analysis of μs' combined with an online analysis of μa may be considered in future developments for improved fruit sorting, when considering fruit with a high variability of μs'.
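The one-coefficient fitting strategy can be sketched as follows, assuming the standard Farrell diffusion-theory expression for spatially resolved diffuse reflectance from a semi-infinite turbid medium. The coefficient values and the simple grid-search fit are illustrative, not the study's actual implementation.

```python
import numpy as np

def farrell_reflectance(rho, mua, mus_p, A=1.0):
    """Diffuse reflectance R(rho) of a semi-infinite turbid medium
    (Farrell et al. diffusion approximation); units in cm and cm^-1."""
    mut = mua + mus_p                 # total interaction coefficient
    z0 = 1.0 / mut                    # depth of equivalent isotropic source
    D = 1.0 / (3.0 * mut)             # diffusion constant
    mueff = np.sqrt(3.0 * mua * mut)  # effective attenuation coefficient
    zb = 2.0 * A * D                  # extrapolated boundary distance
    r1 = np.sqrt(z0**2 + rho**2)
    r2 = np.sqrt((z0 + 2.0*zb)**2 + rho**2)
    return (1.0 / (4.0 * np.pi)) * (
        z0 * (mueff + 1.0/r1) * np.exp(-mueff*r1) / r1**2
        + (z0 + 2.0*zb) * (mueff + 1.0/r2) * np.exp(-mueff*r2) / r2**2)

# Synthetic "measured" attenuation profile with known coefficients
rho = np.linspace(0.2, 1.5, 50)                  # radial distance [cm]
R_meas = farrell_reflectance(rho, mua=0.5, mus_p=10.0)

# Fix mus' (e.g. from destructive PDW spectroscopy) and retrieve only
# mua by a least-squares grid search on the log-profile.
mus_fixed = 10.0
grid = np.linspace(0.05, 2.0, 400)
errs = [np.sum((np.log(farrell_reflectance(rho, m, mus_fixed))
                - np.log(R_meas))**2) for m in grid]
mua_fit = float(grid[int(np.argmin(errs))])
```

Because one coefficient is fixed from an independent measurement, the remaining fit is a well-conditioned one-parameter problem; this mirrors the abstract's finding that fitting one coefficient works while retrieving both simultaneously does not.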
In high-value sweet cherry (Prunus avium), the red coloration, determined by the anthocyanin content, is correlated with the fruit ripeness stage and market value. Non-destructive spectroscopy has been introduced in practice and may be utilized as a tool to assess the fruit pigments in supply chain processes. From the fruit spectrum in the visible (Vis) wavelength range, the pigment contents are analyzed separately at their specific absorbance wavelengths.
A drawback of the method is the need for re-calibration due to varying optical properties of the fruit tissue. In order to correct for the scattering differences, the spectral intensity in the visible range is most often normalized by the intensity at wavelengths in the near-infrared (NIR) range, or pre-processing methods are applied in multivariate calibrations.
In the present study, the influence of the fruit scattering properties on the Vis/NIR fruit spectrum was corrected using the effective pathlength in the fruit tissue obtained from time-resolved readings of the distribution of time-of-flight (DTOF). Pigment analysis was carried out according to the Lambert-Beer law, considering the fruit spectral intensities, the effective pathlength, and the refractive index. Results were compared to commonly applied linear color analysis and multivariate partial least squares (PLS) regression. The approaches were validated on fruits at different ripeness stages, providing variation in the scattering coefficient and refractive index exceeding the calibration sample set.
In the validation, combining the Vis/NIR spectral intensities with the apparent scattering information from the DTOF readings showed a marked bias reduction as well as enhanced coefficients of determination, compared with the non-destructive analysis of Vis/NIR spectra alone by means of PLS or the Lambert-Beer law. Corrections for the refractive index did not yield improved results.
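The pathlength correction described above follows directly from the Lambert-Beer law. The sketch below uses invented example values (time of flight, refractive index, absorbance, absorptivity) purely to show the arithmetic.

```python
# Hedged sketch: correct a Vis absorbance reading with the effective
# photon pathlength from DTOF, then apply the Lambert-Beer law,
# A = eps * c * L_eff. All numbers are hypothetical examples.
c_light = 2.998e10            # speed of light in vacuum [cm/s]
n_tissue = 1.36               # assumed refractive index of fruit tissue
t_mean = 0.5e-9               # mean photon time of flight from DTOF [s]

# Effective pathlength: mean time of flight times the speed of light
# in the tissue (group velocity c_light / n_tissue).
L_eff = c_light / n_tissue * t_mean    # [cm]

A = 0.8                       # measured absorbance at the pigment band
eps = 30000.0                 # hypothetical molar absorptivity [L/(mol*cm)]

# Pigment concentration from the pathlength-corrected absorbance
conc = A / (eps * L_eff)      # [mol/L]
```

Because scattering lengthens the photon paths, L_eff (here roughly 11 cm for a 0.5 ns mean flight time) far exceeds the geometric fruit dimension; using it instead of a fixed nominal pathlength is what removes the scattering-induced bias from the concentration estimate.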