Background: Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time, Arabidopsis is a model plant with high frequencies of gene duplication. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a hidden Markov model of eukaryotic kinases, computed a phylogeny of 942 Arabidopsis protein kinase domains, and mapped their origins by gene duplication.
Results: The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, yet uncharacterized kinases could be assigned to families, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions: The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Explicit solution of the Lindblad equation for nearly isotropic boundary driven XY spin 1/2 chain
(2010)
Explicit solution for the two-point correlation function in a non-equilibrium steady state of a nearly isotropic boundary driven open XY spin 1/2 chain in the Lindblad formulation is provided. A non-equilibrium quantum phase transition from exponentially decaying correlations to long range order is discussed analytically. In the regime of long range order a new phenomenon of correlation resonances is reported, where the correlation response of the system is unusually high for certain discrete values of the external bulk parameter, e.g. the magnetic field.
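For context, the Lindblad formulation referred to above describes the open-chain dynamics by the standard master equation; this generic form is stated here only as background, with H the chain Hamiltonian and the L_k jump operators coupling the boundary spins to the baths:

```latex
\frac{\mathrm{d}\rho}{\mathrm{d}t}
  = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k} \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\ \rho \} \right)
```

The non-equilibrium steady state discussed in the abstract is the fixed point of this evolution.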
State-of-the-art organic solar cells exhibit power conversion efficiencies of 18% and above. These devices benefit from the suppression of free charge recombination relative to the Langevin limit of charge encounter in a homogeneous medium. It is recognized that the main cause of suppressed free charge recombination is the reformation and resplitting of charge-transfer (CT) states at the interface between donor and acceptor domains. Here, we use kinetic Monte Carlo simulations to understand the interplay between free charge motion and recombination in an energetically disordered phase-separated donor-acceptor blend. We identify conditions for encounter-dominated and resplitting-dominated recombination. In the former regime, recombination is proportional to mobility for all parameters tested and only slightly reduced with respect to the Langevin limit. In contrast, mobility is not the decisive parameter that determines the nongeminate recombination coefficient, k2, in the latter case, where k2 is a function solely of the morphology, the CT and charge-separated (CS) energetics, and the CT-state decay properties. Our simulations also show that free charge encounter in the phase-separated disordered blend is determined by the average mobility of all carriers, while CT reformation and resplitting involve mostly states near the transport energy. Therefore, charge encounter is more affected by increased disorder than the resplitting of the CT state. As a consequence, for a given mobility, larger energetic disorder, in combination with a higher hopping rate, is preferred. These findings have implications for the understanding of suppressed recombination in solar cells with nonfullerene acceptors, which are known to exhibit lower energetic disorder than fullerenes.
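The Langevin limit mentioned above has the closed form k_L = q(mu_e + mu_h)/(eps_0 * eps_r), and suppressed recombination is commonly quantified by the reduction factor gamma = k2/k_L. A minimal sketch of this bookkeeping; the mobility, permittivity and k2 values below are illustrative assumptions, not parameters from the study:

```python
# Langevin recombination coefficient k_L = q (mu_e + mu_h) / (eps0 * eps_r)
# and the reduction factor gamma = k2 / k_L for a given nongeminate
# coefficient k2. All numerical values are illustrative assumptions.

Q = 1.602e-19      # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def langevin_coefficient(mu_e, mu_h, eps_r):
    """Langevin encounter rate coefficient in m^3/s (mobilities in m^2/Vs)."""
    return Q * (mu_e + mu_h) / (EPS0 * eps_r)

def reduction_factor(k2, mu_e, mu_h, eps_r):
    """gamma < 1 indicates recombination suppressed below the Langevin limit."""
    return k2 / langevin_coefficient(mu_e, mu_h, eps_r)

# Hypothetical organic-blend values: mu ~ 1e-8 m^2/Vs, eps_r ~ 3.5
kL = langevin_coefficient(1e-8, 1e-8, 3.5)
gamma = reduction_factor(1e-17, 1e-8, 1e-8, 3.5)
```

For the hedged values chosen here, gamma comes out well below one, i.e. recombination suppressed relative to the Langevin limit.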
The c-Fos/c-Jun complex forms the activator protein 1 transcription factor, a therapeutic target in the treatment of cancer. Various synthetic peptides have been designed to try to selectively disrupt the interaction between c-Fos and c-Jun at its leucine zipper domain. To evaluate the binding affinity between these synthetic peptides and c-Fos, polarizable and nonpolarizable molecular dynamics (MD) simulations were conducted, and the resulting conformations were analyzed using the molecular mechanics generalized Born surface area (MM/GBSA) method to compute free energies of binding. In contrast to empirical and semiempirical approaches, the estimation of free energies of binding using a combination of MD simulations and the MM/GBSA approach takes into account dynamical properties such as conformational changes, as well as solvation effects and hydrophobic and hydrophilic interactions. The predicted binding affinities of the series of c-Jun-based peptides targeting the c-Fos peptide show good correlation with experimental melting temperatures. This provides the basis for the rational design of peptides based on internal, van der Waals, and electrostatic interactions.
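In the MM/GBSA scheme referenced above, the binding free energy is estimated as dG_bind = &lt;G_complex&gt; - &lt;G_receptor&gt; - &lt;G_ligand&gt;, with each snapshot free energy composed of the molecular-mechanics energy plus generalized Born and surface-area solvation terms (the entropy term is often omitted). A minimal sketch of this averaging; the per-snapshot energies are hypothetical numbers for illustration only:

```python
# MM/GBSA-style estimate: dG_bind = <G_complex> - <G_receptor> - <G_ligand>,
# where each MD snapshot contributes G = E_MM + G_GB + G_SA (kcal/mol).
# The snapshot energies below are hypothetical illustration values.

def mean(xs):
    return sum(xs) / len(xs)

def ensemble_g(snapshots):
    """Average free energy over (E_MM, G_GB, G_SA) tuples from a trajectory."""
    return mean([e_mm + g_gb + g_sa for (e_mm, g_gb, g_sa) in snapshots])

def mmgbsa_binding(complex_snaps, receptor_snaps, ligand_snaps):
    return (ensemble_g(complex_snaps)
            - ensemble_g(receptor_snaps)
            - ensemble_g(ligand_snaps))

complex_snaps  = [(-1200.0, -350.0, 20.0), (-1195.0, -352.0, 21.0)]
receptor_snaps = [(-800.0, -250.0, 15.0), (-798.0, -251.0, 15.5)]
ligand_snaps   = [(-380.0, -90.0, 8.0), (-379.0, -91.0, 8.5)]

dG_bind = mmgbsa_binding(complex_snaps, receptor_snaps, ligand_snaps)
```

A negative dG_bind indicates favorable binding; ranking such estimates across peptide variants is what is correlated with melting temperatures in the study.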
Molybdenum cofactor (Moco) biosynthesis is a complex process that involves the coordinated function of several proteins. In recent years it has become obvious that the availability of iron plays an important role in the biosynthesis of Moco. First, the MoaA protein binds two [4Fe-4S] clusters per monomer. Second, the expression of the moaABCDE and moeAB operons is regulated by FNR, which senses the availability of oxygen via a functional [4Fe-4S] cluster. Finally, the conversion of cyclic pyranopterin monophosphate to molybdopterin requires the availability of the L-cysteine desulfurase IscS, which is a shared protein with a main role in the assembly of Fe-S clusters. In this report, we investigated the transcriptional regulation of the moaABCDE operon by focusing on its dependence on cellular iron availability. While the abundance of selected molybdoenzymes is largely decreased under iron-limiting conditions, our data show that the regulation of the moaABCDE operon at the level of transcription is only marginally influenced by the availability of iron. Nevertheless, intracellular levels of Moco were decreased under iron-limiting conditions, likely based on an inactive MoaA protein in addition to lower levels of the L-cysteine desulfurase IscS, which simultaneously reduces the sulfur availability for Moco production. IMPORTANCE: FNR is a very important transcription factor that represents the master switch for the expression of target genes in response to anaerobiosis. Among the FNR-regulated operons in Escherichia coli is the moaABCDE operon, involved in Moco biosynthesis. Molybdoenzymes have essential roles in eukaryotic and prokaryotic organisms. In bacteria, molybdoenzymes are crucial for anaerobic respiration using alternative electron acceptors. This work investigates the connection of iron availability to the biosynthesis of Moco and the production of active molybdoenzymes.
The ability of some plant species to dominate communities in new biogeographical ranges has been attributed to an innate higher competitive ability and release from co-evolved specialist enemies. Specifically, invasive success in the new range might be explained by release from negative biotic soil feedbacks, which control potentially dominant species in their native range. To test this hypothesis, we grew individuals from sixteen phylogenetically paired European grassland species that became either invasive or naturalized in new ranges, in either sterilized soil alone or sterilized soil with unsterilized soil inoculum from their native home range. We found that although the native members of invasive species generally performed better than those of naturalized species, these native members of invasive species also responded more negatively to native soil inoculum than did the native members of naturalized species. This supports our hypothesis that potentially invasive species in their native range are held in check by negative soil feedbacks. However, contrary to expectation, negative soil feedbacks in potentially invasive species were not much increased by interspecific competition. There was no significant variation among families between invasive and naturalized species regarding their feedback response (negative vs. neutral). Therefore, we conclude that the observed negative soil feedbacks in potentially invasive species may be quite widespread in European families of typical grassland species.
Ecologists carry a well-stocked toolbox with a great variety of sampling methods, statistical analyses and modelling tools, and new methods are constantly appearing. Evaluation and optimisation of these methods is crucial to guide methodological choices. Simulating error-free data or taking high-quality data to qualify methods is common practice. Here, we emphasise the methodology of the 'virtual ecologist' (VE) approach, where simulated data and observer models are used to mimic real species and how they are 'virtually' observed. These virtual data are then subjected to statistical analyses and modelling, and the results are evaluated against the 'true' simulated data. The VE approach is an intuitive and powerful evaluation framework that allows a quality assessment of sampling protocols, analyses and modelling tools. It works under controlled conditions as well as in the presence of confounding factors such as animal movement and biased observer behaviour. In this review, we promote the approach as a rigorous research tool, and demonstrate its capabilities and practical relevance. We explore past uses of VE in different ecological research fields, where it has mainly been used to test and improve sampling regimes and to test and compare models, for example species distribution models. We discuss its benefits as well as potential limitations, and provide some practical considerations for designing VE studies. Finally, research fields are identified for which the approach could be useful in the future. We conclude that VE could foster the integration of theoretical and empirical work and stimulate work that goes far beyond sampling methods, leading to new questions, theories, and better mechanistic understanding of ecological systems.
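The VE logic above can be illustrated with a toy experiment: simulate a known occupancy pattern, pass it through a virtual observer with imperfect detection, and score the naive estimate against the simulated truth. All parameter values here are assumptions chosen for illustration:

```python
# Toy 'virtual ecologist' experiment: known true occupancy, a virtual
# observer with detection probability p per visit, and an evaluation of
# the naive occupancy estimator against the known truth.
import random

random.seed(42)

N_SITES, TRUE_PSI, DETECT_P, N_VISITS = 1000, 0.6, 0.5, 3

# True state: each site occupied with probability TRUE_PSI
truth = [random.random() < TRUE_PSI for _ in range(N_SITES)]

def observe(occupied, p, visits):
    """A site is recorded as occupied if detected on at least one visit."""
    return occupied and any(random.random() < p for _ in range(visits))

observed = [observe(t, DETECT_P, N_VISITS) for t in truth]

true_occ = sum(truth) / N_SITES
naive_occ = sum(observed) / N_SITES  # biased low: misses undetected occupancy
```

Because occupied sites can go undetected on all visits, the naive estimate is biased low; the VE setup makes this bias directly measurable since the truth is known.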
Density regulation influences population dynamics through its effects on demographic rates and consequently constitutes a key mechanism explaining the response of organisms to environmental changes. Yet, it is difficult to establish the exact form of density dependence from empirical data. Here, we developed an individual-based model to explore how resource limitation and behavioural processes determine the spatial structure of white stork Ciconia ciconia populations and regulate reproductive rates. We found that the form of density dependence differed considerably between landscapes with the same overall resource availability and between home range selection strategies, highlighting the importance of fine-scale resource distribution in interaction with behaviour. In accordance with theories of density dependence, breeding output generally decreased with density, but this effect was highly variable and strongly affected by optimal foraging strategy, resource detection probability and colonial behaviour. Moreover, our results uncovered an overlooked consequence of density dependence by showing that high early nestling mortality in storks, assumed to be the outcome of harsh weather, may actually result from density-dependent effects on food provision. Our findings emphasize that accounting for interactive effects of individual behaviour and local environmental factors is crucial for understanding density-dependent processes within spatially structured populations. Enhanced understanding of the ways animal populations are regulated in general, and of how habitat conditions and behaviour may dictate spatial population structure and demographic rates, is critically needed for predicting the dynamics of populations, communities and ecosystems under changing environmental conditions.
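The decline of breeding output with density described above can be sketched with a simple saturating (Beverton-Holt-style) relationship; the functional form and parameter values are illustrative assumptions, not the model used in the study:

```python
# Illustrative density dependence of per-pair breeding output:
# b(N) = b_max / (1 + N / N_half), where resource competition halves
# the output at density N_half. Values are hypothetical assumptions.

def breeding_output(n_pairs, b_max=3.0, n_half=50.0):
    """Expected fledglings per pair at a given local pair density."""
    return b_max / (1.0 + n_pairs / n_half)

low = breeding_output(10)    # near b_max at low density
high = breeding_output(200)  # strongly reduced at high density
```

Even this simple form shows how apparent "environmental" mortality can be confounded with density effects: at high density, per-pair output drops sharply regardless of weather.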
Empirical species distribution models (SDMs) are often the tool of choice for assessing the effects of rapid climate change on species vulnerability. Conclusions regarding extinction risks might be misleading, however, because SDMs do not explicitly incorporate dispersal or other demographic processes. Here, we supplement SDMs with a dynamic population model 1) to predict climate-induced range dynamics for black grouse in Switzerland, 2) to compare direct and indirect measures of extinction risks, and 3) to quantify uncertainty in predictions as well as the sources of that uncertainty. To this end, we linked models of habitat suitability to a spatially explicit, individual-based model. In an extensive sensitivity analysis, we quantified uncertainty in various model outputs introduced by different SDM algorithms, by different climate scenarios and by demographic model parameters. Potentially suitable habitats were predicted to shift uphill and eastwards. By the end of the 21st century, abrupt habitat losses were predicted in the western Prealps for some climate scenarios. In contrast, population size and occupied area were primarily controlled by currently negative population growth and gradually declined from the beginning of the century across all climate scenarios and SDM algorithms. However, predictions of population dynamic features were highly variable across simulations. Results indicate that inferring extinction probabilities simply from the quantity of suitable habitat may underestimate extinction risks because this may ignore important interactions between life history traits and available habitat. Also, in dynamic range predictions uncertainty in SDM algorithms and climate scenarios can become secondary to uncertainty in dynamic model components. Our study emphasises the need for principled evaluation tools like sensitivity analysis in order to assess uncertainty and robustness in dynamic range predictions.
A more direct benefit of such robustness analysis is an improved mechanistic understanding of dynamic species responses to climate change.
SDM performance varied for different range dynamics. Prediction accuracies decreased when abrupt range shifts occurred as species were outpaced by the rate of climate change, and increased again when a new equilibrium situation was realised. When ranges contracted, prediction accuracies increased as the absences were predicted well. Far-dispersing species were faster in tracking climate change, and were predicted more accurately by SDMs than short-dispersing species. Boosted regression trees (BRTs) mostly outperformed generalised linear models (GLMs). The presence of a predator, and the inclusion of its incidence as an environmental predictor, made BRTs and GLMs perform similarly. Results are discussed in light of other studies dealing with effects of ecological traits and processes on SDM performance. Perspectives are given on further advancements of SDMs and on possible interfaces with more mechanistic approaches in order to improve predictions under environmental change.
Models are useful tools for understanding and predicting ecological patterns and processes. Under ongoing climate and biodiversity change, they can greatly facilitate decision-making in conservation and restoration and help design adequate management strategies for an uncertain future. Here, we review the use of spatially explicit models for decision support and identify key gaps in current modelling in conservation and restoration. Of 650 reviewed publications, 217 had a clear management application and were included in our quantitative analyses. Overall, modelling studies were biased towards static models (79%), towards the species and population level (80%) and towards conservation (rather than restoration) applications (71%). Correlative niche models were the most widely used model type. Dynamic models as well as the gene-to-individual level and the community-to-ecosystem level were underrepresented, and explicit cost optimisation approaches were only used in 10% of the studies. We present a new model typology for selecting models for animal conservation and restoration, characterising model types according to organisational levels, biological processes of interest and desired management applications. This typology will help to more closely link models to management goals. Additionally, future efforts need to overcome important challenges related to data integration, model integration and decision-making. We conclude with five key recommendations, suggesting that wider usage of spatially explicit models for decision support can be achieved by 1) developing a toolbox with multiple, easier-to-use methods, 2) improving calibration and validation of dynamic modelling approaches and 3) developing best-practice guidelines for applying these models. Further, more robust decision-making can be achieved by 4) combining multiple modelling approaches to assess uncertainty, and 5) placing models at the core of adaptive management.
These efforts must be accompanied by long-term funding for modelling and monitoring, and improved communication between research and practice to ensure optimal conservation and restoration outcomes.
Home range size and resource use of breeding and non-breeding white storks along a land use gradient
(2018)
Biotelemetry is increasingly used to study animal movement at high spatial and temporal resolution and to guide conservation and resource management. Yet, limited sample sizes and variation in space and habitat use across regions and life stages may compromise the robustness of behavioral analyses and subsequent conservation plans. Here, we assessed variation in (i) home range sizes, (ii) home range selection, and (iii) fine-scale resource selection of white storks across breeding status and regions, and tested model transferability. Three study areas were chosen within the Central German breeding grounds, ranging from agricultural to fluvial and marshland. We monitored GPS locations of 62 adult white storks equipped with solar-charged GPS/3D-acceleration (ACC) transmitters in 2013-2014. Home range sizes were estimated using minimum convex polygons. Generalized linear mixed models were used to assess home range selection and fine-scale resource selection by relating the home ranges and foraging sites to Corine habitat variables and the normalized difference vegetation index in a presence/pseudo-absence design. We found strong variation in home range sizes across breeding stages, with significantly larger home ranges in non-breeding compared to breeding white storks, but no variation between regions. Home range selection models had high explanatory power and predicted the overall density of Central German white stork breeding pairs well. They also showed good transferability across regions and breeding status, although variable importance varied considerably. Fine-scale resource selection models showed low explanatory power. Resource preferences differed both across breeding status and across regions, and model transferability was poor. Our results indicate that habitat selection of wild animals may vary considerably within and between populations, and is highly scale dependent.
Home-range-scale analyses thus show higher robustness, whereas fine-scale resource selection is not easily predictable and does not transfer across life stages and regions. Such variation may compromise management decisions when these are based on data of limited sample size or limited regional coverage. We therefore recommend home-range-scale analyses and sampling designs that cover diverse regional landscapes and ensure robust estimates of habitat suitability to conserve wild animal populations.
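Home range size via a minimum convex polygon, as in the analysis above, is simply the area of the convex hull of an animal's relocations. A self-contained sketch (Andrew's monotone chain hull plus the shoelace formula); the coordinates are hypothetical projected GPS fixes in metres:

```python
# Minimum convex polygon (MCP) home range: convex hull of relocations,
# then the polygon area. Pure-Python sketch; fixes are hypothetical.

def cross(o, a, b):
    """2D cross product of vectors OA and OB; >0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula for a simple polygon."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

fixes = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50), (20, 80)]
mcp_area = polygon_area(convex_hull(fixes))  # 10000.0 m^2 for this square
```

Interior fixes do not change the MCP, which is why MCP estimates are sensitive to single outlying relocations, one reason fine-scale methods behave differently.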
We present a momentum transfer mechanism mediated by electromagnetic fields that originates in a system of two nearby molecules: one excited (donor D*) and the other in the ground state (acceptor A). An intermolecular force related to fluorescence resonance energy transfer (Förster transfer, FRET) arises in the unstable D*A molecular system, which differs from the equilibrium van der Waals interaction. Because of the finite lifetime of this state, a mechanical impulse is imparted to the relative motion of the system. We analyze the FRET impulse when the molecules are embedded in free space and find that its magnitude can be much greater than the single photon recoil momentum, becoming comparable to the thermal momentum (Maxwell-Boltzmann distribution) at room temperature. In addition, we propose that this FRET impulse can be exploited in the generation of acoustic waves inside a film containing layers of donor and acceptor molecules, when a picosecond laser pulse excites the donors. This acoustic transient is distinguishable from that produced by thermal stress due to laser absorption, and may therefore play a role in photoacoustic spectroscopy. The effect can be seen as exciting a vibrating system such as a string or organ pipe with light; it may be used as an opto-mechanical transducer.
We present a theoretical framework for the analysis of the statistical properties of thermal fluctuations on a lossy transmission line. A quantization scheme for the electrical signals in the transmission line is formulated. We discuss two applications in detail. Noise spectra at finite temperature for voltage and current are shown to deviate significantly from the Johnson-Nyquist limit, and they depend on the position on the transmission line. We analyze the spontaneous emission, at low temperature, of a Rydberg atom and its resonant enhancement due to vacuum fluctuations in a capacitively coupled transmission line. The theory can also be applied to study the performance of microscale and nanoscale devices, including high-resolution sensors and quantum information processors.
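The Johnson-Nyquist limit from which the computed spectra deviate corresponds to the one-sided voltage noise density S_V = 4 k_B T R. A quick numerical check; the resistance, temperature and bandwidth values are illustrative assumptions:

```python
# Johnson-Nyquist thermal noise: rms voltage over a bandwidth B is
# V_rms = sqrt(4 * k_B * T * R * B). Values below are illustrative.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_nyquist_vrms(resistance_ohm, temperature_k, bandwidth_hz):
    return math.sqrt(4 * K_B * temperature_k * resistance_ohm * bandwidth_hz)

# A 50-ohm line at room temperature over a 1 MHz bandwidth:
v_rms = johnson_nyquist_vrms(50.0, 300.0, 1e6)  # roughly 0.9 microvolts
```

The lossy-line spectra in the paper deviate from this flat, position-independent baseline.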
Unlike for other retroviruses, only a few host cell factors that aid the replication of foamy viruses (FVs) via interaction with viral structural components are known. Using a yeast two-hybrid (Y2H) screen with prototype FV (PFV) Gag protein as bait, we identified human polo-like kinase 2 (hPLK2), a member of the cell cycle regulatory kinases, as a new interactor of PFV capsids. Further Y2H studies confirmed interaction of PFV Gag with several PLKs of both human and rat origin. A consensus Ser-Thr/Ser-Pro (S-T/S-P) motif in Gag, which is conserved among primate FVs and phosphorylated in PFV virions, was essential for recognition by PLKs. In the case of rat PLK2, functional kinase and polo-box domains were required for interaction with PFV Gag. Fluorescently tagged PFV Gag, through its chromatin tethering function, selectively relocalized ectopically expressed eGFP-tagged PLK proteins to mitotic chromosomes in a Gag STP motif-dependent manner, confirming a specific and dominant nature of the Gag-PLK interaction in mammalian cells. The functional relevance of the Gag-PLK interaction was examined in the context of replication-competent FVs and single-round PFV vectors. Although STP motif-mutated viruses displayed wild-type (wt) particle release, RNA packaging and intra-particle reverse transcription, their replication capacity was decreased 3-fold in single-cycle infections, and up to 20-fold in spreading infections over an extended time period. Strikingly similar defects were observed when cells infected with single-round wt Gag PFV vectors were treated with a pan-PLK inhibitor. Analysis of the entry kinetics of the mutant viruses indicated a post-fusion defect resulting in delayed and reduced integration, which was accompanied by an enhanced preference to integrate into heterochromatin. We conclude that interaction between PFV Gag and cellular PLK proteins is important for early replication steps of PFV within host cells.
During hopping, an early burst can be observed in the EMG from the soleus muscle starting about 45 ms after touch-down. It may be speculated that this early EMG burst is a stretch reflex response superimposed on activity from a supra-spinal origin. We hypothesised that if a stretch reflex indeed contributes to the early EMG burst, then advancing or delaying the touch-down without the subject's knowledge should similarly advance or delay the burst. This was indeed the case: when touch-down was advanced or delayed by shifting the height of a programmable platform up or down between two hops, this resulted in a corresponding shift of the early EMG burst. Our second hypothesis was that the motor cortex contributes to the first EMG burst during hopping. If so, inhibition of the motor cortex would reduce the magnitude of the burst. By applying a low-intensity magnetic stimulus it was possible to inhibit the motor cortex, and this resulted in a suppression of the early EMG burst. These results suggest that sensory feedback and descending drive from the motor cortex are integrated to drive the motor neuron pool during the early EMG burst in hopping. Thus, simple reflexes work in concert with higher order structures to produce this repetitive movement.
We recently demonstrated that the sympathetic nervous system can be voluntarily activated following a training program consisting of cold exposure, breathing exercises, and meditation. This resulted in profound attenuation of the systemic inflammatory response elicited by lipopolysaccharide (LPS) administration. Herein, we assessed whether this training program affects the plasma metabolome and if these changes are linked to the immunomodulatory effects observed. A total of 224 metabolites were identified in plasma obtained from 24 healthy male volunteers at six timepoints, of which 98 were significantly altered following LPS administration. Effects of the training program were most prominent shortly after initiation of the acquired breathing exercises but prior to LPS administration, and point towards increased activation of the Cori cycle. Elevated concentrations of lactate and pyruvate in trained individuals correlated with enhanced levels of anti-inflammatory interleukin (IL)-10. In vitro validation experiments revealed that co-incubation with lactate and pyruvate enhances IL-10 production and attenuates the release of pro-inflammatory IL-1 beta and IL-6 by LPS-stimulated leukocytes. Our results demonstrate that practicing the breathing exercises acquired during the training program results in increased activity of the Cori cycle. Furthermore, this work uncovers an important role of lactate and pyruvate in the anti-inflammatory phenotype observed in trained subjects.
Alternaria (A.) is a genus of widespread fungi capable of producing numerous, possibly health-endangering Alternaria toxins (ATs), which are usually not the focus of attention. The formation of ATs depends on the species and on complex interactions of various environmental factors and is not fully understood. In this study, the influence of temperature (7 °C, 25 °C), substrate (rice, wheat kernels) and incubation time (4, 7, and 14 days) on the production of thirteen ATs and three sulfoconjugated ATs by three different Alternaria isolates from the species groups A. tenuissima and A. infectoria was determined. High-performance liquid chromatography coupled with tandem mass spectrometry was used for quantification. Under nearly all conditions, tenuazonic acid was the most extensively produced toxin. At 25 °C and with increasing incubation time, all toxins were formed in high amounts by the two A. tenuissima strains on both substrates, with comparable mycotoxin profiles. However, for some of the toxins, stagnation or a decrease in production was observed from day 7 to 14. As opposed to the A. tenuissima strains, the A. infectoria strain only produced low amounts of ATs, but high concentrations of stemphyltoxin III. The results provide an essential insight into quantitative in vitro AT formation under different environmental conditions, potentially transferable to different field and storage conditions.
Necrotrophic as well as saprophytic small-spored Alternaria (A.) species are annually responsible for major losses of agricultural products, such as cereal crops, associated with the contamination of food and feedstuff with potentially health-endangering Alternaria toxins. Knowledge of the metabolic capabilities of different species-groups to form mycotoxins is of importance for a reliable risk assessment. 93 Alternaria strains belonging to the four species-groups A. tenuissima, A. arborescens, A. alternata, and A. infectoria were isolated from winter wheat kernels harvested from fields in Germany and Russia and incubated under equal conditions. Chemical analysis by means of an HPLC-MS/MS multi-Alternaria-toxin method showed that 95% of all strains were able to form at least one of the targeted 17 non-host-specific Alternaria toxins. Simultaneous production of up to 15 (modified) Alternaria toxins by members of the A. tenuissima, A. arborescens and A. alternata species-groups and of up to seven toxins by A. infectoria strains was demonstrated. Overall, tenuazonic acid was the most extensively formed mycotoxin, followed by alternariol and alternariol monomethyl ether, whereas altertoxin I was the most frequently detected toxin. Sulfoconjugated modifications of alternariol, alternariol monomethyl ether, altenuisol and altenuene were frequently determined. Unknown perylene quinone derivatives were additionally detected. Strains of the species-group A. infectoria could be segregated from strains of the other three species-groups due to significantly lower toxin levels and the specific production of infectopyrone. Apart from infectopyrone, alterperylenol was also frequently produced, by 95% of the A. infectoria strains. Neither the concentration nor the composition of the targeted Alternaria toxins allowed a differentiation between the species-groups A. alternata, A. tenuissima and A. arborescens.
Sub-seasonal thaw slump mass wasting is not consistently energy limited at the landscape scale
(2018)
Predicting future thaw slump activity requires a sound understanding of the atmospheric drivers and geomorphic controls on mass wasting across a range of timescales. On sub-seasonal timescales, sparse measurements indicate that mass wasting at active slumps is often limited by the energy available for melting ground ice, but other factors such as rainfall or the formation of an insulating veneer may also be relevant. To study the sub-seasonal drivers, we derive topographic changes from single-pass radar interferometric data acquired by the TanDEM-X satellites. The estimated elevation changes at 12 m resolution complement the commonly observed planimetric retreat rates by providing information on volume losses. Their high vertical precision (around 30 cm), frequent observations (11 days) and large coverage (5000 km²) allow us to track mass wasting as drivers such as the available energy change during the summer of 2015 in two study regions. We find that thaw slumps in the Tuktoyaktuk coastlands, Canada, are not energy limited in June, as they undergo limited mass wasting (height loss of around 0 cm day⁻¹) despite the ample available energy, suggesting the widespread presence of an early season insulating snow or debris veneer. Later in summer, height losses generally increase (to around 3 cm day⁻¹), but they do so in distinct ways. For many slumps, mass wasting tracks the available energy, a temporal pattern that is also observed at coastal yedoma cliffs on the Bykovsky Peninsula, Russia. However, the other two common temporal trajectories are asynchronous with the available energy, as they track strong precipitation events or show a sudden speed-up in late August, respectively. The observed temporal patterns are poorly related to slump characteristics like the headwall height. The contrasting temporal behaviour of nearby thaw slumps highlights the importance of complex local and temporally varying controls on mass wasting.
Hysteresis in the pinning-depinning transitions of spiral waves rotating around a hole in a circular-shaped two-dimensional excitable medium is studied both by use of the continuation software AUTO and by direct numerical integration of the reaction-diffusion equations for the FitzHugh-Nagumo model. In order to clarify the role of different factors in this phenomenon, a kinematical description is applied. It is found that the hysteresis phenomenon computed for the reaction-diffusion model can be reproduced qualitatively only when a nonlinear eikonal equation (i.e. velocity-curvature relationship) is assumed. However, to obtain quantitative agreement, the dispersion relation has to be taken into account.
The paper studies catalytic super-Brownian motion on the real line, where the branching rate is controlled by a catalyst. D. A. Dawson, K. Fleischmann and S. Roelly showed, for a broad class of catalysts, that, as for constant branching, the processes are absolutely continuous measures. This paper considers a class of catalysts, called moderate, which must satisfy a uniform boundedness condition and a condition controlling the degree of singularity---essentially that the mass of catalyst in small balls should (uniformly) be of order r^a, where a>0. The main result of this paper shows that for this class of catalysts there is a continuous density field for the process. Moreover the density is the unique solution (in law) of an appropriate SPDE.
The author considers the heat equation in dimension one with singular drift and inhomogeneous space-time white noise. In particular, the quadratic variation measure of the white noise is not required to be absolutely continuous w.r.t. the Lebesgue measure, neither in space nor in time. Under some assumptions the author gives statements on strong and weak existence as well as strong and weak uniqueness of continuous solutions.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data makes it possible to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment". The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
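The sampling bias described in this abstract can be made concrete with a small Monte Carlo sketch (all parameter values below are illustrative assumptions, not taken from the paper): magnitudes are drawn from a Gutenberg-Richter law truncated at a known M_max, and the maximum observed magnitude consistently falls short of the true M_max by a sizeable increment of the order discussed above.

```python
import math
import random

def sample_truncated_gr(n, b=1.0, m0=4.0, m_max=8.0, seed=1):
    """Draw n magnitudes from a Gutenberg-Richter law truncated at m_max,
    via inverse-CDF sampling of the truncated exponential in magnitude."""
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m0))
    return [m0 - math.log(1.0 - rng.random() * c) / beta for _ in range(n)]

# The largest observed magnitude systematically underestimates the true
# m_max; the average gap is what the "M_obs plus an increment" heuristic
# implicitly absorbs.
gaps = []
for seed in range(200):
    catalog = sample_truncated_gr(500, seed=seed)
    gaps.append(8.0 - max(catalog))
mean_gap = sum(gaps) / len(gaps)
```

With 500 events per synthetic catalog, the mean gap comes out near one magnitude unit, illustrating why equating M_max with the observed maximum biases the tail model.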
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock-change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be expressed in terms of accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
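The Brownian passage time distribution referred to here is the inverse Gaussian law, parameterized by the mean recurrence interval and the aperiodicity (coefficient of variation). A minimal numerical sketch, with an illustrative mean recurrence of 150 years and aperiodicity 0.5 (values not taken from the study):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean
    recurrence interval mu and aperiodicity alpha."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2.0 * alpha**2 * mu * t))

# Crude numerical checks (Riemann sum): the density integrates to ~1
# and its mean reproduces mu.
mu, alpha = 150.0, 0.5
dt = 0.1
total = sum(bpt_pdf(i * dt, mu, alpha) for i in range(1, 20000)) * dt
mean_est = sum((i * dt) * bpt_pdf(i * dt, mu, alpha)
               for i in range(1, 20000)) * dt
```

In the study's setting, alpha would not be fixed by hand but tied to the Gutenberg-Richter b value estimated from instrumental seismicity.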
Convergence of the frequency-magnitude distribution of global earthquakes - maybe in 200 years
(2013)
I study the ability to estimate the tail of the frequency-magnitude distribution of global earthquakes. While power-law scaling for small earthquakes is well supported by data, the tail remains speculative. In a recent study, Bell et al. (2013) claim that the frequency-magnitude distribution of global earthquakes converges to a tapered Pareto distribution. I show that this finding results from data-fitting errors, namely from the biased maximum likelihood estimation of the corner magnitude θ in strongly undersampled models. In particular, the estimation of θ depends solely on the few largest events in the catalog. Taking this into account, I compare various state-of-the-art models for the global frequency-magnitude distribution. After discarding undersampled models, the remaining ones, including the unbounded Gutenberg-Richter distribution, all perform equally well and are therefore indistinguishable. Convergence to a specific distribution, if it ever takes place, requires at least about 200 years of homogeneous recording of global seismicity.
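The tapered Pareto law discussed here is usually written for seismic moment. A sketch of its survival function (the index and corner-moment values below are illustrative assumptions, not the paper's estimates) shows why the corner parameter hinges on the few largest events: well below the corner, the taper is invisible.

```python
import math

def tapered_pareto_sf(m, m_t=1.0e17, beta=0.66, m_c=1.0e21):
    """Survival function of the tapered Pareto distribution for seismic
    moment m >= m_t: a power law with index beta, exponentially tapered
    above the corner moment m_c (which maps to the corner magnitude)."""
    return (m_t / m) ** beta * math.exp((m_t - m) / m_c)

# Ratio to a pure (untapered) power law: ~1 far below the corner,
# clearly below 1 only near and above it.
ratio_small = tapered_pareto_sf(1.0e18) / (1.0e17 / 1.0e18) ** 0.66
ratio_large = tapered_pareto_sf(1.0e21) / (1.0e17 / 1.0e21) ** 0.66
```

Because only events near the corner distinguish the tapered model from an unbounded Gutenberg-Richter law, an undersampled tail makes the two indistinguishable in practice.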
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded in the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address, with the long simulated seismicity, questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence-time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors accounting for grouped magnitudes and random errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early stage of seismicity evolution, even though only a small number of large earthquakes have occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected. We show how to correct for the bias caused by magnitude errors. For the region of Parkfield, we find that the aperiodicity, or coefficient of variation, is clearly higher than in studies that are based solely on the large earthquakes.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (Seismol. Res. Lett., 89, 1238, 2020) conclude that increased animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently, the observed anomalies in animal activity do not have any forecasting power.
The injection of fluids is a well-known origin for the triggering of earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies well within the expectation from past seismicity without adjusting any parameters. Vice versa, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, is considered to be necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
In the present study, we summarize and evaluate the endeavors from recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degree of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data of a long time interval, including several earthquakes with magnitude close to m_max, are available. Even if detailed earthquake information from some centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and calculate precise confidence intervals based on an imposed level of confidence α. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Λ and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Λ and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity, but high-risk, Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
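The Bayesian step described here can be sketched with standard conjugate posteriors: a Gamma posterior for the Poisson productivity and, treating magnitude excesses above the threshold as exponential, a Gamma posterior for β = b·ln(10). This is a common conjugate sketch under noninformative priors; the paper's exact prior choices may differ.

```python
import math
import random

def posterior_samples(mags, m0, t_obs, n_samp=5000, seed=0):
    """Draw (rate, b-value) samples from conjugate posteriors:
    - Poisson rate: Gamma(n + 1/2, scale 1/t_obs) (Jeffreys-type prior),
    - beta = b*ln(10): Gamma(n, scale 1/sum(m - m0)) (prior ~ 1/beta),
    assuming exponentially distributed magnitude excesses above m0."""
    rng = random.Random(seed)
    n = len(mags)
    s = sum(m - m0 for m in mags)
    out = []
    for _ in range(n_samp):
        lam = rng.gammavariate(n + 0.5, 1.0 / t_obs)  # events per unit time
        beta = rng.gammavariate(n, 1.0 / s)
        out.append((lam, beta / math.log(10.0)))      # (rate, b-value)
    return out
```

Propagating such joint samples of (Λ, b) through the distribution of the future maximum magnitude yields the confidence intervals discussed in the abstract.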
Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zoller et al. (2013) to estimate the maximum magnitude in a time window of length T_f for earthquake catalogs with a varying level of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law, the b-value and the event rate λ above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of λ. We suggest addressing this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake-rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T_f = 30 years is m ≥ 9.0. Studies of different subregions in Japan indicate high probabilities for M ≥ 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai regions. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
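The split estimation described here, b from instrumental seismicity and the large-event rate from the historic catalog, can be combined into an exceedance probability in a few lines. A schematic sketch with placeholder parameter values (not the paper's Japan estimates):

```python
import math

def prob_exceedance(b_instr, rate_hist, m_hist, m_target, t_f):
    """Probability of at least one event with magnitude >= m_target within
    t_f years. Following the splitting idea in the abstract, the b-value is
    taken from instrumental seismicity, while the Poisson rate of events
    >= m_hist comes from the historic catalog; the G-R law extrapolates
    the rate from m_hist to m_target."""
    rate_target = rate_hist * 10.0 ** (-b_instr * (m_target - m_hist))
    return 1.0 - math.exp(-rate_target * t_f)

# Illustrative: b = 1, one m >= 7 event per decade, 30-year horizon.
p_m9 = prob_exceedance(b_instr=1.0, rate_hist=0.1, m_hist=7.0,
                       m_target=9.0, t_f=30.0)
```

The design choice matters because aftershock clustering among small events would otherwise bias the Poisson rate, while the historic catalog alone is far too sparse to constrain b.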
The knowledge of the largest expected earthquake magnitude in a region is one of the key issues in probabilistic seismic hazard calculations and the estimation of worst-case scenarios. Earthquake catalogues are the most informative source for the inference of earthquake magnitudes. We analysed the earthquake catalogue for Central Asia with respect to the largest expected magnitudes m_T in a pre-defined time horizon T_f using a recently developed statistical methodology, extended by the explicit probabilistic consideration of magnitude errors. For this aim, we assumed broad error distributions for historical events, whereas the magnitudes of recently recorded instrumental earthquakes had smaller errors. The results indicate high probabilities for the occurrence of large events (M ≥ 8), even in short time intervals of a few decades. The expected magnitudes relative to the assumed maximum possible magnitude are generally higher for intermediate-depth earthquakes (51–300 km) than for shallow events (0–50 km). For long future time horizons, for example a few hundred years, earthquakes with M ≥ 8.5 have to be taken into account, although, apart from the 1889 Chilik earthquake, it is probable that no such event occurred during the observation period of the catalogue.
The increasing development of antibiotic resistance in bacteria has been a major problem for years, both in human and veterinary medicine. Prophylactic measures, such as the use of vaccines, are of great importance in reducing the use of antibiotics in livestock. These vaccines are mainly produced based on formaldehyde inactivation. However, the latter damages the recognition elements of the bacterial proteins and thus could reduce the immune response in the animal. An alternative inactivation method developed in this work is based on gentle photodynamic inactivation using carbon nanodots (CNDs) at excitation wavelengths λex > 290 nm. The photodynamic inactivation was characterized on the nonvirulent laboratory strain Escherichia coli K12 using synthesized CNDs. For a gentle inactivation, the CNDs must be absorbed into the cytoplasm of the E. coli cell. Thus, the inactivation through photoinduced formation of reactive oxygen species only takes place inside the bacterium, which means that the outer membrane is neither damaged nor altered. The loading of the CNDs into E. coli was examined using fluorescence microscopy. Complete loading of the bacterial cells could be achieved in less than 10 min. These studies revealed a reversible uptake process allowing the recovery and reuse of the CNDs after irradiation and before the administration of the vaccine. The success of photodynamic inactivation was verified by viability assays on agar. In a homemade flow photoreactor, the fastest successful irradiation of the bacteria could be carried out in 34 s. Therefore, the photodynamic inactivation based on CNDs is very effective. The membrane integrity of the bacteria after irradiation was verified by slide agglutination and atomic force microscopy. The method developed for the laboratory strain E. coli K12 could then be successfully applied to the important avian pathogens Bordetella avium and Ornithobacterium rhinotracheale to aid the development of novel vaccines.
The combination of high-performance liquid chromatography and electrospray ionization ion mobility spectrometry facilitates the two-dimensional separation of complex mixtures in the retention- and drift-time plane. The ion mobility spectrometer presented here was optimized for flow rates customarily used in high-performance liquid chromatography, between 100 and 1500 µL/min. The characterization of the system with respect to such parameters as the peak capacity of each time dimension and of the 2D spectrum was carried out based on a separation of a pesticide mixture containing 24 substances. While the total ion current chromatogram is coarsely resolved, exhibiting coelutions for a number of compounds, all substances can be separately detected in the 2D plane due to the orthogonality of the separations in the retention and drift dimensions. Another major advantage of the ion mobility detector is the identification of substances based on their characteristic mobilities. Electrospray ionization allows the detection of substances lacking a chromophore. As an example, the separation of a mixture of 18 amino acids is presented. Software built upon the free mass spectrometry package OpenMS was developed for processing the extensive 2D data. The different processing steps are implemented as separate modules which can be arranged in a graphic workflow, facilitating automated processing of the data.
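The 2D peak capacity mentioned in this abstract is, for ideally orthogonal dimensions, the product of the per-dimension capacities. A toy sketch (the numbers are placeholders, not the paper's measured values):

```python
def peak_capacity_2d(n_retention, n_drift, orthogonality=1.0):
    """Ideal two-dimensional peak capacity: the product of the peak
    capacities of the retention and drift dimensions, discounted by an
    orthogonality factor in (0, 1] for correlated separations."""
    return n_retention * n_drift * orthogonality

# Placeholder per-dimension capacities for an HPLC x IM separation.
n2d = peak_capacity_2d(60, 15)
```

The multiplicative gain is exactly why coeluting compounds that overlap in the chromatogram can still be resolved in the drift dimension.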
The application of electrospray ionization (ESI) ion mobility (IM) spectrometry on the detection end of a high-performance liquid chromatograph has been a subject of study for some time. So far, this method has been limited to low flow rates or has required splitting of the liquid flow. This work presents a novel concept of an ESI source facilitating the stable operation of the spectrometer at flow rates between 10 µL min⁻¹ and 1500 µL min⁻¹ without flow splitting, advancing the T-cylinder design developed by Kurnin and co-workers. Flow rates eight times faster than previously reported were achieved because of a more efficient dispersion of the liquid at increased electrospray voltages combined with nebulization by a sheath gas. Imaging revealed the spray operation to be in a rotationally symmetric multijet mode. The novel ESI-IM spectrometer tolerates high water contents (≤ 90%) and electrolyte concentrations up to 10 mM, meeting another condition required of high-performance liquid chromatography (HPLC) detectors. Limits of detection of 50 nM for promazine in the positive mode and 1 µM for 1,3-dinitrobenzene in the negative mode were established. Three mixtures of reduced complexity (five surfactants, four neuroleptics, and two isomers) were separated in the millisecond regime in stand-alone operation of the spectrometer. Separations of two more complex mixtures (five neuroleptics and 13 pesticides) demonstrate the application of the spectrometer as an HPLC detector. The examples illustrate the advantages of the spectrometer over the established diode array detector, in terms of additional IM separation of substances not fully separated in the retention-time domain as well as identification of substances based on their characteristic IMs.
The capability of electrospray ionization (ESI)-ion mobility (IM) spectrometry for reaction monitoring is assessed both as a stand-alone real-time technique and in combination with HPLC. A three-step chemical reaction, consisting of a Williamson ether synthesis followed by a hydrogenation and an N-alkylation step, is chosen for demonstration. Intermediates and products are determined with a drift time to mass-per-charge correlation. Addition of an HPLC column to the setup increases the separation power and allows the determination of further species. Monitoring of the intensities of the various species over the reaction time allows the detection of the end of the reaction, determination of the rate-limiting step, observation of the system response in discontinuous processes, and optimization of the mass ratios of the starting materials. However, charge competition in ESI influences the quantitative detection of substances in the reaction mixture. Therefore, two different methods are investigated, which allow the quantification and investigation of reaction kinetics. The first method is based on the pre-separation of the compounds on an HPLC column and their subsequent individual detection in the ESI-IM spectrometer. The second method involves an extended calibration procedure, which accounts for charge-competition effects and facilitates nearly real-time quantification.
The pressure dependence of sheath-gas-assisted electrospray ionization (ESI) was investigated based on two complementary experimental setups, namely an ESI-ion mobility (IM) spectrometer and an ESI capillary - Faraday plate setup housed in an optically accessible vacuum chamber. The ESI-IM spectrometer is capable of working in the pressure range between 300 and 1000 mbar. Another aim was the assessment of the analytical capabilities of a subambient-pressure ESI-IM spectrometer. The pressure dependence of ESI was characterized by imaging the electrospray and recording current-voltage (I-U) curves. Qualitatively different behavior was observed in the two setups. While the current rises continuously with the voltage in the capillary-plate setup, a sharp increase of the current was measured in the IM spectrometer above a pressure-dependent threshold voltage. The different character can be attributed to the detection of different species in the two experiments: in the capillary-plate experiment, a multitude of charged species is detected, while only desolvated ions contribute to the IM spectrometer signal. This finding demonstrates the utility of IM spectrometry for the characterization of ESI, since in contrast to the capillary-plate setup, the release of ions from the electrospray droplets can be observed. The I-U curves change significantly with pressure. An important result is the reduction of the maximum current with decreasing pressure. The associated loss of ionization efficiency can be compensated by a more efficient transfer of ions in the IM spectrometer at increased E/N. Thus, similar limits of detection could be obtained at 500 mbar and 1 bar.
Measures for interoperability of phenotypic data: minimum information requirements and formatting
(2016)
Background: Plant phenotypic data shrouds a wealth of information which, when accurately analysed and linked to other data types, brings to light the knowledge about the mechanisms of life. As phenotyping is a field of research comprising manifold, diverse and time-consuming experiments, the findings can be fostered by reusing and combining existing datasets. Their correct interpretation, and thus replicability, comparability and interoperability, is possible provided that the collected observations are equipped with an adequate set of metadata. So far there have been no common standards governing phenotypic data description, which has hampered data exchange and reuse. Results: In this paper we propose guidelines for the proper handling of information about plant phenotyping experiments, in terms of both the recommended content of the description and its formatting. We provide a document called "Minimum Information About a Plant Phenotyping Experiment", which specifies what information about each experiment should be given, and a Phenotyping Configuration for the ISA-Tab format, which makes it possible to organise this information practically within a dataset. We provide examples of ISA-Tab-formatted phenotypic data, and a general description of a few systems where the recommendations have been implemented. Conclusions: Acceptance of the rules described in this paper by the plant phenotyping community will help to achieve findable, accessible, interoperable and reusable data.
Next-generation sequencing methods provide comprehensive data for the structural and functional analysis of the genome. Draft genomes with a low contig number and a high N50 value can give insight into the structure of the genome as well as provide information for its annotation. In this study, we designed a pipeline that can be used to assemble prokaryotic draft genomes with a low number of contigs and a high N50 value. We aimed to use a combination of two de novo assembly tools (SPAdes and IDBA-Hybrid) and to evaluate the impact of this approach on the quality metrics of the assemblies. The pipeline was tested with raw sequence data with short reads (< 300 bp) for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and used the 16S rRNA extracted from this assembly to find a closely related organism. The IDBA-Hybrid assembler was then used to obtain a second assembly, guided by the genome of the closely related organism. Finally, SPAdes was run again using the second assembly, produced by IDBA-Hybrid, as a hint. The results were evaluated using QUAST and BUSCO. The pipeline succeeded in reducing the contig numbers and increasing the N50 values of the draft genome assemblies while preserving their coverage.
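The N50 statistic that this pipeline optimises can be computed directly from a list of contig lengths; a minimal sketch of this QUAST-style metric:

```python
def n50(contig_lengths):
    """N50: the length of the shortest contig in the minimal set of
    longest contigs that together cover at least half of the total
    assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    acc = 0
    for length in lengths:
        acc += length
        if acc >= half:
            return length

# Example: contigs of 100+80 kb already cover half of a 280 kb assembly.
value = n50([100, 80, 50, 30, 20])  # -> 80
```

A higher N50 at a fixed contig count thus indicates that the assembly is dominated by fewer, longer contigs, which is what the two-assembler combination aims for.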
We develop a method of finding analytical solutions of the Bogolyubov-de Gennes equations for the excitations of a Bose condensate in the Thomas-Fermi regime in harmonic traps of any asymmetry and introduce a classification of eigenstates. In the case of cylindrical symmetry, we emphasize the presence of an accidental degeneracy in the excitation spectrum at certain values of the projection of the orbital angular momentum on the symmetry axis and discuss possible consequences of the degeneracy in the context of new signatures of Bose-Einstein condensation.
We present the fabrication of TiO2 nanotube electrodes with high biocompatibility and extraordinary spectroscopic properties. Intense surface-enhanced resonance Raman signals of the heme unit of the redox enzyme cytochrome b5 were observed upon covalent immobilization of the protein matrix on the TiO2 surface, revealing overall preserved structural integrity and redox behavior. The enhancement factor could be rationally controlled by varying the electrode annealing temperature, reaching a record maximum value of over 70 at 475 °C. For the first time, such high values are reported for non-directly surface-interacting probes, for which the involvement of charge-transfer processes in signal amplification can be excluded. The origin of the surface enhancement is exclusively attributed to enhanced localized electric fields resulting from the specific optical properties of the nanotubular geometry of the electrode.
In this study, the spatial and temporal impacts of the Ataturk Dam on agro-meteorological aspects of the Southeastern Anatolia region have been investigated. Change detection and environmental impacts due to water-reserve changes in Ataturk Dam Lake have been determined and evaluated using multi-temporal Landsat satellite imagery and meteorological datasets within the period 1984–2011. These time series have been evaluated for three periods. The dam construction period constitutes the first part of the study: land cover/use changes, especially of agricultural fields under the Ataturk Dam Lake and in its vicinity, have been identified for the period 1984–1992. The second period comprises the 10-year span after the completion of the filling of the reservoir in 1992. For this period, Landsat and meteorological time-series analyses are examined to assess the impact of the Ataturk Dam Lake on selected irrigated agricultural areas. For the last 9-year period, from 2002 to 2011, the relationships between seasonal water-reserve changes and irrigated plains under changing climatic factors primarily driving vegetation activity (monthly, seasonal, and annual fluctuations of rainfall rate, air temperature, and humidity) on the watershed have been investigated using a 30-year meteorological time series. The results showed that approximately 368 km² of agricultural fields were affected by inundation due to the Ataturk Dam Lake. However, irrigated agricultural fields increased to 56.3% of the total area (1552 of 2756 km²) on the Harran Plain within the period 1984–2011.
Because of political conflicts and climate change, migration will increase worldwide, and integration into host societies is a challenge for migrants. We hypothesize that migrants who take up this challenge in a new social environment are taller than migrants who do not. Using a questionnaire, we analyze possible social, nutritional and ethnic factors influencing the body height (BH) of adult offspring of Turkish migrants (n = 82, 39 males) aged 18 to 34 years (mean age 24.6 years). The results of multiple regression (downward selection) show that the more a male adult offspring of Turkish migrants feels he belongs to the Turkish culture, the shorter he is (95% CI, -3.79, -0.323). Conversely, the more a male adult offspring of Turkish migrants feels he belongs to the German culture, the taller he is (95% CI, -0.152, 1.738). We discuss this as comparable to primates taking up a dominance challenge, where the resulting increase in body size is associated with a higher IGF-1 level. IGF-1 is associated with emotional belonging and has a fundamental role in the regulation of metabolism and growth of the human body. With all the caveats of a pilot study, our results show that the successful challenge of integration into a new society is strongly associated with emotional integration and identification in the sense of a personal feeling of belonging to that society. We discuss taller BH as a signal of social growth adjustment. In this sense, a secular trend of BH adaptation of migrants to hosts is a sign of integration.