In this paper, we move from the large body of research documenting evidence of climate migration to two questions: who are the climate migrants, and where do they go? These questions are crucial for designing policies that mitigate the welfare losses of migration choices due to climate change. We study the direct and heterogeneous associations between weather extremes and migration in rural India. We combine ERA5 reanalysis data with the India Human Development Survey household panel and conduct regression analyses using linear probability and multinomial logit models. This enables us to establish a causal relationship between temperature and precipitation anomalies and overall migration, as well as migration by destination. We show that adverse weather shocks decrease rural-rural and international migration and push people into cities in different, presumably more prosperous, states. A series of positive weather shocks, however, facilitates international migration and migration to cities within the same state. Further, our results indicate that, in contrast to other migrants, climate migrants are likely to come from the lower end of the skill distribution and from households strongly dependent on agricultural production. We estimate that approximately 8% of all rural-urban moves between 2005 and 2012 can be attributed to weather. This figure might increase as a consequence of climate change. A key policy recommendation is therefore to take steps to facilitate the integration of less educated migrants into the urban labor market.
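The estimation strategy described above can be illustrated with a minimal linear-probability-model sketch on synthetic data. This is not the study's data or code: the variable names, coefficients, and data-generating process below are hypothetical, chosen only to show how a binary migration outcome is regressed on weather and household covariates by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical covariates: a standardized weather anomaly and a household's
# dependence on agriculture (both invented for illustration).
weather_shock = rng.normal(size=n)
agri_dependence = rng.uniform(size=n)

# Assumed true data-generating process (illustration only): the probability
# of sending a migrant rises with the shock and with agricultural dependence.
p = np.clip(0.2 + 0.05 * weather_shock + 0.10 * agri_dependence, 0, 1)
migrate = rng.binomial(1, p)  # 1 = household sent a migrant

# Linear probability model: OLS of the binary outcome on the covariates.
X = np.column_stack([np.ones(n), weather_shock, agri_dependence])
beta, *_ = np.linalg.lstsq(X, migrate, rcond=None)
```

With enough observations, `beta[1]` and `beta[2]` recover the assumed marginal effects; a multinomial logit (e.g. for migration by destination) would replace the OLS step with a categorical-outcome model.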
We show that the codifference is a useful tool in studying the ergodicity breaking and non-Gaussianity properties of stochastic time series. While the codifference is a measure of dependence that was previously studied mainly in the context of stable processes, we here extend its range of applicability to random-parameter and diffusing-diffusivity models which are important in contemporary physics, biology and financial engineering. We prove that the codifference detects forms of dependence and ergodicity breaking which are not visible from analysing the covariance and correlation functions. We also discuss a related measure of dispersion, which is a nonlinear analogue of the mean squared displacement.
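As a concrete illustration of the dependence measure discussed above, the following sketch estimates the (real part of the) codifference of a time series from empirical characteristic functions. This is a generic estimator written for illustration, not the authors' code; the usage exploits the fact that for a zero-mean stationary Gaussian process the codifference coincides with the autocovariance, which gives a simple sanity check.

```python
import numpy as np

def codifference(x, lag):
    """Empirical codifference at the given lag, via characteristic functions:
    tau(lag) = ln E e^{i(X_{t+lag}-X_t)} - ln E e^{i X_{t+lag}} - ln E e^{-i X_t}.
    For a zero-mean Gaussian process this equals the autocovariance."""
    a, b = x[lag:], x[:-lag]
    cf = lambda z: np.mean(np.exp(1j * z))  # empirical characteristic function at 1
    return np.real(np.log(cf(a - b)) - np.log(cf(a)) - np.log(cf(-b)))

# Sanity check on a Gaussian AR(1) process.
rng = np.random.default_rng(2)
phi, n = 0.7, 200_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]
```

For non-Gaussian processes (e.g. random-parameter or diffusing-diffusivity models) the codifference and the covariance separate, which is exactly the diagnostic the abstract describes.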
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one-point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
Many studies on biological and soft matter systems report the joint presence of a linear mean-squared displacement and a non-Gaussian probability density exhibiting, for instance, exponential or stretched-Gaussian tails. This phenomenon is ascribed to the heterogeneity of the medium and is captured by random parameter models such as 'superstatistics' or 'diffusing diffusivity'. Independently, scientists working in the area of time series analysis and statistics have studied a class of discrete-time processes with similar properties, namely, random coefficient autoregressive models. In this work we try to reconcile these two approaches and thus provide a bridge between physical stochastic processes and autoregressive models. We start from the basic Langevin equation of motion with time-varying damping or diffusion coefficients and establish the link to random coefficient autoregressive processes. By exploring that link we gain access to efficient statistical methods which can help to identify data exhibiting Brownian yet non-Gaussian diffusion.
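The 'Brownian yet non-Gaussian' phenomenon these abstracts refer to can be reproduced with a minimal toy simulation. This is a sketch under the simplest superstatistics assumption (each trajectory gets one diffusivity drawn from an exponential distribution), not the authors' model: the ensemble mean-squared displacement grows linearly, while the displacement distribution acquires Laplace (exponential) tails, visible in a kurtosis of 6 instead of the Gaussian value 3.

```python
import numpy as np

rng = np.random.default_rng(3)
n_traj, n_steps, dt = 20_000, 100, 1.0

# Superstatistics limit: each trajectory gets its own diffusivity, drawn once.
D = rng.exponential(scale=1.0, size=n_traj)

# Brownian steps with per-trajectory step variance 2*D*dt.
steps = rng.normal(size=(n_traj, n_steps)) * np.sqrt(2.0 * D[:, None] * dt)
x = np.cumsum(steps, axis=1)

msd = np.mean(x**2, axis=0)  # ensemble MSD: ~ 2*<D>*t, i.e. linear in t
final = x[:, -1]
kurtosis = np.mean(final**4) / np.mean(final**2) ** 2  # 3 Gaussian, 6 Laplace
```

Averaging a Gaussian over an exponentially distributed variance yields exactly a Laplace density, which is why heavy tails coexist here with a perfectly linear MSD.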
The Proteasome Acts as a Hub for Plant Immunity and Is Targeted by Pseudomonas Type III Effectors
(2016)
Recent evidence suggests that the ubiquitin-proteasome system is involved in several aspects of plant immunity and that a range of plant pathogens subvert the ubiquitin-proteasome system to enhance their virulence. Here, we show that proteasome activity is strongly induced during basal defense in Arabidopsis (Arabidopsis thaliana). Mutant lines of the proteasome subunits RPT2a and RPN12a support increased bacterial growth of virulent Pseudomonas syringae pv tomato DC3000 (Pst) and Pseudomonas syringae pv maculicola ES4326. Both proteasome subunits are required for pathogen-associated molecular pattern-triggered immunity responses. Analysis of bacterial growth after a secondary infection of systemic leaves revealed that the establishment of systemic acquired resistance (SAR) is impaired in proteasome mutants, suggesting that the proteasome also plays an important role in defense priming and SAR. In addition, we show that Pst inhibits proteasome activity in a type III secretion-dependent manner. A screen for type III effector proteins from Pst for their ability to interfere with proteasome activity revealed HopM1, HopAO1, HopA1, and HopG1 as putative proteasome inhibitors. Biochemical characterization of HopM1 by mass spectrometry indicates that HopM1 interacts with several E3 ubiquitin ligases and proteasome subunits. This supports the hypothesis that HopM1 associates with the proteasome, leading to its inhibition. Thus, the proteasome is an essential component of pathogen-associated molecular pattern-triggered immunity and SAR, which is targeted by multiple bacterial effectors.
XopJ is a Xanthomonas type III effector protein that promotes bacterial virulence on susceptible pepper plants through the inhibition of the host cell proteasome and a resultant suppression of salicylic acid (SA)-dependent defense responses. We show here that Nicotiana benthamiana leaves transiently expressing XopJ display hypersensitive response (HR)-like symptoms when exogenously treated with SA. This apparent avirulence function of XopJ was further dependent on effector myristoylation as well as on an intact catalytic triad, suggesting a requirement of its enzymatic activity for HR-like symptom elicitation. The ability of XopJ to cause an HR-like symptom development upon SA treatment was lost upon silencing of SGT1 and NDR1, respectively, but was independent of EDS1 silencing, suggesting that XopJ is recognized by an R protein of the CC-NBS-LRR class. Furthermore, silencing of NPR1 abolished the elicitation of HR-like symptoms in XopJ-expressing leaves after SA application. Measurement of the proteasome activity indicated that proteasome inhibition by XopJ was alleviated in the presence of SA, an effect that was not observed in NPR1-silenced plants. Our results suggest that XopJ-triggered HR-like symptoms are closely related to the virulence function of the effector and that XopJ follows a two-signal model in order to elicit a response in the non-host plant N. benthamiana.
Quantitative estimates of sea-level rise in the Mediterranean Basin become increasingly accurate thanks to detailed satellite monitoring. However, such measuring campaigns cover several years to decades, while longer-term sea-level records are rare for the Mediterranean. We used a data archeological approach to reanalyze monthly mean sea-level data of the Antalya-I (1935–1977) tide gauge to fill this gap. We checked the accuracy and reliability of these data before merging them with the more recent records of the Antalya-II (1985–2009) tide gauge, accounting for an eight-year hiatus. We obtain a composite time series of monthly and annual mean sea levels spanning some 75 years, providing the longest record for the eastern Mediterranean Basin, and thus an essential tool for studying the region's recent sea-level trends. We estimate a relative mean sea-level rise of 2.2 ± 0.5 mm/year between 1935 and 2008, with an annual variability (expressed here as the standard deviation of the residuals, σ_residuals = 41.4 mm) above that at the closest tide gauges (e.g., Thessaloniki, Greece, σ_residuals = 29.0 mm). Relative sea-level rise accelerated to 6.0 ± 1.5 mm/year at Antalya-II; we attribute roughly half of this rate (~3.6 mm/year) to tectonic crustal motion and anthropogenic land subsidence. Our study highlights the value of data archeology for recovering and integrating historic tide gauge data for long-term sea-level and climate studies.
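The kind of trend estimation described above can be illustrated with a short least-squares sketch on synthetic monthly data. The trend, seasonal amplitude, and noise level below are loosely modeled on the numbers in the abstract (2.2 mm/year trend, ~41 mm residual scatter); this is synthetic data for illustration, not the Antalya record.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly mean sea level, 1935-2008: linear trend of 2.2 mm/yr
# plus an annual cycle and noise with sd ~41 mm (values are assumptions).
years = np.arange(1935, 2009, 1 / 12)
msl = (2.2 * (years - 1935)
       + 30.0 * np.sin(2 * np.pi * years)
       + rng.normal(0.0, 41.0, size=years.size))

# Ordinary least squares for the linear trend (mm per year).
A = np.column_stack([np.ones_like(years), years - 1935])
coef, *_ = np.linalg.lstsq(A, msl, rcond=None)
trend_mm_per_year = coef[1]
```

In practice one would also model the seasonal cycle explicitly and handle the data gap between the two gauges, but the trend recovery itself is just this regression.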
Turkish is described as a language that relies predominantly on non-finite subordination in the domain of clause combining. However, there are also strategies of finite subordination, as well as means of syndetic and asyndetic paratactic clause combining, especially in informal settings.
Clause combining is and has been one of the focal points of research on heritage Turkish (h-Turkish).
One point is particularly clear: In comparison with the monolingual setting, finite means of clause combining are more frequent in h-Turkish in Germany, the U.S., and the Netherlands, while non-finite means of clause combining are less frequent.
Overall, our results confirm the findings of earlier studies: heritage speakers in Germany and the U.S. prefer paratactic means of clause combining using connectors, as opposed to monolingual speakers.
Our results also reveal that age (adolescents vs. adults) and register (informal vs. formal) significantly modulate the use of connectors.
Moreover, we find that the shift in preferences in means of clause combining triggers an expansion in the system of connectors and leads to the development of new narrative connectors, such as o zaman and derken.
The system of syndetic paratactic clause combining is expanding in heritage Turkish. This expansion calls for multifaceted modeling of change in heritage languages, which integrates language-internal factors (register), dynamics of convergence with the contact languages, and extra-linguistic factors (age and language use).
Reading requires the assembly of cognitive processes across a wide spectrum from low-level visual perception to high-level discourse comprehension. One approach to unravelling the dynamics associated with these processes is to determine how eye movements are influenced by the characteristics of the text, in particular which features of the words within the perceptual span maximise the information intake due to foveal, spillover, parafoveal, and predictive processing. One way to test the generalisability of current proposals of such distributed processing is to examine them across different languages. For Turkish, an agglutinative language with a shallow orthography-phonology mapping, we replicate the well-known canonical main effects of frequency and predictability of the fixated word, as well as effects of incoming saccade amplitude and fixation location within the word, on single-fixation durations with data from 35 adults reading 120 nine-word sentences. Evidence for previously reported effects of the characteristics of neighbouring words and interactions was mixed. There was no evidence for the expected Turkish-specific morphological effect of the number of inflectional suffixes on single-fixation durations. To control for the word-selection bias associated with single-fixation durations, we also tested effects on word skipping, single-fixation, and multiple-fixation cases with a baseline-category logit model, assuming an increase of difficulty for an increase in the number of fixations. With this model, significant effects of word characteristics and of the number of inflectional suffixes of the foveal word on the probabilities of the number of fixations were observed, while the effects of the characteristics of neighbouring words and interactions remained mixed.
Because of political conflicts and climate change, migration will increase worldwide, and integration into host societies is a challenge for migrants. We hypothesize that migrants who take up this challenge in a new social environment are taller than migrants who do not. Using a questionnaire, we analyze possible social, nutritional, and ethnic factors influencing the body height (BH) of adult offspring of Turkish migrants (n = 82, 39 males) aged 18 to 34 years (mean age 24.6 years). The results of multiple regression (downward selection) show that the more a male adult offspring of Turkish migrants feels he belongs to Turkish culture, the shorter he is (95% CI, -3.79, -0.323); conversely, the more he feels he belongs to German culture, the taller he is (95% CI, -0.152, 1.738). We discuss this as comparable to primates meeting a dominance challenge, where the resulting increase in body size is associated with higher IGF-1 levels. IGF-1 is associated with emotional belonging and plays a fundamental role in the regulation of metabolism and growth of the human body. With all the caveats of a pilot study, our results indicate that successfully meeting the challenge of integration into a new society is strongly associated with emotional integration and identification, in the sense of a personal feeling of belonging to that society. We discuss taller BH as a signal of social growth adjustment. In this sense, a secular trend of migrants' BH adapting to that of their hosts is a sign of integration.
In this study, the spatial and temporal impacts of the Ataturk Dam on agro-meteorological aspects of the Southeastern Anatolia region have been investigated. Changes and environmental impacts due to water-reserve variations in Ataturk Dam Lake were detected and evaluated using multi-temporal Landsat satellite imagery and meteorological datasets covering the period 1984-2011. These time series were evaluated over three periods. The dam construction period constitutes the first part of the study: land cover/use changes, especially in agricultural fields beneath the Ataturk Dam Lake and in its vicinity, were identified for 1984-1992. The second period comprises the 10 years after the reservoir was completely filled in 1992; for this period, Landsat and meteorological time-series analyses were examined to assess the impact of the Ataturk Dam Lake on selected irrigated agricultural areas. For the last 9-year period, from 2002 to 2011, the relationships between seasonal water-reserve changes and irrigated plains under the changing climatic factors primarily driving vegetation activity (monthly, seasonal, and annual fluctuations of rainfall, air temperature, and humidity) were investigated using a 30-year meteorological time series. The results show that approximately 368 km² of agricultural fields were affected by inundation due to the Ataturk Dam Lake. However, irrigated agricultural fields on the Harran Plain increased by 56.3% of the total area (1552 of 2756 km²) within the period 1984-2011.
Our modern conception of criminal law is shaped by the idea that criminal law secures for citizens a free and peaceful coexistence while guaranteeing all constitutionally protected fundamental rights. The use of criminal law requires legitimation and must not be derived from moral notions or ideas. Accordingly, it cannot be punishable when the holder of a legal interest freely disposes of a legal interest that is at his or her disposal. In such cases, the protection of life under criminal law and the individual's right to self-determination often stand in tension.
The present work examines how this tension has developed in the case law of the highest courts and how it is now resolved. Specifically, it asks how criminal law has assessed cases in which someone contributed to causing a death that was at the same time based on the victim's own freely responsible decision.
We present the fabrication of TiO2 nanotube electrodes with high biocompatibility and extraordinary spectroscopic properties. Intense surface-enhanced resonance Raman signals of the heme unit of the redox enzyme cytochrome b5 were observed upon covalent immobilization of the protein matrix on the TiO2 surface, revealing overall preserved structural integrity and redox behavior. The enhancement factor could be rationally controlled by varying the electrode annealing temperature, reaching a record maximum value of over 70 at 475 °C. For the first time, such high values are reported for probes that do not interact directly with the surface, for which the involvement of charge-transfer processes in signal amplification can be excluded. The origin of the surface enhancement is attributed exclusively to enhanced localized electric fields resulting from the specific optical properties of the nanotubular geometry of the electrode.
We develop a method of finding analytical solutions of the Bogolyubov-de Gennes equations for the excitations of a Bose condensate in the Thomas-Fermi regime in harmonic traps of any asymmetry and introduce a classification of eigenstates. In the case of cylindrical symmetry we emphasize the presence of an accidental degeneracy in the excitation spectrum at certain values of the projection of orbital angular momentum on the symmetry axis and discuss possible consequences of the degeneracy in the context of new signatures of Bose-Einstein condensation.
Today it is common to regard honor as an obsolete concept, one that belongs to an archaic model of thought and no longer shapes action in contemporary society. The honor killings still committed in various parts of the world seem to confirm this claim. This book, however, argues that it is not the concept of honor but its interpretations that are archaic in nature and therefore to be questioned. Honor denotes the social worth a person attains through his or her estimable actions; it therefore cannot serve as a motive for morally questionable practices. Against this background, the book presents forms and preconditions of honor that are both adaptable to our time and ethically defensible.
Next-generation sequencing methods provide comprehensive data for the structural and functional analysis of genomes. Draft genomes with a low contig number and a high N50 value give insight into the structure of the genome and provide information for its annotation. In this study, we designed a pipeline for assembling prokaryotic draft genomes with a low number of contigs and a high N50 value. We combined two de novo assembly tools (SPAdes and IDBA-Hybrid) and evaluated the impact of this approach on the quality metrics of the assemblies. The pipeline was tested on raw short-read sequence data (< 300 bp) for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and used the 16S rRNA extracted from this assembly to find a closely related organism. The IDBA-Hybrid assembler was then used to obtain a second assembly, using the closely related organism's genome as a reference. Finally, SPAdes was run again using the second, IDBA-Hybrid-produced assembly as a hint. The results were evaluated using QUAST and BUSCO. The pipeline succeeded in reducing the contig numbers and increasing the N50 values of the draft genome assemblies while preserving their coverage.
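Since the pipeline's headline metric is the N50, here is a minimal reference implementation of that statistic (a generic definition, not the authors' code): the N50 is the length of the contig at which the cumulative length of contigs, sorted from longest to shortest, first reaches half of the total assembly length. The contig lengths in the usage example are invented.

```python
def n50(contig_lengths):
    """N50: length of the contig at which the cumulative length of contigs
    (sorted longest first) first reaches half the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    total = sum(lengths)
    cumulative = 0
    for length in lengths:
        cumulative += length
        if 2 * cumulative >= total:
            return length

# Fewer, longer contigs raise the N50 of an assembly of the same total size:
fragmented = [80, 70, 50, 40, 30, 20]  # hypothetical contig lengths (kb)
merged = [150, 90, 50]                 # same total length, fewer contigs
```

Here `n50(fragmented)` is 70 while `n50(merged)` is 150, which is exactly the improvement the pipeline targets when it merges assemblies from the two tools.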
Literarische Grammatik
(2023)
This volume brings together nine contributions with the aim of relating linguistics and literary studies to one another: examining literature grammatically and (re)thinking grammar for literature. Each contribution takes at least one grammatical and one literary object as its starting point. The range is wide, extending from Bodo Kirchhoff's novel 'Dämmer und Aufruhr' and Wolfgang Borchert's short story 'Das Brot' to Marion Poschmann's poem cycle 'Kindergarten Lichtenberg', and covers the most diverse linguistic domains, such as tense, semantic roles, punctuation marks, and metaphors.
While integrating grammar and literature in teaching is positively encouraged at school, as university disciplines they often pursue quite different questions about different works of language. Against this background, this volume is an interdisciplinary attempt to offer suggestions and new perspectives for educational contexts in both schools and universities.
Measures for interoperability of phenotypic data: minimum information requirements and formatting
(2016)
Background: Plant phenotypic data shrouds a wealth of information which, when accurately analysed and linked to other data types, brings to light the knowledge about the mechanisms of life. As phenotyping is a field of research comprising manifold, diverse and time-consuming experiments, the findings can be fostered by reusing and combining existing datasets. Their correct interpretation, and thus replicability, comparability and interoperability, is possible provided that the collected observations are equipped with an adequate set of metadata. So far there have been no common standards governing phenotypic data description, which has hampered data exchange and reuse. Results: In this paper we propose guidelines for the proper handling of information about plant phenotyping experiments, in terms of both the recommended content of the description and its formatting. We provide a document called "Minimum Information About a Plant Phenotyping Experiment", which specifies what information about each experiment should be given, and a Phenotyping Configuration for the ISA-Tab format, which makes it possible to practically organise this information within a dataset. We provide examples of ISA-Tab-formatted phenotypic data, and a general description of a few systems where the recommendations have been implemented. Conclusions: Acceptance of the rules described in this paper by the plant phenotyping community will help to achieve findable, accessible, interoperable and reusable data.
The pressure dependence of sheath gas assisted electrospray ionization (ESI) was investigated based on two complementary experimental setups, namely an ESI-ion mobility (IM) spectrometer and an ESI capillary - Faraday plate setup housed in an optically accessible vacuum chamber. The ESI-IM spectrometer is capable of working in the pressure range between 300 and 1000 mbar. Another aim was the assessment of the analytical capabilities of a subambient pressure ESI-IM spectrometer. The pressure dependence of ESI was characterized by imaging the electrospray and recording current-voltage (I-U) curves. Qualitatively different behavior was observed in the two setups. While the current rises continuously with the voltage in the capillary-plate setup, a sharp increase of the current was measured in the IM spectrometer above a pressure-dependent threshold voltage. The different character can be attributed to the detection of different species in the two experiments. In the capillary-plate experiment, a multitude of charged species are detected, while only desolvated ions contribute to the IM spectrometer signal. This finding demonstrates the utility of IM spectrometry for the characterization of ESI, since in contrast to the capillary-plate setup, the release of ions from the electrospray droplets can be observed. The I-U curves change significantly with pressure. An important result is the reduction of the maximum current with decreasing pressure. The connected loss of ionization efficiency can be compensated by a more efficient transfer of ions in the IM spectrometer at increased E/N. Thus, similar limits of detection could be obtained at 500 mbar and 1 bar.
The capability of electrospray ionization (ESI)-ion mobility (IM) spectrometry for reaction monitoring is assessed both as a stand-alone real-time technique and in combination with HPLC. A three-step chemical reaction, consisting of a Williamson ether synthesis followed by a hydrogenation and an N-alkylation step, is chosen for demonstration. Intermediates and products are determined with a drift time to mass-per-charge correlation. Addition of an HPLC column to the setup increases the separation power and allows the determination of further species. Monitoring of the intensities of the various species over the reaction time allows the detection of the end of the reaction, determination of the rate-limiting step, observation of the system response in discontinuous processes, and optimization of the mass ratios of the starting materials. However, charge competition in ESI influences the quantitative detection of substances in the reaction mixture. Therefore, two different methods are investigated, which allow the quantification and investigation of reaction kinetics. The first method is based on the pre-separation of the compounds on an HPLC column and their subsequent individual detection in the ESI-IM spectrometer. The second method involves an extended calibration procedure, which considers charge competition effects and facilitates nearly real-time quantification.
The application of electrospray ionization (ESI) ion mobility (IM) spectrometry on the detection end of a high-performance liquid chromatograph has been a subject of study for some time. So far, this method has been limited to low flow rates or has required splitting of the liquid flow. This work presents a novel concept of an ESI source facilitating the stable operation of the spectrometer at flow rates between 10 μL min⁻¹ and 1500 μL min⁻¹ without flow splitting, advancing the T-cylinder design developed by Kurnin and co-workers. Flow rates eight times faster than previously reported were achieved because of a more efficient dispersion of the liquid at increased electrospray voltages combined with nebulization by a sheath gas. Imaging revealed the spray operation to be in a rotationally symmetric multijet mode. The novel ESI-IM spectrometer tolerates high water contents (≤ 90%) and electrolyte concentrations up to 10 mM, meeting another condition required of high-performance liquid chromatography (HPLC) detectors. Limits of detection of 50 nM for promazine in the positive mode and 1 μM for 1,3-dinitrobenzene in the negative mode were established. Three mixtures of reduced complexity (five surfactants, four neuroleptics, and two isomers) were separated in the millisecond regime in stand-alone operation of the spectrometer. Separations of two more complex mixtures (five neuroleptics and 13 pesticides) demonstrate the application of the spectrometer as an HPLC detector. The examples illustrate the advantages of the spectrometer over the established diode array detector, in terms of additional IM separation of substances not fully separated in the retention-time domain as well as identification of substances based on their characteristic IMs.
The combination of high-performance liquid chromatography and electrospray ionization ion mobility spectrometry facilitates the two-dimensional separation of complex mixtures in the retention and drift time plane. The ion mobility spectrometer presented here was optimized for flow rates customarily used in high-performance liquid chromatography, between 100 and 1500 μL/min. The characterization of the system with respect to such parameters as the peak capacity of each time dimension and of the 2D spectrum was carried out based on a separation of a pesticide mixture containing 24 substances. While the total ion current chromatogram is coarsely resolved, exhibiting coelutions for a number of compounds, all substances can be separately detected in the 2D plane due to the orthogonality of the separations in the retention and drift dimensions. Another major advantage of the ion mobility detector is the identification of substances based on their characteristic mobilities. Electrospray ionization allows the detection of substances lacking a chromophore; as an example, the separation of a mixture of 18 amino acids is presented. Software built upon the free mass spectrometry package OpenMS was developed for processing the extensive 2D data. The different processing steps are implemented as separate modules which can be arranged in a graphic workflow, facilitating automated processing of data.
The increasing development of antibiotic resistance in bacteria has been a major problem for years, both in human and veterinary medicine. Prophylactic measures, such as the use of vaccines, are of great importance in reducing the use of antibiotics in livestock. These vaccines are mainly produced based on formaldehyde inactivation. However, the latter damages the recognition elements of the bacterial proteins and thus could reduce the immune response in the animal. An alternative inactivation method developed in this work is based on gentle photodynamic inactivation using carbon nanodots (CNDs) at excitation wavelengths λex > 290 nm. The photodynamic inactivation was characterized on the nonvirulent laboratory strain Escherichia coli K12 using synthesized CNDs. For a gentle inactivation, the CNDs must be absorbed into the cytoplasm of the E. coli cell. Thus, the inactivation through photoinduced formation of reactive oxygen species only takes place inside the bacterium, which means that the outer membrane is neither damaged nor altered. The loading of the CNDs into E. coli was examined using fluorescence microscopy. Complete loading of the bacterial cells could be achieved in less than 10 min. These studies revealed a reversible uptake process allowing the recovery and reuse of the CNDs after irradiation and before the administration of the vaccine. The success of photodynamic inactivation was verified by viability assays on agar. In a homemade flow photoreactor, the fastest successful irradiation of the bacteria could be carried out in 34 s. Therefore, the photodynamic inactivation based on CNDs is very effective. The membrane integrity of the bacteria after irradiation was verified by slide agglutination and atomic force microscopy. The method developed for the laboratory strain E. coli K12 could then be successfully applied to the important avian pathogens Bordetella avium and Ornithobacterium rhinotracheale to aid the development of novel vaccines.
The knowledge of the largest expected earthquake magnitude in a region is one of the key issues in probabilistic seismic hazard calculations and the estimation of worst-case scenarios. Earthquake catalogues are the most informative source for the inference of earthquake magnitudes. We analysed the earthquake catalogue for Central Asia with respect to the largest expected magnitude M_T in a pre-defined time horizon T_f, using a recently developed statistical methodology extended by the explicit probabilistic consideration of magnitude errors. To this end, we assumed broad error distributions for historical events, whereas the magnitudes of recently recorded instrumental earthquakes have smaller errors. The results indicate high probabilities for the occurrence of large events (M ≥ 8), even in short time intervals of a few decades. The expected magnitudes relative to the assumed maximum possible magnitude are generally higher for intermediate-depth earthquakes (51-300 km) than for shallow events (0-50 km). For long future time horizons, for example a few hundred years, earthquakes with M ≥ 8.5 have to be taken into account, although, apart from the 1889 Chilik earthquake, it is probable that no such event occurred during the observation period of the catalogue.
Earthquake catalogs are probably the most informative data source about spatiotemporal seismicity evolution. The catalog quality in one of the most active seismogenic zones in the world, Japan, is excellent, although changes in quality arising, for example, from an evolving network are clearly present. Here, we seek the best estimate for the largest expected earthquake in a given future time interval from a combination of historic and instrumental earthquake catalogs. We extend the technique introduced by Zoller et al. (2013) to estimate the maximum magnitude in a time window of length T_f for earthquake catalogs with varying levels of completeness. In particular, we consider the case in which two types of catalogs are available: a historic catalog and an instrumental catalog. This leads to competing interests with respect to the estimation of the two parameters of the Gutenberg-Richter law, the b-value and the event rate lambda above a given lower-magnitude threshold (the a-value). The b-value is estimated most precisely from the frequently occurring small earthquakes; however, the tendency of small events to cluster in aftershocks, swarms, etc. violates the assumption of a Poisson process that is used for the estimation of lambda. We suggest resolving this conflict by estimating b solely from instrumental seismicity and using large-magnitude events from historic catalogs for the earthquake rate estimation. Applying the method to Japan, there is a probability of about 20% that the maximum expected magnitude during any future time interval of length T_f = 30 years is m ≥ 9.0. Studies of different subregions in Japan indicate high probabilities for M 8 earthquakes along the Tohoku arc and relatively low probabilities in the Tokai, Tonankai, and Nankai region. Finally, for scenarios related to long time horizons and high confidence levels, the maximum expected magnitude will be around 10.
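The two-catalog strategy described above (b-value from instrumental seismicity, event rate from historic large events) can be sketched as follows. The Aki maximum-likelihood b-value estimator with Utsu's binning correction is a standard tool; the catalog values, completeness magnitude, and historic event counts below are invented for illustration only.

```python
import math

def aki_b_value(magnitudes, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's correction
    for magnitude binning width dm; m_c is the completeness magnitude."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2))

# Hypothetical instrumental catalog, complete above m_c = 3.0
catalog = [3.0, 3.1, 3.0, 3.4, 3.2, 3.7, 3.0, 4.1, 3.3, 3.5]
b = aki_b_value(catalog, m_c=3.0)

# Event rate of large (m >= 6) earthquakes taken from a hypothetical
# historic catalog: 12 events in 400 years of record.
rate_m6 = 12 / 400.0
print(round(b, 3), rate_m6)
```

The design choice mirrors the abstract: small instrumental events pin down b precisely, while the (approximately Poissonian) large historic events fix the rate without being biased by aftershock clustering.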
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and to calculate precise confidence intervals based on an imposed confidence level α. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Lambda and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Lambda and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity, but high-risk, Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
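Under the Gutenberg-Richter/Poisson assumptions stated in this abstract, the probability that no earthquake of magnitude m or larger occurs within a future window T_f has a simple closed form. The sketch below uses fixed point values for the rate and b-value, whereas the paper treats them as random variables with noninformative priors; all parameter values are illustrative.

```python
import math

def prob_max_at_most(m, m0, rate_m0, b, t_f):
    """P(no earthquake with magnitude >= m within t_f years), assuming a
    stationary Poisson process with rate rate_m0 above magnitude m0 and
    Gutenberg-Richter magnitudes: rate(>= m) = rate_m0 * 10**(-b*(m - m0))."""
    rate_m = rate_m0 * 10 ** (-b * (m - m0))
    return math.exp(-rate_m * t_f)

# Hypothetical values: 0.5 events/yr above m0 = 5, b = 1, 30-year horizon
p = prob_max_at_most(7.0, m0=5.0, rate_m0=0.5, b=1.0, t_f=30.0)
print(round(1 - p, 3))  # probability of at least one m >= 7 event
```

This closed form is the non-Bayesian core of the method; the paper's confidence intervals come from additionally integrating over the posterior of Lambda and b.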
The injection of fluids is a well-known cause of triggered earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies well within the expectation from past seismicity, without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment, including estimation of the maximum earthquake magnitude, is considered necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
In the present study, we summarize and evaluate the endeavors of recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degrees of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data covering a long time interval, including several earthquakes with magnitudes close to m_max, are available. Even if detailed earthquake information from several centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest ones, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M 6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns; consequently, the observed anomalies in animal activity do not have any forecasting power.
We present a Bayesian method that allows continuous updating of the aperiodicity of the recurrence time distribution of large earthquakes based on a catalog with magnitudes above a completeness threshold. The approach uses a recently proposed renewal model for seismicity and allows the inclusion of magnitude uncertainties in a straightforward manner. Errors arising from grouped (binned) magnitudes and random magnitude errors are studied and discussed. The results indicate that a stable and realistic value of the aperiodicity can be predicted at an early state of seismicity evolution, even though only a small number of large earthquakes has occurred to date. Furthermore, we demonstrate that magnitude uncertainties can drastically influence the results and therefore cannot be neglected. We show how to correct for the bias caused by magnitude errors. For the Parkfield region we find that the aperiodicity, or coefficient of variation, is clearly higher than in studies that are based solely on the large earthquakes.
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded in the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address, with the long simulated seismicity, questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
Kijko et al. (2016) present various methods to estimate parameters that are relevant for probabilistic seismic-hazard assessment. One of these parameters, although not the most influential, is the maximum possible earthquake magnitude m_max. I show that the proposed estimation of m_max is based on an erroneous equation related to a misuse of the estimator in Cooke (1979) and leads to unstable results. So far, reported finite estimates of m_max arise from data selection, because the estimator of Kijko et al. (2016) diverges with finite probability. This finding is independent of the assumed distribution of earthquake magnitudes. For the specific choice of the doubly truncated Gutenberg-Richter distribution, I illustrate the problems by deriving explicit equations. Finally, I conclude that point estimators are generally not a suitable approach to constrain m_max.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Synthetic data allow this bias to be quantified in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_max,observed restricts the class of potential models a priori to those with M_max = M_max,observed + ΔM, with an increment ΔM ≈ 0.5-1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment." The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that, under certain assumptions, the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be tied to accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates, in a simple way, paleoearthquake sequences and instrumental data. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
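The Brownian passage time (inverse Gaussian) density referred to in this abstract is parameterized by a mean recurrence interval mu and an aperiodicity (coefficient of variation) alpha. The sketch below implements the standard density and numerically checks its normalization and mean; the parameter values are illustrative, not estimates from the paper.

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian passage time (inverse Gaussian) density with mean
    recurrence interval mu and aperiodicity alpha, for t > 0."""
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu) ** 2 / (2 * alpha**2 * mu * t))

# Illustrative parameters: mean recurrence 150 yr, aperiodicity 0.5
mu, alpha = 150.0, 0.5

# Crude Riemann-sum check that the density integrates to ~1 and has mean ~mu
dt = 0.05
ts = [dt * i for i in range(1, int(3000 / dt))]
total = sum(bpt_pdf(t, mu, alpha) * dt for t in ts)
mean = sum(t * bpt_pdf(t, mu, alpha) * dt for t in ts)
print(round(total, 3), round(mean, 1))
```

Relating alpha to the b-value, as the study does, would replace the hard-coded aperiodicity with a quantity estimated from instrumental seismicity.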
Convergence of the frequency-magnitude distribution of global earthquakes - maybe in 200 years
(2013)
I study the ability to estimate the tail of the frequency-magnitude distribution of global earthquakes. While power-law scaling for small earthquakes is supported by data, the tail remains speculative. In a recent study, Bell et al. (2013) claim that the frequency-magnitude distribution of global earthquakes converges to a tapered Pareto distribution. I show that this finding results from data-fitting errors, namely from the biased maximum likelihood estimation of the corner magnitude theta in strongly undersampled models. In particular, the estimation of theta depends solely on the few largest events in the catalog. Taking this into account, I compare various state-of-the-art models for the global frequency-magnitude distribution. After discarding undersampled models, the remaining ones, including the unbounded Gutenberg-Richter distribution, all perform equally well and are therefore indistinguishable. Convergence to a specific distribution, if it ever takes place, requires at least about 200 years of homogeneous recording of global seismicity.
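The point that the corner-parameter estimate rests on the few largest events can be illustrated by sampling a tapered Pareto distribution (here in a generic seismic-moment-like variable, not in magnitude). The sampling trick uses the factorization of the survival function S(x) = (a/x)^beta · exp((a − x)/theta) into a Pareto part and a shifted-exponential part; all parameter values are illustrative.

```python
import math
import random

def tapered_pareto_sample(a, beta, theta, rng):
    """Sample a tapered Pareto variable with lower cutoff a, power-law
    index beta, and corner (taper) scale theta. Because the survival
    function S(x) = (a/x)**beta * exp((a - x)/theta) is the product of a
    Pareto survival and a shifted-exponential survival, the minimum of
    one draw from each distribution has exactly this survival function."""
    u1 = 1.0 - rng.random()  # in (0, 1], safe for the power
    u2 = 1.0 - rng.random()  # in (0, 1], safe for the log
    pareto = a * u1 ** (-1.0 / beta)
    shifted_exp = a - theta * math.log(u2)
    return min(pareto, shifted_exp)

rng = random.Random(42)
a, beta, theta = 1.0, 0.67, 1e3  # illustrative parameters
xs = [tapered_pareto_sample(a, beta, theta, rng) for _ in range(100_000)]

# Only a small fraction of events reaches the corner region x >= theta,
# which is why maximum-likelihood estimates of theta hinge on the few
# largest events in a catalog.
n_corner = sum(x >= theta for x in xs)
print(n_corner, len(xs))
```

Even with 100,000 synthetic events, only a fraction of a percent probes the taper, illustrating why the corner parameter is so poorly constrained by undersampled catalogs.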
The paper studies catalytic super-Brownian motion on the real line, where the branching rate is controlled by a catalyst. D. A. Dawson, K. Fleischmann and S. Roelly showed, for a broad class of catalysts, that, as for constant branching, the processes are absolutely continuous measures. This paper considers a class of catalysts, called moderate, which must satisfy a uniform boundedness condition and a condition controlling the degree of singularity---essentially that the mass of catalyst in small balls should (uniformly) be of order r^a, where a>0. The main result of this paper shows that for this class of catalysts there is a continuous density field for the process. Moreover the density is the unique solution (in law) of an appropriate SPDE.
The author considers the heat equation in dimension one with singular drift and inhomogeneous space-time white noise. In particular, the quadratic variation measure of the white noise is not required to be absolutely continuous w.r.t. the Lebesgue measure, neither in space nor in time. Under some assumptions the author gives statements on strong and weak existence as well as strong and weak uniqueness of continuous solutions.
Hysteresis in the pinning-depinning transitions of spiral waves rotating around a hole in a circular-shaped two-dimensional excitable medium is studied both by use of the continuation software AUTO and by direct numerical integration of the reaction-diffusion equations for the FitzHugh-Nagumo model. In order to clarify the role of different factors in this phenomenon, a kinematical description is applied. It is found that the hysteresis phenomenon computed for the reaction-diffusion model can be reproduced qualitatively only when a nonlinear eikonal equation (i.e. velocity-curvature relationship) is assumed. However, to obtain quantitative agreement, the dispersion relation has to be taken into account.
Sub-seasonal thaw slump mass wasting is not consistently energy limited at the landscape scale
(2018)
Predicting future thaw slump activity requires a sound understanding of the atmospheric drivers and geomorphic controls on mass wasting across a range of timescales. On sub-seasonal timescales, sparse measurements indicate that mass wasting at active slumps is often limited by the energy available for melting ground ice, but other factors such as rainfall or the formation of an insulating veneer may also be relevant. To study the sub-seasonal drivers, we derive topographic changes from single-pass radar interferometric data acquired by the TanDEM-X satellites. The estimated elevation changes at 12 m resolution complement the commonly observed planimetric retreat rates by providing information on volume losses. Their high vertical precision (around 30 cm), frequent observations (11 days), and large coverage (5000 km²) allow us to track mass wasting as drivers such as the available energy change during the summer of 2015 in two study regions. We find that thaw slumps in the Tuktoyaktuk coastlands, Canada, are not energy limited in June, as they undergo limited mass wasting (height loss of around 0 cm per day) despite the ample available energy, suggesting the widespread presence of an early-season insulating snow or debris veneer. Later in summer, height losses generally increase (to around 3 cm per day), but they do so in distinct ways. For many slumps, mass wasting tracks the available energy, a temporal pattern that is also observed at coastal yedoma cliffs on the Bykovsky Peninsula, Russia. However, the other two common temporal trajectories are asynchronous with the available energy, as they track strong precipitation events or show a sudden speed-up in late August, respectively. The observed temporal patterns are poorly related to slump characteristics such as headwall height. The contrasting temporal behaviour of nearby thaw slumps highlights the importance of complex, local, and temporally varying controls on mass wasting.
Necrotrophic as well as saprophytic small-spored Alternaria (A.) species are annually responsible for major losses of agricultural products, such as cereal crops, and are associated with the contamination of food and feedstuff with potentially health-endangering Alternaria toxins. Knowledge of the metabolic capabilities of different species-groups to form mycotoxins is important for a reliable risk assessment. A total of 93 Alternaria strains belonging to the four species-groups A. tenuissima, A. arborescens, A. alternata, and A. infectoria were isolated from winter wheat kernels harvested from fields in Germany and Russia and incubated under identical conditions. Chemical analysis by means of an HPLC-MS/MS multi-Alternaria-toxin method showed that 95% of all strains were able to form at least one of the 17 targeted non-host-specific Alternaria toxins. Simultaneous production of up to 15 (modified) Alternaria toxins by members of the A. tenuissima, A. arborescens, and A. alternata species-groups, and of up to seven toxins by A. infectoria strains, was demonstrated. Overall, tenuazonic acid was the most extensively formed mycotoxin, followed by alternariol and alternariol monomethyl ether, whereas altertoxin I was the most frequently detected toxin. Sulfoconjugated modifications of alternariol, alternariol monomethyl ether, altenuisol, and altenuene were frequently determined. Unknown perylene quinone derivatives were additionally detected. Strains of the species-group A. infectoria could be distinguished from strains of the other three species-groups by significantly lower toxin levels and the specific production of infectopyrone. Apart from infectopyrone, alterperylenol was also frequently produced, by 95% of the A. infectoria strains. Neither the concentration nor the composition of the targeted Alternaria toxins allowed a differentiation between the species-groups A. alternata, A. tenuissima, and A. arborescens.
Alternaria (A.) is a genus of widespread fungi capable of producing numerous, possibly health-endangering Alternaria toxins (ATs), which are usually not the focus of attention. The formation of ATs depends on the species and on complex interactions of various environmental factors and is not fully understood. In this study, the influence of temperature (7 °C, 25 °C), substrate (rice, wheat kernels), and incubation time (4, 7, and 14 days) on the production of thirteen ATs and three sulfoconjugated ATs by three different Alternaria isolates from the species-groups A. tenuissima and A. infectoria was determined. High-performance liquid chromatography coupled with tandem mass spectrometry was used for quantification. Under nearly all conditions, tenuazonic acid was the most extensively produced toxin. At 25 °C and with increasing incubation time, all toxins were formed in high amounts by the two A. tenuissima strains on both substrates, with comparable mycotoxin profiles. However, for some of the toxins, stagnation or a decrease in production was observed from day 7 to day 14. In contrast to the A. tenuissima strains, the A. infectoria strain produced only low amounts of ATs, but high concentrations of stemphyltoxin III. The results provide an essential insight into quantitative in vitro AT formation under different environmental conditions, potentially transferable to different field and storage conditions.
We recently demonstrated that the sympathetic nervous system can be voluntarily activated following a training program consisting of cold exposure, breathing exercises, and meditation. This resulted in profound attenuation of the systemic inflammatory response elicited by lipopolysaccharide (LPS) administration. Herein, we assessed whether this training program affects the plasma metabolome and if these changes are linked to the immunomodulatory effects observed. A total of 224 metabolites were identified in plasma obtained from 24 healthy male volunteers at six timepoints, of which 98 were significantly altered following LPS administration. Effects of the training program were most prominent shortly after initiation of the acquired breathing exercises but prior to LPS administration, and point towards increased activation of the Cori cycle. Elevated concentrations of lactate and pyruvate in trained individuals correlated with enhanced levels of anti-inflammatory interleukin (IL)-10. In vitro validation experiments revealed that co-incubation with lactate and pyruvate enhances IL-10 production and attenuates the release of pro-inflammatory IL-1 beta and IL-6 by LPS-stimulated leukocytes. Our results demonstrate that practicing the breathing exercises acquired during the training program results in increased activity of the Cori cycle. Furthermore, this work uncovers an important role of lactate and pyruvate in the anti-inflammatory phenotype observed in trained subjects.
During hopping, an early burst can be observed in the EMG from the soleus muscle starting about 45 ms after touch-down. It may be speculated that this early EMG burst is a stretch reflex response superimposed on activity of supra-spinal origin. We hypothesised that if a stretch reflex indeed contributes to the early EMG burst, then advancing or delaying the touch-down without the subject's knowledge should similarly advance or delay the burst. This was indeed the case: when touch-down was advanced or delayed by shifting the height of a programmable platform up or down between two hops, the early EMG burst shifted correspondingly. Our second hypothesis was that the motor cortex contributes to the first EMG burst during hopping. If so, inhibition of the motor cortex would reduce the magnitude of the burst. By applying a low-intensity magnetic stimulus it was possible to inhibit the motor cortex, and this resulted in a suppression of the early EMG burst. These results suggest that sensory feedback and descending drive from the motor cortex are integrated to drive the motor neuron pool during the early EMG burst in hopping. Thus, simple reflexes work in concert with higher-order structures to produce this repetitive movement.
Unlike for other retroviruses, only a few host cell factors that aid the replication of foamy viruses (FVs) via interaction with viral structural components are known. Using a yeast two-hybrid (Y2H) screen with prototype FV (PFV) Gag protein as bait, we identified human polo-like kinase 2 (hPLK2), a member of the cell cycle regulatory kinases, as a new interactor of PFV capsids. Further Y2H studies confirmed interaction of PFV Gag with several PLKs of both human and rat origin. A consensus Ser-Thr/Ser-Pro (S-T/S-P) motif in Gag, which is conserved among primate FVs and phosphorylated in PFV virions, was essential for recognition by PLKs. In the case of rat PLK2, functional kinase and polo-box domains were required for interaction with PFV Gag. Fluorescently tagged PFV Gag, through its chromatin-tethering function, selectively relocalized ectopically expressed eGFP-tagged PLK proteins to mitotic chromosomes in a Gag STP-motif-dependent manner, confirming a specific and dominant nature of the Gag-PLK interaction in mammalian cells. The functional relevance of the Gag-PLK interaction was examined in the context of replication-competent FVs and single-round PFV vectors. Although STP-motif-mutated viruses displayed wild-type (wt) particle release, RNA packaging, and intra-particle reverse transcription, their replication capacity was decreased 3-fold in single-cycle infections, and up to 20-fold in spreading infections over an extended time period. Strikingly similar defects were observed when cells infected with single-round wt Gag PFV vectors were treated with a pan-PLK inhibitor. Analysis of the entry kinetics of the mutant viruses indicated a post-fusion defect resulting in delayed and reduced integration, which was accompanied by an enhanced preference to integrate into heterochromatin. We conclude that the interaction between PFV Gag and cellular PLK proteins is important for early replication steps of PFV within host cells.
We present a theoretical framework for the analysis of the statistical properties of thermal fluctuations on a lossy transmission line. A quantization scheme for the electrical signals in the transmission line is formulated. We discuss two applications in detail. Noise spectra at finite temperature for voltage and current are shown to deviate significantly from the Johnson-Nyquist limit, and they depend on the position on the transmission line. We analyze the spontaneous emission, at low temperature, of a Rydberg atom and its resonant enhancement due to vacuum fluctuations in a capacitively coupled transmission line. The theory can also be applied to study the performance of microscale and nanoscale devices, including high-resolution sensors and quantum information processors.
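As a reference point for the deviations reported in this abstract, the classical Johnson-Nyquist limit for thermal voltage noise across a resistor can be computed directly. This is the textbook formula, not the transmission-line result of the paper; the resistance, temperature, and bandwidth are illustrative.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def johnson_noise_vrms(resistance, temperature, bandwidth):
    """Classical Johnson-Nyquist RMS voltage noise, V = sqrt(4 k_B T R df).
    The lossy-transmission-line spectra in the paper deviate from this limit
    and, unlike this formula, depend on position along the line."""
    return math.sqrt(4 * K_B * temperature * resistance * bandwidth)

# A 1 kOhm resistor at room temperature in a 1 Hz bandwidth:
v = johnson_noise_vrms(1e3, 300.0, 1.0)
print(f"{v * 1e9:.2f} nV/sqrt(Hz)")  # about 4 nV per root-hertz
```

The ~4 nV/√Hz figure for 1 kΩ at 300 K is the standard benchmark against which position-dependent transmission-line spectra would be compared.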
We present a momentum transfer mechanism mediated by electromagnetic fields that originates in a system of two nearby molecules: one excited (donor D*) and the other in the ground state (acceptor A). An intermolecular force related to fluorescence resonance energy transfer (Förster transfer, FRET) arises in the unstable D*-A molecular system, which differs from the equilibrium van der Waals interaction. Due to its finite lifetime, a mechanical impulse is imparted to the relative motion of the system. We analyze the FRET impulse when the molecules are embedded in free space and find that its magnitude can be much greater than the recoil momentum of a single photon, becoming comparable to the thermal momentum (Maxwell-Boltzmann distribution) at room temperature. In addition, we propose that this FRET impulse can be exploited in the generation of acoustic waves inside a film containing layers of donor and acceptor molecules when a picosecond laser pulse excites the donors. This acoustic transient is distinguishable from that produced by thermal stress due to laser absorption and may therefore play a role in photoacoustic spectroscopy. The effect can be seen as exciting a vibrating system, like a string or organ pipe, with light; it may be used as an opto-mechanical transducer.
Home range size and resource use of breeding and non-breeding white storks along a land use gradient
(2018)
Biotelemetry is increasingly used to study animal movement at high spatial and temporal resolution and to guide conservation and resource management. Yet, limited sample sizes and variation in space and habitat use across regions and life stages may compromise the robustness of behavioral analyses and subsequent conservation plans. Here, we assessed variation in (i) home range sizes, (ii) home range selection, and (iii) fine-scale resource selection of white storks across breeding status and regions, and tested model transferability. Three study areas were chosen within the Central German breeding grounds, ranging from agricultural to fluvial and marshland. We monitored GPS locations of 62 adult white storks equipped with solar-charged GPS/3D-acceleration (ACC) transmitters in 2013-2014. Home range sizes were estimated using minimum convex polygons. Generalized linear mixed models were used to assess home range selection and fine-scale resource selection by relating the home ranges and foraging sites to Corine habitat variables and the normalized difference vegetation index in a presence/pseudo-absence design. We found strong variation in home range sizes across breeding stages, with significantly larger home ranges in non-breeding compared to breeding white storks, but no variation between regions. Home range selection models had high explanatory power and predicted the overall density of Central German white stork breeding pairs well. They also showed good transferability across regions and breeding status, although variable importance varied considerably. Fine-scale resource selection models showed low explanatory power. Resource preferences differed both across breeding status and across regions, and model transferability was poor. Our results indicate that habitat selection of wild animals may vary considerably within and between populations and is highly scale dependent.
Accordingly, home range scale analyses show higher robustness, whereas fine-scale resource selection is not easily predictable and not transferable across life stages and regions. Such variation may compromise management decisions when based on data of limited sample size or limited regional coverage. We thus recommend home range scale analyses and sampling designs that cover diverse regional landscapes and ensure robust estimates of habitat suitability to conserve wild animal populations.
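The minimum convex polygon estimation mentioned above can be sketched in a few lines; this is an illustrative reimplementation (not the study's code), using simulated GPS fixes and the common 95% variant that drops the outermost 5% of locations:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(locations, percent=95):
    """Minimum convex polygon (MCP) home range area.

    locations: (n, 2) array of projected x/y coordinates in metres.
    percent: retain the given percentage of fixes closest to the
    centroid before building the hull (the usual 95% MCP).
    """
    pts = np.asarray(locations, dtype=float)
    centroid = pts.mean(axis=0)
    dist = np.linalg.norm(pts - centroid, axis=1)
    keep = dist <= np.percentile(dist, percent)
    hull = ConvexHull(pts[keep])
    return hull.volume  # for 2-D input, .volume is the enclosed area

# toy usage: 500 simulated fixes scattered around a nest site
rng = np.random.default_rng(0)
fixes = rng.normal(loc=[0.0, 0.0], scale=1000.0, size=(500, 2))
area_km2 = mcp_area(fixes) / 1e6
```

Note that for 2-D input `ConvexHull.volume` returns the enclosed area, while `.area` would return the perimeter.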
Models are useful tools for understanding and predicting ecological patterns and processes. Under ongoing climate and biodiversity change, they can greatly facilitate decision-making in conservation and restoration and help design adequate management strategies for an uncertain future. Here, we review the use of spatially explicit models for decision support and identify key gaps in current modelling in conservation and restoration. Of 650 reviewed publications, 217 had a clear management application and were included in our quantitative analyses. Overall, modelling studies were biased towards static models (79%), towards the species and population level (80%) and towards conservation (rather than restoration) applications (71%). Correlative niche models were the most widely used model type. Dynamic models as well as the gene-to-individual level and the community-to-ecosystem level were underrepresented, and explicit cost optimisation approaches were only used in 10% of the studies. We present a new model typology for selecting models for animal conservation and restoration, characterising model types according to organisational levels, biological processes of interest and desired management applications. This typology will help to more closely link models to management goals. Additionally, future efforts need to overcome important challenges related to data integration, model integration and decision-making. We conclude with five key recommendations, suggesting that wider usage of spatially explicit models for decision support can be achieved by 1) developing a toolbox with multiple, easier-to-use methods, 2) improving calibration and validation of dynamic modelling approaches and 3) developing best-practice guidelines for applying these models. Further, more robust decision-making can be achieved by 4) combining multiple modelling approaches to assess uncertainty, and 5) placing models at the core of adaptive management.
These efforts must be accompanied by long-term funding for modelling and monitoring, and improved communication between research and practice, to ensure optimal conservation and restoration outcomes.
SDM performance varied for different range dynamics. Prediction accuracies decreased when abrupt range shifts occurred as species were outpaced by the rate of climate change, and increased again when a new equilibrium situation was realised. When ranges contracted, prediction accuracies increased as the absences were predicted well. Far-dispersing species were faster in tracking climate change, and were predicted more accurately by SDMs than short-dispersing species. BRTs mostly outperformed GLMs. The presence of a predator, and the inclusion of its incidence as an environmental predictor, made BRTs and GLMs perform similarly. Results are discussed in light of other studies dealing with effects of ecological traits and processes on SDM performance. Perspectives are given on further advancements of SDMs and for possible interfaces with more mechanistic approaches in order to improve predictions under environmental change.
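The GLM-versus-BRT comparison above can be mimicked on simulated data; the sketch below is an illustration (not the study's virtual-species setup), with scikit-learn's `GradientBoostingClassifier` standing in for BRT and logistic regression for the GLM, scored by AUC on held-out data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))         # three environmental predictors
logit = 1.5 * X[:, 0] - X[:, 1] ** 2   # nonlinear true response
y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

aucs = {}
for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                    ("BRT", GradientBoostingClassifier(random_state=0))]:
    model.fit(Xtr, ytr)
    # AUC on held-out data, as commonly used to score SDMs
    aucs[name] = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
```

The tree ensemble can pick up the quadratic term that the linear predictor of the GLM misses, which is one reason BRTs often score higher in such comparisons.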
Empirical species distribution models (SDMs) often constitute the tool of choice for assessing the effects of rapid climate change on species vulnerability. Conclusions regarding extinction risks might be misleading, however, because SDMs do not explicitly incorporate dispersal or other demographic processes. Here, we supplement SDMs with a dynamic population model 1) to predict climate-induced range dynamics for black grouse in Switzerland, 2) to compare direct and indirect measures of extinction risks, and 3) to quantify uncertainty in predictions as well as the sources of that uncertainty. To this end, we linked models of habitat suitability to a spatially explicit, individual-based model. In an extensive sensitivity analysis, we quantified uncertainty in various model outputs introduced by different SDM algorithms, by different climate scenarios and by demographic model parameters. Potentially suitable habitats were predicted to shift uphill and eastwards. By the end of the 21st century, abrupt habitat losses were predicted in the western Prealps for some climate scenarios. In contrast, population size and occupied area were primarily controlled by currently negative population growth and gradually declined from the beginning of the century across all climate scenarios and SDM algorithms. However, predictions of population dynamic features were highly variable across simulations. Results indicate that inferring extinction probabilities simply from the quantity of suitable habitat may underestimate extinction risks because this may ignore important interactions between life history traits and available habitat. Also, in dynamic range predictions, uncertainty in SDM algorithms and climate scenarios can become secondary to uncertainty in dynamic model components. Our study emphasises the need for principled evaluation tools such as sensitivity analysis in order to assess uncertainty and robustness in dynamic range predictions.
A more direct benefit of such robustness analysis is an improved mechanistic understanding of dynamic species responses to climate change.
Data limitations can lead to unrealistic fits of predictive species distribution models (SDMs) and spurious extrapolation to novel environments. Here, we want to draw attention to novel combinations of environmental predictors that are within the sampled range of individual predictors but are nevertheless outside the sample space. These tend to be overlooked when visualizing model behaviour. They may be a cause of differing model transferability and environmental change predictions between methods, a problem described in some studies but generally not well understood. We here use a simple simulated data example to illustrate the problem and provide new and complementary visualization techniques to explore model behaviour and predictions to novel environments. We then apply these in a more complex real-world example. Our results underscore the necessity of scrutinizing model fits, ecological theory and environmental novelty.
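The core issue can be made concrete with two correlated predictors: a candidate point can pass a per-predictor range check yet fall outside the convex hull of the joint sample space. A minimal sketch with simulated data and hypothetical helper names:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
# training sample: two strongly positively correlated predictors
x1 = rng.uniform(0, 1, 300)
X = np.column_stack([x1, x1 + rng.normal(0, 0.05, 300)])

def univariate_ok(p, X):
    """Is point p inside the sampled range of EACH predictor?"""
    return np.all((p >= X.min(axis=0)) & (p <= X.max(axis=0)))

def in_sample_space(p, hull):
    """Is point p inside the convex hull of the joint sample?"""
    return hull.find_simplex(p) >= 0

hull = Delaunay(X)
novel = np.array([0.9, 0.1])  # low x2 at high x1: a combination never sampled
# passes the univariate check, yet lies outside the sample space
print(univariate_ok(novel, X), in_sample_space(novel, hull))
```

Predictions at such points are extrapolations even though no single predictor is out of range, which is exactly why they tend to be overlooked in standard diagnostics.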
Density regulation influences population dynamics through its effects on demographic rates and consequently constitutes a key mechanism explaining the response of organisms to environmental changes. Yet, it is difficult to establish the exact form of density dependence from empirical data. Here, we developed an individual-based model to explore how resource limitation and behavioural processes determine the spatial structure of white stork Ciconia ciconia populations and regulate reproductive rates. We found that the form of density dependence differed considerably between landscapes with the same overall resource availability and between home range selection strategies, highlighting the importance of fine-scale resource distribution in interaction with behaviour. In accordance with theories of density dependence, breeding output generally decreased with density but this effect was highly variable and strongly affected by optimal foraging strategy, resource detection probability and colonial behaviour. Moreover, our results uncovered an overlooked consequence of density dependence by showing that high early nestling mortality in storks, assumed to be the outcome of harsh weather, may actually result from density dependent effects on food provision. Our findings emphasize that accounting for interactive effects of individual behaviour and local environmental factors is crucial for understanding density-dependent processes within spatially structured populations. Enhanced understanding of the ways animal populations are regulated in general, and how habitat conditions and behaviour may dictate spatial population structure and demographic rates is critically needed for predicting the dynamics of populations, communities and ecosystems under changing environmental conditions.
Ecologists carry a well-stocked toolbox with a great variety of sampling methods, statistical analyses and modelling tools, and new methods are constantly appearing. Evaluation and optimisation of these methods is crucial to guide methodological choices. Simulating error-free data or taking high-quality data to qualify methods is common practice. Here, we emphasise the methodology of the 'virtual ecologist' (VE) approach where simulated data and observer models are used to mimic real species and how they are 'virtually' observed. This virtual data is then subjected to statistical analyses and modelling, and the results are evaluated against the 'true' simulated data. The VE approach is an intuitive and powerful evaluation framework that allows a quality assessment of sampling protocols, analyses and modelling tools. It works under controlled conditions as well as under consideration of confounding factors such as animal movement and biased observer behaviour. In this review, we promote the approach as a rigorous research tool, and demonstrate its capabilities and practical relevance. We explore past uses of VE in different ecological research fields, where it mainly has been used to test and improve sampling regimes as well as for testing and comparing models, for example species distribution models. We discuss its benefits as well as potential limitations, and provide some practical considerations for designing VE studies. Finally, research fields are identified for which the approach could be useful in the future. We conclude that VE could foster the integration of theoretical and empirical work and stimulate work that goes far beyond sampling methods, leading to new questions, theories, and better mechanistic understanding of ecological systems.
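A stripped-down VE loop, under simple made-up assumptions (one environmental gradient, a 70% detection probability), might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# 1) virtual world: occupancy driven by one environmental gradient
env = rng.uniform(-2, 2, 2000)
p_true = 1 / (1 + np.exp(-2 * env))
occupied = rng.random(2000) < p_true

# 2) virtual observer: detects an occupied site only 70% of the time
detected = occupied & (rng.random(2000) < 0.7)

# 3) analysis step: a naive model fitted to the observed data
model = LogisticRegression().fit(env.reshape(-1, 1), detected)

# 4) evaluation against the known truth: imperfect detection biases
#    the predicted occupancy probabilities downwards on average
p_hat = model.predict_proba(env.reshape(-1, 1))[:, 1]
bias = (p_hat - p_true).mean()
```

The negative mean bias recovered in step 4 is exactly the kind of observer-induced artefact the VE approach is designed to expose before it contaminates a real survey.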
The ability of some plant species to dominate communities in new biogeographical ranges has been attributed to an innate higher competitive ability and release from co-evolved specialist enemies. Specifically, invasive success in the new range might be explained by release from biotic negative soil-feedbacks, which control potentially dominant species in their native range. To test this hypothesis, we grew individuals from sixteen phylogenetically paired European grassland species that became either invasive or naturalized in new ranges, in either sterilized soil or in sterilized soil with unsterilized soil inoculum from their native home range. We found that although the native members of invasive species generally performed better than those of naturalized species, these native members of invasive species also responded more negatively to native soil inoculum than did the native members of naturalized species. This supports our hypothesis that potentially invasive species in their native range are held in check by negative soil-feedbacks. However, contrary to expectation, negative soil-feedbacks in potentially invasive species were not much increased by interspecific competition. There was no significant variation among families between invasive and naturalized species regarding their feedback response (negative vs. neutral). Therefore, we conclude that the observed negative soil feedbacks in potentially invasive species may be quite widespread in European families of typical grassland species.
Bacterial molybdoenzymes are key enzymes involved in the global sulphur, nitrogen and carbon cycles. These enzymes require the insertion of the molybdenum cofactor (Moco) into their active sites and are able to catalyse a large range of redox reactions. Escherichia coli harbours nineteen different molybdoenzymes that require a tight regulation of their synthesis according to substrate availability, oxygen availability and the cellular concentration of molybdenum and iron. The synthesis and assembly of active molybdoenzymes are regulated at the level of transcription of the structural genes and of translation, in addition to the genes involved in Moco biosynthesis. In this review we focus on what is known about the molybdenum- and iron-dependent regulation of molybdoenzyme and Moco biosynthesis genes in the model organism E. coli. The actions of global transcriptional regulators such as FNR, NarXL/QP, Fur and ArcA, and their roles in the expression of these genes, are described in detail. The gene regulation in E. coli is compared to that in two other well-studied model organisms, Rhodobacter capsulatus and Shewanella oneidensis.
Molybdenum cofactor (Moco) biosynthesis is a complex process that involves the coordinated function of several proteins. In recent years it has become obvious that the availability of iron plays an important role in the biosynthesis of Moco. First, the MoaA protein binds two [4Fe-4S] clusters per monomer. Second, the expression of the moaABCDE and moeAB operons is regulated by FNR, which senses the availability of oxygen via a functional [4Fe-4S] cluster. Finally, the conversion of cyclic pyranopterin monophosphate to molybdopterin requires the availability of the L-cysteine desulfurase IscS, which is a shared protein with a main role in the assembly of Fe-S clusters. In this report, we investigated the transcriptional regulation of the moaABCDE operon by focusing on its dependence on cellular iron availability. While the abundance of selected molybdoenzymes is largely decreased under iron-limiting conditions, our data show that the regulation of the moaABCDE operon at the level of transcription is only marginally influenced by the availability of iron. Nevertheless, intracellular levels of Moco were decreased under iron-limiting conditions, likely based on an inactive MoaA protein in addition to lower levels of the L-cysteine desulfurase IscS, which simultaneously reduces the sulfur availability for Moco production. IMPORTANCE FNR is a very important transcriptional factor that represents the master switch for the expression of target genes in response to anaerobiosis. Among the FNR-regulated operons in Escherichia coli is the moaABCDE operon, involved in Moco biosynthesis. Molybdoenzymes have essential roles in eukaryotic and prokaryotic organisms. In bacteria, molybdoenzymes are crucial for anaerobic respiration using alternative electron acceptors. This work investigates the connection of iron availability to the biosynthesis of Moco and the production of active molybdoenzymes.
The c-Fos/c-Jun complex forms the activator protein 1 transcription factor, a therapeutic target in the treatment of cancer. Various synthetic peptides have been designed to try to selectively disrupt the interaction between c-Fos and c-Jun at its leucine zipper domain. To evaluate the binding affinity between these synthetic peptides and c-Fos, polarizable and nonpolarizable molecular dynamics (MD) simulations were conducted, and the resulting conformations were analyzed using the molecular mechanics generalized Born surface area (MM/GBSA) method to compute free energies of binding. In contrast to empirical and semiempirical approaches, the estimation of free energies of binding using a combination of MD simulations and the MM/GBSA approach takes into account dynamical properties such as conformational changes, as well as solvation effects and hydrophobic and hydrophilic interactions. The predicted binding affinities of the series of c-Jun-based peptides targeting the c-Fos peptide show good correlation with experimental melting temperatures. This provides the basis for the rational design of peptides based on internal, van der Waals, and electrostatic interactions.
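In the common single-trajectory scheme, the MM/GBSA estimate described above reduces to averaging per-frame free energies G = E_MM + G_GB + G_SA over MD snapshots. The sketch below uses made-up energies in place of real MD output and, as many studies do, omits the entropy term:

```python
import numpy as np

def mmgbsa_binding(g_complex, g_receptor, g_ligand):
    """dG_bind = <G_complex> - <G_receptor> - <G_ligand> (kcal/mol),
    with each per-frame G already summing MM, GB and SA terms."""
    return np.mean(g_complex) - np.mean(g_receptor) - np.mean(g_ligand)

# toy per-frame energies (kcal/mol) standing in for real MD output
rng = np.random.default_rng(3)
g_c = rng.normal(-5200.0, 5.0, 100)   # complex
g_r = rng.normal(-4000.0, 5.0, 100)   # receptor (e.g. c-Fos peptide)
g_l = rng.normal(-1180.0, 5.0, 100)   # ligand (e.g. c-Jun-based peptide)

dG = mmgbsa_binding(g_c, g_r, g_l)    # negative => favourable binding
```

For ranking a peptide series, as done here, the neglected entropy contribution is often assumed to be similar across ligands and therefore to cancel in relative comparisons.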
State-of-the-art organic solar cells exhibit power conversion efficiencies of 18% and above. These devices benefit from the suppression of free charge recombination with regard to the Langevin limit of charge encounter in a homogeneous medium. It is recognized that the main cause of suppressed free charge recombination is the reformation and resplitting of charge-transfer (CT) states at the interface between donor and acceptor domains. Here, we use kinetic Monte Carlo simulations to understand the interplay between free charge motion and recombination in an energetically disordered phase-separated donor-acceptor blend. We identify conditions for encounter-dominated and resplitting-dominated recombination. In the former regime, recombination is proportional to mobility for all parameters tested and only slightly reduced with respect to the Langevin limit. In contrast, mobility is not the decisive parameter that determines the nongeminate recombination coefficient, k2, in the latter case, where k2 is a sole function of the morphology, CT and charge-separated (CS) energetics, and CT-state decay properties. Our simulations also show that free charge encounter in the phase-separated disordered blend is determined by the average mobility of all carriers, while CT reformation and resplitting involves mostly states near the transport energy. Therefore, charge encounter is more affected by increased disorder than the resplitting of the CT state. As a consequence, for a given mobility, larger energetic disorder, in combination with a higher hopping rate, is preferred. These findings have implications for the understanding of suppressed recombination in solar cells with nonfullerene acceptors, which are known to exhibit lower energetic disorder than that of fullerenes.
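The Langevin limit referenced above has a closed form; the sketch below computes it and the reduction factor gamma = k2/kL for illustrative parameter values (the mobilities, permittivity and k2 are placeholders, not the study's simulation inputs):

```python
Q = 1.602e-19     # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def langevin_rate(mu_e, mu_h, eps_r):
    """Langevin encounter coefficient kL = q(mu_e + mu_h)/(eps0*eps_r),
    in m^3/s, for electron/hole mobilities in m^2/(V s)."""
    return Q * (mu_e + mu_h) / (EPS0 * eps_r)

kL = langevin_rate(mu_e=1e-7, mu_h=1e-7, eps_r=3.5)
k2 = 1e-18            # assumed measured recombination coefficient, m^3/s
gamma = k2 / kL       # reduction factor << 1 indicates suppressed recombination
```

A gamma well below unity is the signature of the suppressed, resplitting-dominated regime discussed in the abstract.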
Explicit solution of the Lindblad equation for nearly isotropic boundary driven XY spin 1/2 chain
(2010)
An explicit solution for the two-point correlation function in a non-equilibrium steady state of a nearly isotropic, boundary-driven, open XY spin-1/2 chain in the Lindblad formulation is provided. A non-equilibrium quantum phase transition from exponentially decaying correlations to long-range order is discussed analytically. In the regime of long-range order, a new phenomenon of correlation resonances is reported, where the correlation response of the system is unusually high for certain discrete values of the external bulk parameter, e.g. the magnetic field.
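For reference, the Lindblad master equation in its standard form, together with a generic XY-chain Hamiltonian of the kind considered here (nearly isotropic meaning J_x close to J_y); the paper's specific boundary jump operators L_k are not reproduced:

```latex
\frac{d\rho}{dt} = -\mathrm{i}\,[H,\rho]
  + \sum_{k} \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k , \rho \right\} \right),
\qquad
H = \sum_{j=1}^{n-1} \left( J_x\,\sigma_j^{x}\sigma_{j+1}^{x}
  + J_y\,\sigma_j^{y}\sigma_{j+1}^{y} \right)
  + h \sum_{j=1}^{n} \sigma_j^{z} .
```

The boundary driving enters through jump operators acting on the first and last spins only, which is what renders the steady state non-equilibrium.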
Background: Protein kinases constitute a particularly large protein family in Arabidopsis with important functions in cellular signal transduction networks. At the same time Arabidopsis is a model plant with high frequencies of gene duplications. Here, we have conducted a systematic analysis of the Arabidopsis kinase complement, the kinome, with particular focus on gene duplication events. We matched Arabidopsis proteins to a hidden Markov model of eukaryotic kinases, computed a phylogeny of 942 Arabidopsis protein kinase domains, and mapped their origins to gene duplication events.
Results: The phylogeny showed two major clades of receptor kinases and soluble kinases, each of which was divided into functional subclades. Based on this phylogeny, association of yet uncharacterized kinases to families was possible, which extended the functional annotation of unknowns. Classification of gene duplications within these protein kinases revealed that representatives of cytosolic subfamilies showed a tendency to maintain segmentally duplicated genes, while some subfamilies of the receptor kinases were enriched for tandem duplicates. Although functional diversification is observed throughout most subfamilies, some instances of functional conservation among genes transposed from the same ancestor were observed. In general, a significant enrichment of essential genes was found among genes encoding protein kinases.
Conclusions: The inferred phylogeny allowed classification and annotation of yet uncharacterized kinases. The prediction and analysis of syntenic blocks and duplication events within gene families of interest can be used to link functional biology to insights from an evolutionary viewpoint. The approach undertaken here can be applied to any gene family in any organism with an annotated genome.
Marine sedimentary archives are routinely used to reconstruct past environmental changes. In many cases, bioturbation and sedimentary mixing affect the proxy time series and the age-depth relationship. While idealized models of bioturbation exist, they usually assume homogeneous mixing, i.e. that a single sample is representative of the sediment layer it is sampled from.
However, it is largely unknown to what extent this assumption holds for sediments used for paleoclimate reconstructions.
To shed light on
1) the age-depth relationship and its full uncertainty,
2) the magnitude of mixing processes affecting the downcore proxy variations, and
3) the representativity of the discrete sample for the sediment layer, we designed and performed a case study on South China Sea sediment material which was collected using a box corer and which covers the last glacial cycle.
Using the radiocarbon content of foraminiferal tests as a tracer of time, we characterize the spatial age-heterogeneity of sediments in a three-dimensional setup. In total, 118 radiocarbon measurements were performed on defined small- and large-volume bulk samples (approximately 200 specimens each) to investigate the horizontal heterogeneity of the sediment. Additionally, replicated measurements on small numbers of specimens (10 x 5 specimens) were performed to assess the heterogeneity within a sample volume. Visual assessment of X-ray images and a quantitative assessment of the mixing strength show typical mixing from bioturbation corresponding to around 10 cm mixing depth.
Notably, our 3D radiocarbon distribution reveals that the horizontal heterogeneity (up to 1,250 years), contributing to the age uncertainty, is several times larger than the typically assumed radiocarbon-based age-model error (single errors up to 250 years). Furthermore, the assumption of a perfectly bioturbated layer with no mixing underneath is not met.
Our analysis further demonstrates that the age-heterogeneity might be a function of sample size; smaller samples might contain single features from the incomplete mixing and are thus less representative than larger samples.
We provide suggestions for future studies, optimal sampling strategies for quantitative paleoclimate reconstructions and realistic uncertainty in age models, as well as discuss possible implications for the interpretation of paleoclimate records.
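A back-of-envelope consequence of the reported numbers: if the horizontal heterogeneity is treated as an error source independent of the age-model error (an assumption made here for illustration) and the two are added in quadrature, the heterogeneity dominates the combined uncertainty:

```python
import math

model_err = 250.0       # yr: typical single radiocarbon age-model error
heterogeneity = 1250.0  # yr: maximum horizontal age spread reported above

# quadrature combination under the independence assumption
combined = math.sqrt(model_err**2 + heterogeneity**2)
ratio = combined / model_err   # roughly 5: uncertainty underestimated ~5x
```

This is only an order-of-magnitude argument; the study itself characterizes the heterogeneity empirically in 3D rather than assuming independence.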
The cultural critic and writer Max Nordau: between Zionism, Germanness, and Judaism
(2001)
In high-value sweet cherry (Prunus avium), the red coloration, determined by the anthocyanin content, is correlated with the fruit ripeness stage and market value. Non-destructive spectroscopy has been introduced in practice and may be utilized as a tool to assess the fruit pigments in supply chain processes. From the fruit spectrum in the visible (Vis) wavelength range, the pigment contents are analyzed separately at their specific absorbance wavelengths.
A drawback of the method is the need for re-calibration due to varying optical properties of the fruit tissue. In order to correct for the scattering differences, most often the spectral intensity in the visible spectrum is normalized by wavelengths in the near infrared (NIR) range, or pre-processing methods are applied in multivariate calibrations.
In the present study, the influence of the fruit scattering properties on the Vis/NIR fruit spectrum was corrected by the effective pathlength in the fruit tissue, obtained from time-resolved readings of the distribution of time-of-flight (DTOF). Pigment analysis was carried out according to the Lambert-Beer law, considering fruit spectral intensities, effective pathlength, and refractive index. Results were compared to commonly applied linear color and multivariate partial least squares (PLS) regression analyses. The approaches were validated on fruits at different ripeness stages, providing variation in the scattering coefficient and refractive index exceeding the calibration sample set.
In the validation, combining the Vis/NIR spectral intensities with the apparent scattering information from the DTOF readings dramatically reduced the bias and enhanced the coefficients of determination of the non-destructive fruit analysis, compared with applying PLS or the Lambert-Beer law to the Vis/NIR spectra alone. Corrections for the refractive index did not yield improved results.
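The pathlength-corrected Lambert-Beer step described above amounts to dividing the measured absorbance by epsilon times the DTOF-derived effective pathlength; the numbers below are illustrative placeholders, not calibration data from the study:

```python
import math

def pigment_content(I0, I, epsilon, l_eff):
    """Lambert-Beer: A = epsilon * c * l_eff, so c = A / (epsilon * l_eff).

    I0, I: incident and detected intensities (same units).
    epsilon: molar absorptivity, L/(mol*cm).
    l_eff: effective pathlength in the tissue from DTOF, cm.
    """
    absorbance = math.log10(I0 / I)
    return absorbance / (epsilon * l_eff)

# toy numbers: 1% of the light transmitted, assumed absorptivity 2e4
c = pigment_content(I0=1.0, I=0.01, epsilon=2.0e4, l_eff=2.5)  # mol/L
```

The point of the correction is that l_eff varies with the fruit's scattering properties, so using a measured rather than assumed pathlength removes a major source of calibration drift.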
Decoupling of optical properties appears challenging, but vital for gaining better insight into the relationship between light and fruit attributes. In this study, nine solid phantoms capturing the ranges of absorption (μa) and reduced scattering (μs') coefficients in fruit were analysed non-destructively using laser light backscattering imaging (LLBI) at 1060 nm. LLBI data analysis was carried out on the attenuation profile of the diffuse reflectance by means of Farrell's diffusion theory, either calculating μa [cm−1] and μs' [cm−1] in one fitting step or fitting only one optical variable and providing the other from a destructive analysis. The non-destructive approach proved feasible when calculating one unknown coefficient, while the method was unable to determine both μa and μs' non-destructively. Setting μs' according to destructive photon density wave (PDW) spectroscopy and fitting μa resulted in a root mean square error (rmse) of 18.7%, in comparison to fitting μs', which resulted in an rmse of 2.6%, pointing to decreased measuring uncertainty when the highly variable μa was known.
The approach was tested on European pear, utilizing destructive PDW spectroscopy for setting one variable while LLBI was applied to calculate the remaining coefficient. Results indicated that the optical properties of pear obtained from PDW spectroscopy as well as from LLBI changed concurrently, corresponding mainly to the water content. A destructive batch-wise analysis of μs' and an online analysis of μa may be considered in future developments for improved sorting of fruit with high variability of μs'.
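The "fix one coefficient, solve for the other" idea can be illustrated with the diffusion-approximation relation mu_eff = sqrt(3 mu_a (mu_a + mu_s')); this is a deliberate simplification of fitting Farrell's full reflectance profile, with made-up coefficient values:

```python
import math

def mu_a_from_mu_eff(mu_eff, mu_s_prime):
    """Solve 3*mu_a^2 + 3*mu_a*mu_s' - mu_eff^2 = 0 for mu_a (cm^-1),
    i.e. invert mu_eff = sqrt(3*mu_a*(mu_a + mu_s'))."""
    return (-3 * mu_s_prime
            + math.sqrt(9 * mu_s_prime**2 + 12 * mu_eff**2)) / 6

mu_s = 10.0      # cm^-1: assumed known from destructive PDW spectroscopy
mu_a_true = 0.5  # cm^-1: the coefficient we pretend is unknown
mu_eff = math.sqrt(3 * mu_a_true * (mu_a_true + mu_s))

mu_a = mu_a_from_mu_eff(mu_eff, mu_s)  # recovers the assumed 0.5 cm^-1
```

With one coefficient pinned by an independent measurement, the inversion is well posed; trying to recover both from a single attenuation profile is the ill-conditioned case the abstract reports as unfeasible.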
Lombok Island is part of the Lesser Sunda Islands (LSI) region of Indonesia, situated along the Sunda-Banda Arcs transition. It lies between zones characterized by the highest-intensity geomagnetic anomalies of this region, remarkable as one of the eight most important features provided in the 1st edition of the World Digital Magnetic Anomaly Map. The seismicity of this region during the last years has been high, while the geological and tectonic structures of this region are still not known in detail. Some local magnetic surveys were conducted previously during 2004-2005. However, due to the lower accuracy of the equipment used and a limited number of stations, the quality of the previous measurements is questionable for further interpretation. Thus a more detailed study is needed to better characterize the geomagnetic anomaly, spatially and temporally, over this region and to explore in depth the related regional geology, tectonics and seismicity. The intriguing geomagnetic anomalies over this island region, vis-à-vis the socio-cultural situation, lead to a study with the special aim of contributing to the assessment of the potential of natural hazards (earthquakes) as well as of a new natural resource of energy (geothermal potential).
This study is intended to discuss several crucial questions, including:
i. The real values and the general pattern of magnetic anomalies over the island, as well as their relation to the regional one.
ii. Any temporal changes of regional anomalies in recent times.
iii. The relationships between the anomalies and the geology and tectonics of this region, especially new insights that can be gained from the geomagnetic observations.
iv. The relationships between the anomalies and the high seismicity of this region, especially possible links between their variations and earthquake occurrence.
First, all available geomagnetic data of this region and results of the previous measurements are evaluated. The new geomagnetic surveys carried out in 2006 and 2007/2008 are then presented in detail, followed by a general description of the data processing and data quality evaluation. The new results show the general pattern of contiguous negative-positive anomalies, revealing an active, arc-related subduction region. They agree with earlier results obtained by satellite, aeromagnetic, and marine platforms, and provide a much more detailed picture of the strong anomalies on this island. The temporal characteristics of the regional anomalies show a decreasing strength of the dipolar structure, where the field intensities decrease faster than the regional secular variation as defined by the global model (the 10th generation of the IGRF). However, some exceptions (increasing anomalies) have to be noted and further analyzed for several locations.
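In the simplest reduction, the crustal anomaly values discussed here are the measured total field minus the core-field prediction of a reference model such as the IGRF; the station values below are made-up placeholders, not survey data:

```python
def crustal_anomaly(f_measured, f_igrf):
    """Total-field anomaly in nT: observed field minus the IGRF
    main-field value for the same place and epoch."""
    return f_measured - f_igrf

# hypothetical stations: (measured total field, IGRF prediction) in nT
stations = {"A": (44980.0, 44620.0), "B": (44110.0, 44630.0)}
anomalies = {name: crustal_anomaly(*vals) for name, vals in stations.items()}
# adjacent positive/negative values would mirror the contiguous
# negative-positive anomaly pattern described for the island
```

Repeat-station surveys compare such residuals across epochs; a residual changing faster than the IGRF secular variation is the kind of temporal signal the study highlights.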
Thereafter, simultaneous magnetic anomaly and gravity models are generated and interpreted in detail. Three profiles are investigated, providing new insights into the tectonics and geological evolution of Lombok Island. The geological structure of this island can be divided into two main parts with different consecutive ages: an old part (from the late Oligocene to the late Miocene) in the south and a younger one (from the Pliocene to the Holocene) in the north. A new subduction in the back arc region (the Flores Thrust zone) is considered mature and active, showing a tendency of progressive subduction during 2005-2008. The geothermal potential in the northern part of this island can be mapped in more detail using these regional geomagnetic survey data. The earlier estimates of reservoir depth can be further confirmed to a depth of about 800 m. Evaluation of temporal changes of the anomalies gives some possible explanations related to the evolution of the back arc region, large stress accumulations over the LSI region, a specific electrical characteristic of the crust of the Lombok Island region, and a structural discontinuity over this island.
Based on the results, several possible advanced studies involving geomagnetic data and anomaly investigations over the Lombok Island region can be suggested for the future:
i. Monitoring the subduction activity of the back arc region (the Flores Thrust zone) and the accumulated stress over the LSI, which could contribute to medium-term hazard assessment with special attention to earthquake occurrence in this region. Continuous geomagnetic field measurements from a geomagnetic observatory, which could be established in the northern part of Lombok Island, and systematic measurements at several repeat stations can be useful in this regard.
ii. Investigating the specific electrical characteristic (high conductivity) of the crust, which is probably related to some aquifer layers or metal mineralization. This requires other complementary geophysical methods, such as magnetotelluric (MT) or preferably DC resistivity measurements.
iii. Determining the existence of an active structural fault over Lombok Island, which could be relevant to long-term hazard assessment over the LSI region. This needs an extension of geomagnetic investigations over the neighbouring islands (Bali Island in the west and Sumbawa Island in the east; probably also the Sumba and Flores islands). This seems possible because the regional magnetic lineations might be used to delineate some structural discontinuities, based on the modelling of contrasts in crustal magnetizations.
Anti-fat bias is widespread and is linked to the internalization of weight bias and psychosocial problems. The purpose of this study was to examine the internalization of weight bias among children across weight categories and to evaluate the psychometric properties of the Weight Bias Internalization Scale for Children (WBIS-C). Data were collected from 1484 primary school children and their parents. WBIS-C demonstrated good internal consistency (alpha = .86) after exclusion of Item 1. The unitary factor structure was supported using exploratory and confirmatory factor analyses (factorial validity). Girls and overweight children reported higher WBIS-C scores in comparison to boys and non-overweight peers (known-groups validity). Convergent validity was shown by significant correlations with psychosocial problems. Internalization of weight bias explained additional variance in different indicators of psychosocial well-being. The results suggest that the WBIS-C is a psychometrically sound and informative tool to assess weight bias internalization among children.
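The internal-consistency statistic reported above (Cronbach's alpha) has a compact definition that can be checked on simulated item scores; this is a generic illustration, not the WBIS-C data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# toy data: 4 items driven by one shared latent trait plus noise
rng = np.random.default_rng(5)
trait = rng.normal(0, 1, 500)
scores = trait[:, None] + rng.normal(0, 0.7, (500, 4))
alpha = cronbach_alpha(scores)  # high, since the items share one trait
```

Dropping a poorly loading item, as done with Item 1 of the WBIS-C, typically raises alpha when that item shares little variance with the rest.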
Weight-related teasing is a widespread phenomenon in childhood and might foster the internalization of weight bias. The goal of this study was to examine the role of weight teasing and weight bias internalization as mediators between weight status and negative psychological sequelae, such as restrained eating and emotional and conduct problems, in childhood. Participants included 546 female (52%) and 501 male (48%) children aged 7-11 and their parents, who completed surveys assessing weight teasing, weight bias internalization, restrained eating behaviors, and emotional and conduct problems at two measurement points, approximately 2 years apart. To examine the hypothesized mediation, a prospective design using structural equation modeling was applied. As expected, the experience of weight teasing and the internalization of weight bias mediated the relationship between weight status and psychosocial problems. This pattern was observed independently of gender and weight status. Our findings suggest that the experience of weight teasing and the internalization of weight bias are more important than weight status in explaining psychological functioning among children and indicate a need for appropriate prevention and intervention approaches.
The tremendous success of metal-halide perovskites, especially in the field of photovoltaics, has triggered a substantial number of studies aimed at understanding their optoelectronic properties. However, consensus regarding the electronic properties of these perovskites is lacking due to the huge scatter in reported key parameters, such as work function (Φ) and valence band maximum (VBM) values. Here, we demonstrate that the surface photovoltage (SPV) is a key phenomenon at perovskite surfaces featuring a non-negligible density of surface states, which is the rule rather than the exception for most materials under study. Using ultraviolet photoelectron spectroscopy (UPS) and Kelvin probe measurements, we show that even minute UV photon fluxes (500 times lower than those used in typical UPS experiments) are sufficient to induce an SPV and shift the perovskite Φ and VBM by several hundred meV compared to dark conditions. By combining UV and visible light, we establish flat-band conditions (i.e., compensate the surface-state-induced surface band bending) at the surface of four important perovskites and find that all are p-type in the bulk, despite a pronounced n-type surface character in the dark. These findings highlight that SPV effects must be considered in all surface studies to fully understand the photophysical properties of perovskites.
In contrast to the common conception that the interfacial energy-level alignment is fixed once the interface is formed, we demonstrate that heterojunctions between organic semiconductors and metal-halide perovskites exhibit substantial energy-level realignment during photoexcitation. Importantly, the photoinduced level shifts occur in the organic component, including the first molecular layer in direct contact with the perovskite. This is caused by charge-carrier accumulation within the organic semiconductor under illumination and the weak electronic coupling between the junction components.
The remarkable progress of metal halide perovskites in photovoltaics has pushed power conversion efficiencies toward 26%. However, practical applications of perovskite-based solar cells are challenged by stability issues, the most critical of which is photoinduced degradation. Bare CH3NH3PbI3 perovskite films are known to decompose rapidly under vacuum and white-light illumination, releasing methylammonium and iodine as volatile species and leaving residual solid PbI2 and metallic Pb, on a timescale of minutes. We find, in agreement with previous work, that the degradation is non-uniform and proceeds predominantly from the surface, and that illumination under N2 or ambient air (relative humidity 20%) does not induce substantial degradation even after several hours. Yet, in all cases the release of iodine from the perovskite surface is directly identified by X-ray photoelectron spectroscopy. This goes hand in hand with a loss of organic cations and the formation of metallic Pb. When CH3NH3PbI3 films are covered with an organic capping layer a few nm thick, either charge-selective or non-selective, the rapid photodecomposition under ultrahigh vacuum is slowed by more than one order of magnitude, becoming similar in timescale to that under N2 or air. We conclude that the light-induced decomposition of CH3NH3PbI3 into volatile methylammonium and iodine is largely reversible as long as these products are prevented from leaving the surface. This is readily achieved by ambient atmospheric pressure, as well as by a thin organic capping layer even under ultrahigh vacuum. In addition to explaining the impact of gas pressure on the stability of this perovskite, our results indicate that covalently "locking" the position of perovskite components at the surface or an interface should enhance the overall photostability.
Photovoltaic cells based on halide perovskites, possessing remarkably high power conversion efficiencies, have been reported. To push the development of such devices further, a comprehensive and reliable understanding of their electronic properties is essential but presently not available. To provide a solid foundation for understanding the electronic properties of polycrystalline thin films, we employ single-crystal band structure data from angle-resolved photoemission measurements. For two prototypical perovskites (CH3NH3PbBr3 and CH3NH3PbI3), we reveal the band dispersion in two high-symmetry directions and identify the global valence band maxima. With these benchmark data, we construct "standard" photoemission spectra for polycrystalline thin film samples and resolve challenges discussed in the literature concerning the reliable determination of the valence band onset. Within the framework laid out here, the consistency of relating the energy-level alignment in perovskite-based photovoltaic and optoelectronic devices to their functional parameters is substantially enhanced.
During the period 750-600 Ma ago, prior to the final break-up of the supercontinent Rodinia, the crust of both the North American Craton and Baltica was intruded by significant amounts of rift-related magmas originating from the mantle. In the Proterozoic crust of Southern Norway, the 580 Ma old Fen carbonatite-ultramafic complex is representative of this type of rock. In this paper, we report the occurrence of an ultramafic lamprophyre dyke which is possibly linked to the Fen complex, although Ar-40/Ar-39 data from phenocrystic phlogopite from the dyke gave an age of 686 +/- 9 Ma. The lamprophyre dyke was recently discovered in one of the Kongsberg silver mines at Vinoren, Norway. Whole-rock geochemical, geochronological and mineralogical data from the ultramafic lamprophyre dyke are presented with the aim of elucidating its origin and possible geodynamic setting. On the basis of its whole-rock composition, the Vinoren dyke is transitional between carbonatite and kimberlite-II (orangeite); on the basis of its diagnostic mineralogy, it is classified as aillikite. The compositions and xenocrystic nature of several of the major and accessory minerals from the Vinoren aillikite are characteristic of diamondiferous rocks (kimberlites/lamproites/UML): phlogopite with kinoshitalite-rich rims, the chromite-spinel-ulvospinel series, Mg- and Mn-rich ilmenites, rutile and lucasite-(Ce). We suggest that the aillikite melt formed during partial melting of a MARID (mica-amphibole-rutile-ilmenite-diopside)-like source under CO2 fluxing. The pre-rifting geodynamic setting of the Vinoren aillikite before the Rodinia supercontinent breakup suggests a relatively thick SCLM (subcontinental lithospheric mantle) at this stage and might indicate a diamond-bearing source for the parental melt. This contrasts with the roughly 100 Ma younger Fen complex, which was derived from a thin SCLM.
Objective: We investigated the effects of combined balance and strength training on measures of balance and muscle strength in older women with a history of falls.
Methods: Twenty-seven older women aged 70.4 ± 4.1 years (age range: 65 to 75 years) were randomly allocated to either an intervention (IG, n = 12) or an active control (CG, n = 15) group. The IG completed an 8-week combined balance and strength training program with three sessions per week, including visual biofeedback using force plates. The CG received physical therapy and gait training at a rehabilitation center. Training volumes were similar between the groups. Before and after training, tests were administered to assess muscle strength (weight-bearing squat [WBS], measuring the percentage of body mass borne by each leg at different knee flexion angles [0°, 30°, 60°, and 90°], and the sit-to-stand test [STS]) and balance. Balance tests used the modified clinical test of sensory interaction (mCTSIB) with eyes closed (EC) and open (EO), on stable (firm) and unstable (foam) surfaces, as well as spatial gait parameters such as step width and length (cm) and walking speed (cm/s).
Results: Significant group × time interactions were found for the different degrees of knee flexion during the WBS (0.0001 < p < 0.013, 0.441 < d < 0.762). Post hoc tests revealed significant pre-to-post improvements for both legs and for all flexion angles (0.0001 < p < 0.002, 0.697 < d < 1.875) in the IG compared to the CG. Significant group × time interactions were also found for the firm EO, foam EO, firm EC, and foam EC conditions (0.006 < p < 0.029; 0.302 < d < 0.518). Post hoc tests showed significant pre-to-post improvements across all sensory conditions (0.0001 < p < 0.004, 0.753 < d < 2.097) in the IG compared to the CG. This study indicates that combined balance and strength training improved the percentage distribution of body weight between the legs at different knee flexion angles (0°, 30°, 60°, and 90°) and also reduced sway oscillation on a firm surface with eyes closed and on a foam surface (with eyes open or closed) in the IG.
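As context for the effect sizes reported above, a minimal sketch of one standard way to compute Cohen's d for pre/post scores, using the pooled SD of the two time points. The numbers are toy values for illustration, not the study data, and other d conventions (e.g. based on the SD of change scores) exist:

```python
import math

def cohens_d(pre, post):
    """Cohen's d for pre/post scores, pooling the SDs of the two time points."""
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt((var(pre) + var(post)) / 2)  # pooled sample SD
    return (mean(post) - mean(pre)) / pooled_sd

# Hypothetical balance scores before and after training
pre = [10, 12, 11, 13, 9]
post = [14, 15, 13, 16, 12]
print(round(cohens_d(pre, post), 3))  # → 1.897
```

By common convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or above large, so several of the effects reported above (d up to 2.097) would count as very large.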
Conclusion: The larger positive effects of training seen in standing balance tests, compared with dynamic tests, suggest that balance training including lateral, forward, and backward exercises improves static balance to a greater extent in older women.