Monitoring is a key functionality for automated decision making, as performed by self-adaptive systems. Effective monitoring provides the relevant information in time. This can be achieved by exhaustive monitoring, but at the cost of high consumption of economic and ecological resources. In contrast, our generic adaptive monitoring approach supports effectiveness with increased efficiency. It also adapts to changes in the information demand and in the monitored system without additional configuration or software implementation effort. The approach observes the executions of runtime model queries and processes change events to determine the currently required monitoring configuration. In this paper, we explicate different possibilities to use the approach and evaluate their characteristics regarding phenomenon detection time and monitoring effort. Our approach allows balancing between these two characteristics, which makes it an interesting option for the monitoring function of self-adaptive systems, since for them very short-lived phenomena are usually not relevant.
Working in iterations and repeatedly improving team workflows based on collected feedback is fundamental to agile software development processes. Scrum, the most popular agile method, provides dedicated retrospective meetings to reflect on the last development iteration and to decide on process improvement actions. However, agile methods do not prescribe in detail how these improvement actions should be identified, managed, or tracked. Approaches to detecting and removing problems in software development processes are therefore often based on intuition and on the prior experiences and perceptions of team members. Previous research in this area has focused on approaches to eliciting a team's improvement opportunities, as well as on measurements regarding the work performed in an iteration, e.g., Scrum burn-down charts. Little research deals with the quality and nature of identified problems or with how progress towards removing issues is measured. In this research, we investigate how agile development teams in the professional software industry organize their feedback and process improvement approaches. In particular, we focus on the structure and content of improvement and reflection meetings, i.e., retrospectives, and their outcomes. Researching how the vital mechanism of process improvement is implemented in practice in modern software development leads to a more complete picture of agile process improvement.
Feedback in Scrum
(2019)
Improving the way teams work together by reflecting on and improving the executed process is at the heart of agile processes. The idea of iterative process improvement takes various forms in different agile development methodologies, e.g., Scrum retrospectives. However, these methods do not prescribe in detail how improvement steps should be conducted. In this research we investigate how agile software teams can use their development data, such as commits or tickets created during regular development activities, to drive and track process improvement steps. Our previous research focused on data-informed process improvement in the context of student teams, where controlled circumstances and deep domain knowledge allowed the creation and use of specific process measures. Encouraged by positive results in this area, we investigate the process improvement approaches employed in industry teams. Researching how the vital mechanism of process improvement is implemented, and how development data is already being used in practice in modern software development, leads to a more complete picture of agile process improvement. It is the first step towards enabling a data-informed feedback and improvement process, tailored to a team's context and based on the development data of individual teams.
By adapting the Cheeger-Simons approach to differential cohomology, we establish a notion of differential cohomology with compact support. We show that it is functorial with respect to open embeddings and that it fits into a natural diagram of exact sequences which compare it to compactly supported singular cohomology and differential forms with compact support, in full analogy to ordinary differential cohomology. We prove an excision theorem for differential cohomology using a suitable relative version. Furthermore, we use our model to give an independent proof of Pontryagin duality for differential cohomology, recovering a result of [Harvey, Lawson, Zweck - Amer. J. Math. 125 (2003), 791]: On any oriented manifold, ordinary differential cohomology is isomorphic to the smooth Pontryagin dual of compactly supported differential cohomology. For manifolds of finite type, a similar result is obtained by interchanging ordinary with compactly supported differential cohomology.
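For orientation, the analogy invoked above refers to the two standard short exact sequences characterizing ordinary differential cohomology (standard facts, reproduced here in common notation; the paper establishes compactly supported counterparts):

```latex
0 \to \Omega^{k-1}(M)/\Omega^{k-1}_{\mathbb{Z}}(M) \to \hat{H}^{k}(M) \to H^{k}(M;\mathbb{Z}) \to 0,
\qquad
0 \to H^{k-1}(M;\mathbb{R}/\mathbb{Z}) \to \hat{H}^{k}(M) \xrightarrow{\ \mathrm{curv}\ } \Omega^{k}_{\mathbb{Z}}(M) \to 0,
```

where Ω^k_Z(M) denotes the closed k-forms with integral periods.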
The nuclear envelope consists of the outer and the inner nuclear membrane, the nuclear lamina, and the nuclear pore complexes, which regulate nuclear import and export. The major constituent of the nuclear lamina of Dictyostelium is the lamin NE81. It can form filaments like B-type lamins, and it interacts with Sun1 as well as with the LEM/HeH-family protein Src1. Sun1 and Src1 are nuclear envelope transmembrane proteins involved in the centrosome-nucleus connection and in nuclear envelope stability at the nucleolar regions, respectively. In conjunction with a KASH-domain protein, Sun1 usually forms a so-called LINC complex. Two proteins with functions reminiscent of KASH-domain proteins at the outer nuclear membrane of Dictyostelium are known: interaptin, which serves as an actin connector, and the kinesin Kif9, which plays a role in the microtubule-centrosome connection. However, both of these lack the conserved KASH domain. The link of the centrosome to the nuclear envelope is essential for the insertion of the centrosome into the nuclear envelope and for appropriate spindle formation. Moreover, centrosome insertion is involved in the permeabilization of the mitotic nucleus, which ensures access of tubulin dimers and spindle assembly factors. Our recent progress in identifying key molecular players at the nuclear envelope of Dictyostelium promises further insights into the mechanisms of nuclear envelope dynamics.
alt'ai is an agent-based simulation inspired by the aesthetics, culture, and environmental conditions of the Altai mountain region on the borders between Russia, Kazakhstan, China, and Mongolia. It is set in a scenario of a remote automated landscape populated by sentient machines, where biological species, machines, and environments autonomously interact to produce unforeseeable visual outputs. It poses the question of designing future machine-to-machine authentication protocols based on the use of images encoding agent behavior, and the simulation provides a rich visual perspective on this challenge. The project pleads for a heavily aestheticized approach to design practice and highlights the importance of productively inefficient and information-redundant systems.
The Schwarzenberg mining district in the western Erzgebirge hosts numerous skarn-hosted tin-polymetallic deposits, such as Breitenbrunn. The St. Christoph mine is located in the Breitenbrunn deposit and is the locus typicus of christophite, an iron-rich sphalerite variety that can be associated with indium enrichment. This study presents a revision of the paragenetic scheme, a contribution on indium behavior and potential, and a discussion of the origin of the sulfur. This was achieved through reflected light microscopy, SEM-based MLA, EPMA, and bulk mineral sulfur isotope analysis on 37 sulfide-rich skarn samples from a mineral collection. The paragenetic scheme includes a pre-mineralization stage of anhydrous calc-silicates and hydrous minerals; an oxide stage dominated by magnetite; and a sulfide stage of predominantly sphalerite with minor pyrite, chalcopyrite, arsenopyrite, and galena. Some sphalerite samples show elevated indium contents of up to 0.44 wt%. Elevated iron contents (4-10 wt%) in sphalerite can be tentatively linked to increased indium incorporation, but further analyses are required. The analyzed sulfides exhibit homogeneous δ³⁴S values (−1 to +2 ‰ VCDT), assumed to be post-magmatic. They correlate with other Fe-Sn-Zn-Cu-In skarn deposits in the western Erzgebirge and with Permian vein-hosted associations throughout the Erzgebirge region.
Secondary mica minerals collected from the Santa Helena (W-(Cu) mineralization) and Venise (W-Mo mineralization) endogenic breccia structures were ⁴⁰Ar/³⁹Ar dated. The muscovite ⁴⁰Ar/³⁹Ar data yielded 286.8 ± 1.2 Ma (±1σ; samples 6Ha and 11Ha), which reflects the age of secondary muscovite formation, probably from magmatic biotite or feldspar alteration. Sericite ⁴⁰Ar/³⁹Ar data yielded 280.9 ± 1.2 Ma to 279.0 ± 1.1 Ma (±1σ; samples 6Hb and 11Hb), reflecting the age of greisen alteration (T ≈ 300 °C), where the disseminated W mineralization occurs. The muscovite ⁴⁰Ar/³⁹Ar data of 277.3 ± 1.3 Ma and 281.3 ± 1.2 Ma (±1σ; samples 5 and 6) also reflect the age of muscovite (selvage) crystallized adjacent to molybdenite veins within the Venise breccia. The geochronological data confirm that the W mineralization at the Santa Helena breccia is older than the Mo mineralization at the Venise breccia. Moreover, the timing of hydrothermal circulation and cooling was no longer than 7 Ma for the W-stage deposition and no longer than 4 Ma for the Mo deposition.
Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems
(2019)
In this paper, we consider the linear ill-posed inverse problem with noisy data in the statistical learning setting. The Tikhonov regularization scheme in Hilbert scales is considered in the reproducing kernel Hilbert space framework to reconstruct the estimator from the random noisy data. We discuss the rates of convergence for the regularized solution under the prior assumptions and a link condition. For regression functions with smoothness given in terms of source conditions, the error bound can be established explicitly.
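As a sketch of the scheme (our notation, under simplifying assumptions; the paper's precise setting involves a general operator equation and random design): with samples $(x_i, y_i)$, a forward operator $A$, and an unbounded self-adjoint operator $L$ generating the Hilbert scale, the Tikhonov estimator reads

```latex
f_{\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathrm{dom}(L^{a})}
\left\{ \frac{1}{n} \sum_{i=1}^{n} \bigl( (Af)(x_i) - y_i \bigr)^{2}
\;+\; \lambda \, \lVert L^{a} f \rVert^{2} \right\}, \qquad a > 0.
```

The penalty is called oversmoothing when the true solution does not belong to $\mathrm{dom}(L^{a})$, so its penalty value is infinite and the analysis cannot proceed via the usual source condition in the penalty norm.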
Head-Driven Phrase Structure Grammar is a constraint-based theory. It uses features and values to model linguistic objects. Values may be complex, e.g., consist of feature-value pairs themselves. The paper shows that such feature-value pairs, together with identity of values and relations between feature values, are sufficient to develop a complete linguistic theory covering all linguistic levels of description. The paper explains the goals of researchers working in the framework and the way they deal with data and motivate their analyses. The framework is explained with respect to an example sentence that involves the following phenomena: valence, constituent structure, adjunction/modification, raising, case assignment, nonlocal dependencies, and relative clauses.
Network Creation Games are a well-known approach for explaining and analyzing the structure, quality and dynamics of real-world networks like the Internet and other infrastructure networks which evolved via the interaction of selfish agents without a central authority. In these games selfish agents which correspond to nodes in a network strategically buy incident edges to improve their centrality. However, past research on these games has only considered the creation of networks with unit-weight edges. In practice, e.g. when constructing a fiber-optic network, the choice of which nodes to connect and also the induced price for a link crucially depends on the distance between the involved nodes and such settings can be modeled via edge-weighted graphs. We incorporate arbitrary edge weights by generalizing the well-known model by Fabrikant et al. [PODC'03] to edge-weighted host graphs and focus on the geometric setting where the weights are induced by the distances in some metric space. In stark contrast to the state-of-the-art for the unit-weight version, where the Price of Anarchy is conjectured to be constant and where resolving this is a major open problem, we prove a tight non-constant bound on the Price of Anarchy for the metric version and a slightly weaker upper bound for the non-metric case. Moreover, we analyze the existence of equilibria, the computational hardness and the game dynamics for several natural metrics. The model we propose can be seen as the game-theoretic analogue of a variant of the classical Network Design Problem. Thus, low-cost equilibria of our game correspond to decentralized and stable approximations of the optimum network design.
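For orientation, in the unit-weight model of Fabrikant et al. each agent $v$ buys an edge set $E_v$ at price $\alpha$ per edge and additionally pays its total distance cost. One natural reading of the edge-weighted generalization replaces both terms by their weighted counterparts (a sketch in our notation, not necessarily the paper's exact formulation):

```latex
c(v) \;=\; \alpha \sum_{e \in E_v} w(e) \;+\; \sum_{u \in V} d_{(G,w)}(v,u),
```

where $d_{(G,w)}$ denotes shortest-path distance in the built network under the edge weights $w$; setting $w \equiv 1$ recovers the classical model.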
A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or 3D digital terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
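A minimal CPU-side sketch of geometry-based draping (illustrative only; the paper's approach runs on the GPU against an image-based terrain representation): densify each polyline segment and assign each sample the terrain height obtained by bilinear interpolation from a height field.

```python
import math

def bilinear_height(dem, x, y):
    """Bilinearly interpolated height of a row-major grid `dem` at (x, y).

    Unit-sized grid cells are assumed; (x, y) must lie inside the grid so
    that all four neighbouring samples exist.
    """
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = dem[y0][x0] * (1 - fx) + dem[y0][x0 + 1] * fx
    bottom = dem[y0 + 1][x0] * (1 - fx) + dem[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

def drape_polyline(dem, polyline, step=0.25):
    """Densify a 2D polyline at roughly `step` spacing and lift it onto the DEM."""
    points3d = []
    for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
        n = max(1, int(math.hypot(bx - ax, by - ay) / step))
        for i in range(n + 1):
            t = i / n
            x, y = ax + t * (bx - ax), ay + t * (by - ay)
            points3d.append((x, y, bilinear_height(dem, x, y)))
    return points3d
```

A GPU variant would perform the same height lookup per vertex (or per fragment) in a shader, sampling the terrain's height texture instead of a Python list.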
The availability of detailed virtual 3D building models, including representations of indoor elements, allows for a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically driven configuration in the context of 3D indoor models.
Interactive Close-Up Rendering for Detail plus Overview Visualization of 3D Digital Terrain Models
(2019)
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data that varies with respect to geometric scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real time.
We investigate how the technology acceptance and learning experience of the digital education platform HPI Schul-Cloud (HPI School Cloud) for German secondary school teachers can be improved by proposing a user-centered research and development framework. We highlight the importance of developing digital learning technologies in a user-centered way in order to take differences in the requirements of educators and students into account. We suggest applying qualitative and quantitative methods to build a solid understanding of a learning platform's users, their needs, requirements, and their context of use. After concept development and idea generation of features and areas of opportunity based on the user research, we emphasize the application of a multi-attribute utility analysis decision-making framework to prioritize ideas rationally, taking the results of the user research into account. Afterward, we recommend applying the build-learn-iterate principle to build prototypes at different resolutions while learning from user tests and improving the selected opportunities. Last but not least, we propose an approach for continuous short- and long-term user experience controlling and monitoring, extending existing web and learning analytics metrics.
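A multi-attribute utility analysis of this kind can be sketched as a weighted scoring of candidate ideas (a minimal illustration; the attribute names, weights, and scores below are made up, and all attributes are scored so that higher is better):

```python
def utility_scores(ideas, weights):
    """Rank ideas by the weighted sum of their attribute scores.

    `ideas` maps an idea name to {attribute: score}; `weights` maps the same
    attribute names to their relative importance. Weights are normalized so
    that only their ratios matter.
    """
    total_weight = sum(weights.values())
    scored = {
        name: sum(weights[attr] * score for attr, score in attrs.items()) / total_weight
        for name, attrs in ideas.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# Hypothetical feature ideas scored on 1-10 scales (higher = better).
ideas = {
    "offline mode": {"user_value": 9, "feasibility": 3, "strategic_fit": 5},
    "dark theme": {"user_value": 4, "feasibility": 8, "strategic_fit": 9},
}
weights = {"user_value": 0.5, "feasibility": 0.3, "strategic_fit": 0.2}
ranking = utility_scores(ideas, weights)
```

In practice the weights themselves would be derived from the user research rather than set ad hoc.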
Bridging the Gap
(2019)
The recent restructuring of the electricity grid (i.e., the smart grid) introduces a number of challenges for today's large-scale computing systems. To operate reliably and efficiently, computing systems must not only adhere to technical limits (e.g., thermal constraints) but must also reduce operating costs, for example by increasing their energy efficiency. Efforts to improve energy efficiency, however, are often hampered by inflexible software components that hardly adapt to the underlying hardware characteristics. In this paper, we propose an approach to bridge the gap between inflexible software and heterogeneous hardware architectures. Our proposal introduces adaptive software components that dynamically adapt to heterogeneous processing units (i.e., accelerators) at runtime to improve the energy efficiency of computing systems.
It is assumed that, in addition to the family background and child characteristics, children’s learning environments are crucial for the acquisition of early competencies. This study aimed to compare the effects of the home and institutional learning environments on young children’s vocabulary and to test necessary conditions for a potential compensatory effect of the institutional learning environment. Using longitudinal data from N = 557 preschool children (German National Educational Panel Study), we analysed to what extent family background and children’s characteristics predicted the home and institutional learning environments, and to what extent these learning environments predicted vocabulary in preschool and primary school. In order to test whether both learning environments predict vocabulary separately, we used almost identical indicators to operationalize them. The effects were estimated within a structural equation model. The study revealed that both the home and the institutional learning environment had small and separate effects on children’s vocabulary. The home learning environment was more closely related to the family background, while the institutional learning environment was more closely related to the children’s characteristics. This opens new possibilities for discussing compensatory effects.
Monte Carlo calculations are carried out to simulate light transport in dense materials. The focus lies on the calculation of diffuse light transmission through films of scattering and absorbing media, additionally considering the effect of dependent scattering. Different influences, such as the interaction type between particles, particle size, and composition, can be studied with this program. The simulations in this study show major influences on the diffuse transmission. Further simulations are carried out to model a sunscreen film and to determine the best compositions of such a film; these will be presented.
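The core of such a simulation can be sketched as a photon random walk through a slab (a deliberately simplified illustration, not the study's program: scattering is assumed isotropic and independent, and the count includes the ballistic part of the transmitted light):

```python
import math
import random

def slab_transmission(thickness, mu_s, mu_a, n_photons=10000, seed=1):
    """Estimate the transmission of a scattering/absorbing slab.

    thickness   slab thickness
    mu_s, mu_a  scattering and absorption coefficients (inverse length)
    """
    rng = random.Random(seed)
    mu_t = mu_s + mu_a           # total interaction coefficient
    albedo = mu_s / mu_t         # probability that an interaction scatters
    transmitted = 0
    for _ in range(n_photons):
        z, cos_theta = 0.0, 1.0  # start at the top face, travelling downward
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t  # sampled free path
            z += step * cos_theta
            if z >= thickness:   # photon exits through the bottom face
                transmitted += 1
                break
            if z <= 0.0:         # photon exits back through the top face
                break
            if rng.random() > albedo:                    # photon is absorbed
                break
            cos_theta = 2.0 * rng.random() - 1.0         # isotropic rescatter
    return transmitted / n_photons
```

With, e.g., mu_s = 50 and mu_a = 0.5 (arbitrary units), the estimated transmission drops as the film thickness grows; a dependent-scattering correction, as studied in the paper, would modify the effective mu_s before this step.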
Mise-Unseen
(2019)
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeable by using gaze in combination with common masking techniques.
Mixed methods approaches have become increasingly relevant in social sciences research over the last few decades. Nevertheless, we show that these approaches have rarely been explicitly applied in higher education research. This is somewhat surprising because mixed methods and empirical research into higher education seem to be a perfect match for several reasons: (1) the role of the researcher, which is associated with strong intersections between the research subject and the research object; (2) the research process, which relies on concepts and theories that are borrowed from other research fields; and (3) the research object, which exhibits unclear techniques in teaching and learning, making it difficult to grasp causalities between input and results. Mixed methods approaches provide a suitable methodology to research such topics. Beyond this, potential future developments underlining the particular relevance of mixed methods approaches in higher education are discussed.
SpringFit
(2019)
Joints are crucial to laser cutting, as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to changes in stiffness that occur when switching to a different material. We present a software tool called SpringFit that resolves this problem by replacing the problematic press-fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used SpringFit to convert 14 models downloaded from the web.
Stuck in the past?
(2019)
After the Civil War the Spanish army functioned as a guardian of domestic order, but it suffered from antiquated materiel and scarce financial means. These factors have been described as fundamental reasons for the army’s low potential wartime capability. This article draws on British and German sources to demonstrate how Spanish military culture prevented increased effectiveness and organisational change. Claiming that the army merely lacked funding and modern equipment falls considerably short of grasping the complexities of military effectiveness and organisational cultures, and might prove fatal for current attempts to develop foreign armed forces in conflict or post-conflict zones.
Based on the notion that time, space, and number are part of a generalized magnitude system, we assume that the dual-systems approach to temporal cognition also applies to numerical cognition. Referring to theoretical models of the development of numerical concepts, we propose that children's early skills in processing numbers can be described analogously to temporal updating and temporal reasoning.
Evaluating the performance of self-adaptive systems (SAS) is challenging due to their complexity and interaction with the often highly dynamic environment. In the context of self-healing systems (SHS), employing simulators has been shown to be the most dominant means for performance evaluation. Simulating a SHS also requires realistic fault injection scenarios. We study the state of the practice for evaluating the performance of SHS by means of a systematic literature review. We present the current practice and point out that a more thorough and careful treatment in evaluating the performance of SHS is required.
Spherical particles are routinely monitored and described by hydrodynamic diameters determined, e.g., by light scattering techniques. Non-spherical particles such as prolate ellipsoids require alternative techniques to characterize particle size as well as particle shape. In this study, oligo(epsilon-caprolactone) (oCL) based micronetwork (MN) particles with a shape-shifting function based on their shape-memory capability were programmed from a spherical to a prolate ellipsoidal shape, aided by incorporation in and stretching of a water-soluble phantom matrix. By applying light microscopy with automated contour detection and aspect ratio analysis, differences in the characteristic aspect ratio distributions of non-crosslinked microparticles (MPs) and crosslinked MNs were detected when the degree of phantom elongation (30-290%) was increased. The thermally induced shape recovery of programmed MNs starts in the body rather than at the tips of the ellipsoids, which may be explained by local differences in micronetwork deformation. By this approach, fascinating intermediate particle shapes with round bodies and two opposite sharp tips can be obtained, which could be of interest, e.g., in valves or other technical devices in which the tips allow the switchable particle to be temporarily encaged in the desired position.
Persulfide groups participate in a wide array of biochemical pathways and are chemically very versatile. The TusA protein has been identified as a central element supplying and transferring sulfur as persulfide to a number of important biosynthetic pathways, like molybdenum cofactor biosynthesis or thiomodifications in nucleosides of tRNAs. In recent years, it has furthermore become obvious that this protein is indispensable for the oxidation of sulfur compounds in the cytoplasm. Phylogenetic analyses revealed that different TusA protein variants exist in certain organisms, which have evolved to pursue specific roles in cellular pathways. These specific TusA-like proteins cannot replace each other in their specific roles and are rather specific to one sulfur transfer pathway or shared between two pathways. While certain bacteria like Escherichia coli contain several copies of TusA-like proteins, in other bacteria like Allochromatium vinosum a single copy of TusA is present, with an essential role for this organism. Here, we give an overview of the multiple roles of the various TusA-like proteins in sulfur transfer pathways in different organisms to shed light on the remaining mysteries of this versatile protein.
The making of Tupaia’s map
(2019)
Tupaia’s Map is one of the most famous and enigmatic artefacts to emerge from the early encounters between Europeans and Pacific Islanders. It was drawn by Tupaia, an arioi priest, chiefly advisor and master navigator from Ra‘iātea in the Leeward Society Islands in collaboration with various members of the crew of James Cook’s Endeavour, in two distinct moments of mapmaking and three draft stages between August 1769 and February 1770. To this day, the identity of many islands on the chart, and the logic of their arrangement have posed a riddle to researchers. Drawing in part on archival material hitherto overlooked, in this long essay we propose a new understanding of the chart’s cartographic logic, offer a detailed reconstruction of its genesis, and thus for the first time present a comprehensive reading of Tupaia’s Map. The chart not only underscores the extent and mastery of Polynesian navigation, it is also a remarkable feat of translation between two very different wayfinding systems and their respective representational models.
Water fluxes in highly impounded regions are heavily dependent on reservoir properties. However, for large and remote areas, this information is often unavailable. In this study, the geometry and volume of small surface reservoirs in the semi-arid region of Brazil were estimated using terrain and shape attributes extracted by remote sensing. Regression models and data classification were used to predict the volumes, at different water stages, of 312 reservoirs for which topographic information is available. The power function used to describe the reservoir shapes tends to overestimate the volumes; therefore, a modified shape equation was proposed. Among the methods tested, four were recommended based on performance and simplicity, for which the mean absolute percentage errors varied from 24 to 39%, in contrast to the 94% error achieved with the traditional method. Despite the challenge of precisely deriving the flooded areas of reservoirs, water management in highly reservoir-dense environments should benefit from volume prediction based on remote sensing.
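The area-volume approach described can be illustrated with a simple power-law relation (the coefficients below are hypothetical placeholders, not the values fitted in the study, which also proposes a modified shape equation to reduce overestimation):

```python
def reservoir_volume(area_m2, a=0.002, b=1.5):
    """Predict reservoir volume (m^3) from flooded area (m^2) via V = a * A**b.

    `a` and `b` are illustrative placeholders; in practice they are fitted
    regionally from reservoirs with known topography.
    """
    if area_m2 < 0:
        raise ValueError("area must be non-negative")
    return a * area_m2 ** b

def mean_absolute_percentage_error(predicted, observed):
    """MAPE in percent, the error metric reported in the study."""
    return 100.0 * sum(abs(p - o) / o for p, o in zip(predicted, observed)) / len(observed)
```

Because the exponent exceeds 1, quadrupling the flooded area multiplies the predicted volume by 4**1.5 = 8, which is why errors in the remotely sensed areas propagate strongly into the volume estimates.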
Based on data from the ESA Gaia Data Release 2 (DR2) and several ground-based, multi-band photometry surveys we have compiled an all-sky catalogue of 39 800 hot subluminous star candidates selected in Gaia DR2 by means of colour, absolute magnitude, and reduced proper motion cuts. We expect the majority of the candidates to be hot subdwarf stars of spectral type B and O, followed by blue horizontal branch stars of late B-type (HBB), hot post-AGB stars, and central stars of planetary nebulae. The contamination by cooler stars should be about 10%. The catalogue is magnitude limited to Gaia G < 19 mag and covers the whole sky. Except within the Galactic plane and LMC/SMC regions, we expect the catalogue to be almost complete up to about 1.5 kpc. The main purpose of this catalogue is to serve as input target list for the large-scale photometric and spectroscopic surveys which are ongoing or scheduled to start in the coming years. In the long run, securing a statistically significant sample of spectroscopically confirmed hot subluminous stars is key to advance towards a more detailed understanding of the latest stages of stellar evolution for single and binary stars.
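The selection cuts mentioned rely on standard quantities that are easy to state (the formulas below are the textbook definitions, not necessarily the exact cuts used for the catalogue):

```python
import math

def absolute_magnitude(g_mag, parallax_mas):
    """Absolute magnitude from apparent magnitude and parallax in mas:
    M_G = G + 5*log10(parallax_mas) - 10, the distance modulus with
    d = 1000 / parallax_mas parsec."""
    return g_mag + 5.0 * math.log10(parallax_mas) - 10.0

def reduced_proper_motion(g_mag, pm_arcsec_yr):
    """Reduced proper motion H_G = G + 5*log10(mu) + 5, with mu in arcsec/yr:
    a distance-free luminosity proxy that helps separate nearby subluminous
    stars from distant luminous stars of similar colour."""
    return g_mag + 5.0 * math.log10(pm_arcsec_yr) + 5.0
```

A star with G = 10 and a parallax of 100 mas sits at 10 pc, so its absolute magnitude equals its apparent magnitude; hot subdwarf candidates are then selected from the blue, subluminous region of the resulting colour-magnitude and reduced-proper-motion diagrams.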
This study is concerned with repair practices that a teacher and students employ to restore intersubjectivity when faced with interactional problems in a Content and Language Integrated Learning (CLIL) classroom. Adopting a conversation analytic (CA) approach, it examines the interactional treatment of students’ verbal and embodied trouble displays in a video-recorded, teacher-fronted geography lesson held in English at a German high school. At the same time, it explores to what extent the repair practices employed are fitted to this specific interactional context. The analysis shows that students’ verbal trouble displays often result in extensive repair sequences, whereas students’ embodied trouble displays are usually met with teacher self-repair in the transition space. In this way, the latter are resolved much earlier and more quickly. The study further reveals practices like reformulation and translation to be especially useful for repairing interactional problems in classrooms in which a foreign language is used as the medium of instruction. The findings may be of interest for prospective as well as practicing teachers in that they provide relevant insights into how interactional trouble can be successfully managed in (CLIL) classroom interaction.
Shrub encroachment in semi-arid savannas is induced by interacting effects of climate, fire suppression, and unsustainable livestock farming; it carries a severe risk of land degradation and strongly influences natural communities that provide key ecosystem functions. However, species-specific effects of shrub cover on many animal groups that act as indicators of degradation remain largely unknown. We analysed the consequences of shrub encroachment for ground-dwelling beetles in a semi-arid Namibian savanna rangeland, where beetles and vegetation were recorded along a shrub cover gradient (30%). Focusing on species niche breadths and optima, we identified two crucial shrub cover thresholds (2.9% and 10.0%), corresponding to major changes in the beetle communities with implications for savanna ecosystem functioning. Niche optima of most species were between the first and second thresholds; beyond the second threshold, saprophagous, coprophagous, and rare predatory beetles declined in numbers and diversity. This is problematic because beetles provide important ecosystem functions, such as decomposition and nutrient cycling. However, we also found that certain species were adapted to high shrub cover, thus providing examples of niche differentiation. Despite the predominantly negative effects of heavy shrub encroachment on beetle communities, shrubs in their early life stages apparently provide essential structures, which enhance habitat quality for ground-dwelling beetles. Our results demonstrate that shrub encroachment can have mixed effects on ground-dwelling beetle communities and hence on savanna ecosystem functioning. We, therefore, conclude that rangeland management and restoration should consider the complex trade-offs between species-specific effects and the level of encroachment for sustainable land use.
The removal, redistribution, and transient storage of sediments in tectonically active mountain belts is thought to exert a first-order control on shallow crustal stresses, fault activity, and hence on the spatiotemporal pattern of regional deformation processes. Accordingly, sediment loading and unloading cycles in intermontane sedimentary basins may inhibit or promote intrabasinal faulting, respectively, but unambiguous evidence for this potential link has been elusive so far. Here we combine 2D numerical experiments that simulate contractional deformation in a broken-foreland setting (i.e., a foreland where shortening is diachronously absorbed by spatially disparate, reverse faults uplifting basement blocks) with field data from intermontane basins in the NW Argentine Andes. Our modeling results suggest that thicker sedimentary fills (>0.7-1.0 km) may suppress basinal faulting processes, while thinner fills (<0.7 km) tend to delay faulting. Conversely, the removal of sedimentary loads via fluvial incision and basin excavation promotes renewed intrabasinal faulting. These results help to better understand the tectono-sedimentary history of intermontane basins that straddle the eastern border of the Andean Plateau in northwestern Argentina. For example, the Santa Maria and the Humahuaca basins record intrabasinal deformation during or after sediment unloading, while the Quebrada del Toro Basin reflects the suppression of intrabasinal faulting due to loading by coarse conglomerates. We conclude that sedimentary loading and unloading cycles may exert a fundamental control on spatiotemporal deformation patterns in intermontane basins of tectonically active broken forelands.
The study of massive stars in different metallicity environments is a central topic of current stellar research. The spectral analysis of massive stars requires adequate model atmospheres. The computation of such models is difficult and time-consuming. Therefore, spectral analyses are greatly facilitated if they can refer to existing grids of models. Here we provide grids of model atmospheres for OB-type stars at metallicities corresponding to the Small and Large Magellanic Clouds, as well as to solar metallicity. In total, the grids comprise 785 individual models. The models were calculated using the state-of-the-art Potsdam Wolf-Rayet (PoWR) model atmosphere code. The parameter domain of the grids was set up using stellar evolution tracks. For all these models, we provide normalized and flux-calibrated spectra, spectral energy distributions, feedback parameters such as ionizing photons, Zanstra temperatures, and photometric magnitudes. The atmospheric structures (the density and temperature stratification) are available as well. All these data are publicly accessible through the PoWR website.
The blazar Mrk 501 (z = 0.034) was observed at very high energies (VHE, E ≳ 100 GeV) during a bright gamma-ray flare on the night of 2014 June 23-24 (MJD 56832) with the H.E.S.S. phase-II array of Cherenkov telescopes. Data taken that night by H.E.S.S. at large zenith angle reveal an exceptional number of gamma-ray photons at multi-TeV energies, with rapid flux variability and an energy coverage extending significantly up to 20 TeV. This data set is used to constrain Lorentz invariance violation (LIV) using two independent channels: a temporal approach considers the possibility of an energy dependence in the arrival time of gamma-rays, whereas a spectral approach considers the possibility of modifications to the interaction of VHE gamma-rays with extragalactic background light (EBL) photons. The non-detection of energy-dependent time delays and the non-observation of deviations between the measured spectrum and that of a supposed power-law intrinsic spectrum with standard EBL attenuation are used independently to derive strong constraints on the energy scale of LIV (E_QG) in the subluminal scenario for linear and quadratic perturbations in the dispersion relation of photons. For the case of linear perturbations, the 95% confidence level limits obtained are E_QG,1 > 3.6 × 10¹⁷ GeV using the temporal approach and E_QG,1 > 2.6 × 10¹⁹ GeV using the spectral approach. For the case of quadratic perturbations, the limits obtained are E_QG,2 > 8.5 × 10¹⁰ GeV using the temporal approach and E_QG,2 > 7.8 × 10¹¹ GeV using the spectral approach.
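The temporal channel described above searches for an energy-dependent arrival delay. As a rough numerical illustration, the linear-LIV delay scales as dt ≈ (ΔE/E_QG)(z/H0) at low redshift; the Python sketch below uses this simple low-redshift distance treatment and an assumed H0 of 70 km/s/Mpc, neither of which reproduces the actual H.E.S.S. analysis:

```python
# Order-of-magnitude estimate of a linear-LIV arrival-time delay,
# dt ~ (dE / E_QG) * (z / H0), valid for z << 1. The H0 value and the
# flat low-redshift distance treatment are simplifying assumptions.
H0 = 70.0e5 / 3.086e24   # 70 km/s/Mpc expressed in s^-1

def liv_delay(dE_GeV, E_QG_GeV, z):
    """Approximate delay (s) for a photon carrying extra energy dE_GeV."""
    travel_time = z / H0                 # light-travel time in seconds
    return (dE_GeV / E_QG_GeV) * travel_time

# A 20 TeV photon from Mrk 501 (z = 0.034) against E_QG,1 = 3.6e17 GeV:
dt = liv_delay(2.0e4, 3.6e17, 0.034)     # of order 10^2 to 10^3 seconds
```

A delay of this size, spread over a 20 TeV lever arm, is what the observed minute-scale flux variability makes detectable.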
The Galactic WC and WO stars
(2019)
Wolf-Rayet stars of the carbon sequence (WC stars) are an important cornerstone in the late evolution of massive stars before their core collapse. As core-helium burning, hydrogen-free objects with huge mass loss, they are likely the last observable stage before collapse and thus promising progenitor candidates for type Ib/c supernovae. Their strong mass loss furthermore provides challenges and constraints to the theory of radiatively driven winds. Thus, the determination of the WC star parameters is of major importance for several astrophysical fields. With Gaia DR2, parallaxes for a large sample of Galactic WC stars are available for the first time, removing major uncertainties inherent to earlier studies. In this work, we re-examine a previously studied sample of WC stars to derive key properties of the Galactic WC population. All quantities depending on the distance are updated, while the underlying spectral analyses remain untouched. Contrasting earlier assumptions, our study yields that WC stars of the same subtype can vary significantly in absolute magnitude. With Gaia DR2, the picture of the Galactic WC population becomes more complex: we obtain luminosities ranging from log L/L⊙ = 4.9 to 6.0, with one outlier (WR 119) having log L/L⊙ = 4.7. This indicates that the WC stars are likely formed from a broader initial mass range than previously assumed. We obtain mass-loss rates ranging between log Ṁ = −5.1 and −4.1, with Ṁ ∝ L^0.68 and a linear scaling of the modified wind momentum with luminosity. We discuss the implications for stellar evolution, including unsolved issues regarding the need for envelope inflation to address the WR radius problem, and the open questions regarding the connection of WR stars with gamma-ray bursts. WC and WO stars are progenitors of massive black holes, collapsing either silently or in a supernova that most likely has to be preceded by a WO stage.
Context. We present a detailed view of the pulsar wind nebula (PWN) HESS J1825-137. Aims. We aim to constrain the mechanisms dominating the particle transport within the nebula, accounting for its anomalously large size and spectral characteristics. Methods. The nebula was studied using a deep exposure from over 12 years of H.E.S.S. I operation, together with data from H.E.S.S. II that improve the low-energy sensitivity. Enhanced energy-dependent morphological and spatially resolved spectral analyses probe the very high energy (VHE, E > 0.1 TeV) gamma-ray properties of the nebula. Results. The nebula emission is revealed to extend out to 1.5° from the pulsar, ~1.5 times farther than previously seen, making HESS J1825-137, with an intrinsic diameter of ~100 pc, potentially the largest gamma-ray PWN currently known. Characterising the strongly energy-dependent morphology of the nebula enables us to constrain the particle transport mechanisms. A dependence of the nebula extent on energy, R ∝ E^α with α = −0.29 ± 0.04 (stat) ± 0.05 (sys), disfavours a pure diffusion scenario for particle transport within the nebula. The total gamma-ray flux of the nebula above 1 TeV is found to be (1.12 ± 0.03 (stat) ± 0.25 (sys)) × 10⁻¹¹ cm⁻² s⁻¹, corresponding to ~64% of the flux of the Crab nebula. Conclusions. HESS J1825-137 is a PWN with clearly energy-dependent morphology at VHE gamma-ray energies. This source is used as a laboratory to investigate particle transport within intermediate-age PWNe. Based on deep observations of this highly spatially extended PWN, we produce a spectral map of the region that provides insights into the spectral variation within the nebula.
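The energy-dependent extent quoted above comes from fitting a power law R ∝ E^α. A minimal sketch of such a fit as a linear regression in log-log space, on synthetic noiseless data generated with α = −0.29 (illustrative values, not the published H.E.S.S. measurements):

```python
import math

# Power-law fit R = R0 * E**alpha via least squares in log-log space.
# Synthetic, noiseless data generated with alpha = -0.29 for illustration;
# these are not the published H.E.S.S. data points.
E = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0]        # energies (TeV)
R = [1.0 * e ** -0.29 for e in E]          # nebula extent (degrees)

x = [math.log(e) for e in E]
y = [math.log(r) for r in R]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # fitted power-law index
```

Taking logarithms turns the power law into a straight line, so the index is simply the slope of the regression.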
Groundwater travel time distributions (TTDs) provide a robust description of the subsurface mixing behavior and hydrological response of a subsurface system. Lagrangian particle tracking is often used to derive groundwater TTDs. The reliability of this approach is subject to the uncertainty of external forcings, internal hydraulic properties, and the interplay between them. Here, we evaluate the uncertainty of catchment groundwater TTDs in an agricultural catchment using a 3-D groundwater model, with an overall focus on revealing the relationship between external forcing, internal hydraulic properties, and TTD predictions. Eight recharge realizations are sampled from a high-resolution dataset of land surface fluxes and states. Calibration-constrained hydraulic conductivity fields (Ks fields) are stochastically generated using the null-space Monte Carlo (NSMC) method for each recharge realization. The random walk particle tracking (RWPT) method is used to track the pathways of particles and compute travel times. Moreover, an analytical model under the random sampling (RS) assumption is fit against the numerical solutions, serving as a reference for the mixing behavior of the model domain. The StorAge Selection (SAS) function is used to interpret the results in terms of quantifying the systematic preference for discharging young/old water. The simulation results reveal the primary effect of recharge on the predicted mean travel time (MTT). The different realizations of calibration-constrained Ks fields moderately magnify or attenuate the predicted MTTs. The analytical model does not properly replicate the numerical solution, and it underestimates the mean travel time. Simulated SAS functions indicate an overall preference for young water for all realizations. The spatial pattern of recharge controls the shape and breadth of simulated TTDs and SAS functions by changing the spatial distribution of particles' pathways.
In conclusion, overlooking the spatial nonuniformity and uncertainty of the input (forcing) will result in biased travel time predictions. We also highlight the value of reliable observations in reducing predictive uncertainty, and the good interpretability of SAS functions for understanding catchment transport processes.
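The random walk particle tracking step at the heart of the method above can be sketched in one dimension: each particle drifts with the mean velocity, receives a Gaussian dispersion increment, and its crossing time at the outlet contributes to the TTD. All parameter values below are illustrative assumptions, not taken from the study:

```python
import random, statistics

# One-dimensional random walk particle tracking (RWPT): advection with
# velocity v plus a Gaussian dispersion step with coefficient D, until the
# particle crosses the outlet at x = L. All values are illustrative.
random.seed(1)
v, D, dt, L = 1.0, 0.1, 0.01, 10.0   # velocity, dispersion, time step, length

def travel_time():
    """Time for one particle to travel from x = 0 to the outlet at x = L."""
    x, t = 0.0, 0.0
    while x < L:
        x += v * dt + (2.0 * D * dt) ** 0.5 * random.gauss(0.0, 1.0)
        t += dt
    return t

ttd = [travel_time() for _ in range(2000)]   # sampled travel time distribution
mtt = statistics.mean(ttd)                   # mean travel time, close to L / v
```

In a 3-D model the same update is applied per coordinate with spatially variable velocity and dispersion fields; here the drift-dominated setup gives a mean travel time near L/v.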
We study the requirement on the jet power in the conventional p-gamma models (photopion production and Bethe-Heitler pair production) for TeV BL Lac objects. We select a sample of TeV BL Lac objects whose spectral energy distributions are difficult to explain with the one-zone leptonic model. Based on the relation between the p-gamma interaction efficiency and the opacity of γγ absorption, we find that the detection of TeV emission poses upper limits on the p-gamma interaction efficiencies in these sources, and hence minimum jet powers can be derived accordingly. We find that the obtained minimum jet powers exceed the Eddington luminosity of the supermassive black holes (SMBHs). Implications for the accretion mode of the SMBHs in these BL Lac objects and for the origin of their TeV emission are discussed.
Permafrost warming has the potential to amplify global climate change, because when frozen sediments thaw they unlock soil organic carbon. Yet to date, no globally consistent assessment of permafrost temperature change has been compiled. Here we use a global data set of permafrost temperature time series from the Global Terrestrial Network for Permafrost to evaluate temperature change across permafrost regions for the period since the International Polar Year (2007-2009). During the reference decade between 2007 and 2016, ground temperature near the depth of zero annual amplitude in the continuous permafrost zone increased by 0.39 ± 0.15 °C. Over the same period, discontinuous permafrost warmed by 0.20 ± 0.10 °C. Permafrost in mountains warmed by 0.19 ± 0.05 °C and in Antarctica by 0.37 ± 0.10 °C. Globally, permafrost temperature increased by 0.29 ± 0.12 °C. The observed trend follows the Arctic amplification of air temperature increase in the Northern Hemisphere. In the discontinuous zone, however, ground warming occurred due to increased snow thickness while air temperature remained statistically unchanged.
Various studies have implied the existence of a gaseous halo around the Galaxy extending out to ~100 kpc. Galactic cosmic rays (CRs) that propagate to the halo, either by diffusion or by convection with a possibly existing large-scale Galactic wind, can interact with the gas therein and produce gamma-rays via proton-proton collisions. We calculate the CR distribution in the halo and the resulting gamma-ray flux, and explore the dependence of the result on model parameters such as the diffusion coefficient, CR luminosity, and CR spectral index. We find that the current measurement of the isotropic gamma-ray background (IGRB) at ≲ TeV energies with the Fermi Large Area Telescope already approaches a level that can provide interesting constraints on the properties of Galactic CRs (e.g., a CR luminosity L_CR ≤ 10⁴¹ erg s⁻¹). We also discuss the possibilities of the Fermi bubbles and IceCube neutrinos originating from proton-proton collisions between CRs and gas in the halo, as well as the implication of our results for the baryon budget of the hot circumgalactic medium of our Galaxy. Given that the isotropic gamma-ray background is likely to be dominated by unresolved extragalactic sources, future telescopes may extract more individual sources from the IGRB, and hence put even more stringent restrictions on the relevant quantities (such as the Galactic CR luminosity and the baryon budget of the halo) in the presence of the turbulent halo that we consider.
Many solar wind observations at 1 au indicate that the proton (as well as electron) temperature anisotropy is limited. The data distribution in the (A_a, β_a,∥) plane has a rhombic-shaped form around β_a,∥ ~ 1. The boundaries of the temperature anisotropy at β_a,∥ > 1 can be well explained by the threshold conditions of the mirror (whistler) and oblique proton (electron) firehose instabilities in a bi-Maxwellian plasma, whereas the physical mechanism of the similar restriction at β_a,∥ < 1 is still under debate. One possible option is Coulomb collisions, which we revisit in the current work. We derive the relaxation rate ν_A^aa of the temperature anisotropy in a bi-Maxwellian plasma, which we then study analytically and by means of observed proton data from WIND. We found that ν_A^pp increases toward small β_p,∥ < 1. We matched the data distribution in the (A_p, β_p,∥) plane with the constant contour ν_A^pp = 2.8 × 10⁻⁶ s⁻¹, corresponding to the minimum value for collisions to play a role. This contour fits rather well the left boundary of the rhombic-shaped data distribution in the (A_p, β_p,∥) plane. Thus, Coulomb collisions are an interesting candidate for explaining the limitations of the temperature anisotropy in the solar wind with small β_a,∥ < 1 at 1 au.
Estimating parameters from multiple time series of population dynamics using Bayesian inference
(2019)
Empirical time series of interacting entities, e.g., species abundances, are highly useful to study ecological mechanisms. Mathematical models are valuable tools to further elucidate those mechanisms and underlying processes. However, obtaining an agreement between model predictions and experimental observations remains a demanding task. As models always abstract from reality, one parameter often summarizes several properties. Parameter measurements are performed in additional experiments, independent of the ones delivering the time series. Transferring these parameter values to different settings may result in incorrect parametrizations. On top of that, the properties of organisms and thus the respective parameter values may vary considerably. These issues limit the use of a priori model parametrizations. In this study, we present a method suited for a direct estimation of model parameters and their variability from experimental time series data. We combine numerical simulations of a continuous-time dynamical population model with Bayesian inference, using a hierarchical framework that allows for variability of individual parameters. The method is applied to a comprehensive set of time series from a laboratory predator-prey system that features both steady states and cyclic population dynamics. Our model predictions are able to reproduce both the steady states and the cyclic dynamics of the data. In addition to the direct estimates of the parameter values, the Bayesian approach also provides their uncertainties. We found that fitting cyclic population dynamics, which contain more information on the process rates than steady states, yields more precise parameter estimates. We detected significant variability among parameters of different time series and identified the variation in the maximum growth rate of the prey as a source for the transition from steady states to cyclic dynamics.
By lending more flexibility to the model, our approach facilitates parametrizations and shows more easily which patterns in time series can be explained also by simple models. Applying Bayesian inference and dynamical population models in conjunction may help to quantify the profound variability in organismal properties in nature.
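As a toy illustration of estimating a population model parameter from a time series with Bayesian inference, the sketch below computes a grid posterior over the growth rate of a logistic model; the model choice, noise level, and grid are assumptions for illustration and are far simpler than the hierarchical framework used in the study:

```python
import math, random

# Toy Bayesian parameter estimation: grid posterior over the growth rate r
# of a logistic model dN/dt = r * N * (1 - N / K), fitted to one noisy
# synthetic time series. The model, noise level, and grid are illustrative
# assumptions, far simpler than a hierarchical framework.
random.seed(0)
K, r_true, sigma = 100.0, 0.8, 3.0

def simulate(r, n0=5.0, dt=0.1, steps=100, every=10):
    """Forward-Euler logistic trajectory, sampled every `every` steps."""
    n, out = n0, []
    for i in range(steps):
        n += dt * r * n * (1.0 - n / K)
        if (i + 1) % every == 0:
            out.append(n)
    return out

data = [n + random.gauss(0.0, sigma) for n in simulate(r_true)]

grid = [0.4 + 0.01 * i for i in range(81)]   # candidate r values, 0.4 to 1.2
loglik = [sum(-(d - m) ** 2 / (2.0 * sigma ** 2)
              for d, m in zip(data, simulate(r))) for r in grid]
peak = max(loglik)
posterior = [math.exp(v - peak) for v in loglik]    # flat prior over the grid
total = sum(posterior)
posterior = [p / total for p in posterior]
r_map = grid[loglik.index(peak)]                    # posterior mode
```

The normalized posterior directly delivers the parameter estimate and its uncertainty (the width of the posterior), which is the core benefit of the Bayesian approach highlighted in the abstract.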
Being at the western fringe of Europe, Iberia had a peculiar prehistory and a complex pattern of Neolithization. A few studies, all based on modern populations, reported the presence of DNA of likely African origin in this region, generally concluding it was the result of recent gene flow, probably during the Islamic period. Here, we provide evidence of much older gene flow from Africa to Iberia by sequencing whole genomes from four human remains from northern Portugal and southern Spain dated around 4000 years BP (from the Middle Neolithic to the Bronze Age). We found one of them to carry an unequivocal sub-Saharan mitogenome of most probably West or West-Central African origin, to our knowledge never reported before in prehistoric remains outside Africa. Our analyses of ancient nuclear genomes show small but significant levels of sub-Saharan African affinity in several ancient Iberian samples, which indicates that what we detected was not an occasional individual phenomenon, but an admixture event recognizable at the population level. We interpret this result as evidence of an early migration process from Africa into the Iberian Peninsula through a western route, possibly across the Strait of Gibraltar.
Hantavirus assembly and budding are governed by the surface glycoproteins Gn and Gc. In this study, we investigated the glycoproteins of Puumala virus, the most abundant hantavirus species in Europe, using fluorescently labeled wild-type constructs and cytoplasmic tail (CT) mutants. We analyzed their intracellular distribution, co-localization and oligomerization, applying comprehensive live, single-cell fluorescence techniques, including confocal microscopy, imaging flow cytometry, anisotropy imaging and Number & Brightness analysis. We demonstrate that Gc is significantly enriched in the Golgi apparatus in the absence of other viral components, while Gn is mainly restricted to the endoplasmic reticulum (ER). Importantly, upon co-expression both glycoproteins were found in the Golgi apparatus. Furthermore, we show that an intact CT of Gc is necessary for efficient Golgi localization, while the CT of Gn influences protein stability. Finally, we found that Gn assembles into higher-order homo-oligomers, mainly dimers and tetramers, in the ER, while Gc was present as a mixture of monomers and dimers within the Golgi apparatus. Our findings suggest that PUUV Gc is the driving factor of the targeting of Gc and Gn to the Golgi region, while Gn possesses a significantly stronger self-association potential.
For the layered transition metal dichalcogenide 1T-TaS2, we establish through a unique experimental approach and density functional theory how ultrafast charge transfer in 1T-TaS2 takes on isotropic three-dimensional character or anisotropic two-dimensional character, depending on the commensurability of the charge density wave phases of 1T-TaS2. The X-ray spectroscopic core-hole-clock method selectively prepares in- and out-of-plane polarized sulfur 3p orbital occupation with respect to the 1T-TaS2 planes and monitors sub-femtosecond wave packet delocalization. Despite being a prototypical two-dimensional material, isotropic three-dimensional charge transfer is found in the commensurate charge density wave (CCDW) phase, indicating strong coupling between layers. In contrast, anisotropic two-dimensional charge transfer occurs for the nearly commensurate (NCDW) phase. In direct comparison, theory shows that interlayer interaction in the CCDW phase, not layer stacking variations, causes isotropic three-dimensional charge transfer. This is presumably a general mechanism for phase transitions and tailored properties of dichalcogenides with charge density waves.
Electromagnetic ion cyclotron waves have long been recognized to play a crucial role in the dynamic loss of ring current protons. While the field-aligned propagation approximation for electromagnetic ion cyclotron waves has been widely used to quantify the scattering loss of ring current protons, in this study we find that the wave normal distribution strongly affects the pitch angle scattering efficiency of protons. Increasing the peak normal angle or the angular width can considerably reduce the scattering rates of ≤10 keV protons. For >10 keV protons, the field-aligned propagation approximation results in a pronounced underestimate of the scattering of protons at intermediate equatorial pitch angles and overestimates the scattering of protons at high equatorial pitch angles by orders of magnitude. Our results suggest that the wave normal distribution of electromagnetic ion cyclotron waves plays an important role in the pitch angle evolution and scattering loss of ring current protons and should be incorporated in future global modeling of ring current dynamics.
In active mountain belts with steep terrain, bedrock landsliding is a major erosional agent. In the Himalayas, landsliding is driven by annual hydro-meteorological forcing due to the summer monsoon and by rarer, exceptional events, such as earthquakes. Independent methods yield erosion rate estimates that appear to increase with sampling time, suggesting that rare, high-magnitude erosion events dominate the erosional budget. Nevertheless, until now, neither the contribution of monsoon and earthquakes to landslide erosion nor the proportion of erosion due to rare, giant landslides have been quantified in the Himalayas. We address these challenges by combining and analysing earthquake- and monsoon-induced landslide inventories across different timescales. With time series of 5 m satellite images over four main valleys in central Nepal, we comprehensively mapped landslides caused by the monsoon from 2010 to 2018. We found no clear correlation between monsoon properties and landsliding, and a similar mean landsliding rate for all valleys, except in 2015, when the valleys affected by the earthquake featured ~5-8 times more landsliding than the pre-earthquake mean rate. The long-term size-frequency distribution of monsoon-induced landsliding (MIL) was derived from these inventories and from an inventory of landslides larger than ~0.1 km² that occurred between 1972 and 2014. Using a published landslide inventory for the Gorkha 2015 earthquake, we derive the size-frequency distribution for earthquake-induced landsliding (EQIL). These two distributions are dominated by infrequent, large and giant landslides but under-predict an estimated Holocene frequency of giant landslides (>1 km³), which we derived from a literature compilation. This discrepancy can be resolved when modelling the effect of a full distribution of earthquakes of variable magnitude and when considering that a shallower earthquake may cause larger landslides.
In this case, EQIL and MIL contribute about equally to a total long-term erosion of ~2 ± 0.75 mm yr⁻¹, in agreement with most thermo-chronological data. Independently of the specific total and relative erosion rates, the heavy-tailed size-frequency distribution of MIL and EQIL and the very large maximal landslide size in the Himalayas indicate that mean landslide erosion rates increase with sampling time, as has been observed for independent erosion estimates. Further, we find that the sampling timescale required to adequately capture the frequency of the largest landslides, which is necessary for deriving long-term mean erosion rates, is often much longer than the averaging time of cosmogenic ¹⁰Be methods. This observation presents a strong caveat when interpreting spatial or temporal variability in erosion rates from this method. Thus, in areas where a very large, rare landslide contributes heavily to long-term erosion (as in the Himalayas), we recommend ¹⁰Be sampling in catchments with source areas >10 000 km² to reduce the method's mean bias to below ~20% of the long-term erosion.
Electrospray ionization-ion mobility spectrometry was employed for the determination of collision cross sections (CCS) of 25 synthetically produced peptides in the mass range of 540-3310 Da. The experimental measurement of the CCS is complemented by their calculation applying two different methods. One prediction method is the intrinsic size parameter (ISP) method developed by the Clemmer group. The second, new method is based on the evaluation of molecular dynamics (MD) simulation trajectories as a whole, resulting in a single, averaged collision cross-section value for a given peptide in the gas phase. A high-temperature MD simulation is run in order to scan through the whole conformational space. The lower-temperature conformational distribution is obtained through thermodynamic reweighting. In the first part, various correlations, e.g. CCS vs. mass and inverse mobility vs. m/z correlations, are presented. Differences in CCS between peptides are also discussed in terms of their respective mass and m/z differences, as well as their respective structures. In the second part, measured and calculated CCS are compared. The agreement between the prediction results and the experimental values is in the same range for both calculation methods. While the calculation effort of the ISP method is much lower, the MD method comprises several tools providing deeper insights into the conformations of peptides. Advantages and limitations of both methods are discussed. Based on the separation of two pairs of linear and cyclic peptides of virtually the same mass, the influence of the structure on the cross sections is discussed. The shift in cross section differences and peak shape after transition from the linear to the cyclic peptide can be well understood by applying different MD tools, e.g. the root-mean-square deviation (RMSD) and the root mean square fluctuation (RMSF).
Radiation therapy is a basic part of cancer treatment. To increase the DNA damage in carcinogenic cells while preserving healthy tissue, radiosensitizing molecules such as halogenated nucleobase analogs can be incorporated into the DNA during the cell reproduction cycle. In the present study, single strand breaks (SSBs) induced by 8.44 eV photon irradiation are investigated in DNA sequences modified with the radiosensitizers 5-bromouracil (5-BrU) and 8-bromoadenine (8-BrA). 5-BrU was incorporated in a 13mer oligonucleotide flanked by different nucleobases. It was demonstrated that the highest SSB cross sections were reached when cytosine and thymine were adjacent to 5-BrU, whereas guanine as a neighboring nucleobase decreases the activity of 5-BrU, indicating that competing reaction mechanisms are active. This was further investigated with respect to the distance of guanine to 5-BrU, separated by an increasing number of adenine nucleotides. It was observed that the SSB cross sections decreased with an increasing number of adenine spacers between guanine and 5-BrU until they almost reached the level of a non-modified DNA sequence, which demonstrates the high sequence dependence of the sensitizing effect of 5-BrU. 8-BrA was incorporated in a 13mer oligonucleotide as well, and the strand breaks were quantified upon 8.44 eV photon irradiation in direct comparison to a non-modified DNA sequence of the same composition. No clear enhancement of the SSB yield of the modified in comparison to the non-modified DNA sequence could be observed. Additionally, secondary electrons with a maximum energy of 3.6 eV were generated when using Si as a substrate, giving rise to further DNA damage. A clear enhancement in the SSB yield can be ascertained, but to the same degree for both the non-modified DNA sequence and the DNA sequence modified with 8-BrA.
Arctic lowlands are characterized by large numbers of small waterbodies, which are known to affect surface energy budgets and the global carbon cycle. Statistical analysis of their size distributions has been hindered by the shortage of observations at sufficiently high spatial resolutions. This situation has now changed with the high-resolution (<5 m) circum-Arctic Permafrost Region Pond and Lake (PeRL) database recently becoming available. We have used this database to make the first consistent, high-resolution estimation of Arctic waterbody size distributions, with surface areas ranging from 0.0001 km² (100 m²) to 1 km². We found that the size distributions varied greatly across the thirty study regions investigated and that there was no single universal size distribution function (including power-law distribution functions) appropriate across all of the study regions. We did, however, find close relationships between the statistical moments (mean, variance, and skewness) of the waterbody size distributions from different study regions. Specifically, we found that the spatial variance increased linearly with mean waterbody size (R² = 0.97, p < 2.2 × 10⁻¹⁶) and that the skewness decreased approximately hyperbolically. We have demonstrated that these relationships (1) hold across the 30 Arctic study regions covering a variety of (bio)climatic and permafrost zones, (2) hold over time in two of these study regions for which multi-decadal satellite imagery is available, and (3) can be reproduced by simulating rising water levels in a high-resolution digital elevation model. The consistent spatial and temporal relationships between the statistical moments of the waterbody size distributions underscore the dominance of topographic controls in lowland permafrost areas.
These results provide motivation for further analyses of the factors involved in waterbody development and spatial distribution and for investigations into the possibility of using statistical moments to predict future hydrologic dynamics in the Arctic.
Shear-waves are the most energetic body-waves radiated from an earthquake, and are responsible for the destruction of engineered structures. In both short-term emergency response and long-term risk forecasting of a disaster-resilient built environment, it is critical to predict the spatially accurate distribution of shear-wave amplitudes. Although decades-old theory proposes a deterministic, highly anisotropic, four-lobed shear-wave radiation pattern, for lack of convincing evidence most empirical ground-shaking prediction models settled for an oversimplified stochastic radiation pattern that is isotropic on average. Today, using large datasets of uniformly processed seismograms from several strike-, normal-, reverse-, and oblique-slip earthquakes across the globe, compiled specifically for engineering applications, we were able to reveal, quantify, and calibrate the frequency-, distance-, and style-of-faulting-dependent transition of shear-wave radiation between a stochastic-isotropic and a deterministic-anisotropic phenomenon. The consequent recalibration of empirical ground-shaking models dramatically improved their predictions: with isodistant anisotropic variations of ±40% and an 8% reduction in uncertainty. The outcomes presented here can potentially trigger a reappraisal of several practical issues in engineering seismology, particularly in seismic ground-shaking studies and seismic hazard and risk assessment.
The particle-in-cell (PIC) method was developed to investigate microscopic phenomena, and with the advances in computing power, newly developed codes have been used for several fields, such as astrophysical, magnetospheric, and solar plasmas. PIC applications have grown extensively, with large computing powers available on supercomputers such as Pleiades and Blue Waters in the US. For astrophysical plasma research, PIC methods have been utilized for several topics, such as reconnection, pulsar dynamics, non-relativistic shocks, relativistic shocks, and relativistic jets. PIC simulations of relativistic jets have been reviewed with emphasis placed on the physics involved in the simulations. This review summarizes PIC simulations, starting with the Weibel instability in slab models of jets, and then focuses on global jet evolution in helical magnetic field geometry. In particular, we address kinetic Kelvin-Helmholtz instabilities and mushroom instabilities.
A molecular dynamics study was performed to reveal the adsorption properties of sodium dioctyl sulfosuccinate (AOT) bilayers on gold Au(111) surfaces. Examining the rotational mobility of AOT molecules, we find that the rotational correlation time of AOT molecules in the adsorbed layer is much higher. The data on the diffusive motion of the AOT molecules show a substantially lower rate of diffusion (~10⁻¹⁰ cm²/s) in the adsorbed layer in comparison to the other layers. The results show that the adsorbed layer is more rigid, whereas the outer layers undergo considerable lateral and vertical fluctuations.
Time-dependent processes are often analyzed using the power spectral density (PSD), calculated by taking an appropriate Fourier transform of individual trajectories and finding the associated ensemble average. Frequently, the available experimental datasets are too small for such ensemble averages, and hence it is of great conceptual and practical importance to understand to which extent relevant information can be gained from S(f, T), the PSD of a single trajectory. Here we focus on the behavior of this random, realization-dependent variable, parametrized by frequency f and observation time T, for a broad family of anomalous diffusions, fractional Brownian motion with Hurst index H, and derive exactly its probability density function. We show that S(f, T) is proportional, up to a random numerical factor whose universal distribution we determine, to the ensemble-averaged PSD. For subdiffusion (H < 1/2), we find that S(f, T) ~ A/f^(2H+1) with random amplitude A. In sharp contrast, for superdiffusion (H > 1/2), S(f, T) ~ B T^(2H-1)/f² with random amplitude B. Remarkably, for H > 1/2 the PSD exhibits the same frequency dependence as Brownian motion, a deceptive property that may lead to false conclusions when interpreting experimental data. Notably, for H > 1/2 the PSD is ageing and depends on T. Our predictions for both sub- and superdiffusion are confirmed by experiments in live cells and in agarose hydrogels, and by extensive simulations.
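The scaling laws quoted above can be checked numerically. The sketch below is an illustrative exercise, not the authors' code: it generates fractional Brownian motion exactly by Cholesky factorization of the fractional-Gaussian-noise covariance (a standard method) and fits the log-log slope of the ensemble-averaged periodogram over a low-frequency band.

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance of fractional Gaussian noise with Hurst index H."""
    k = np.arange(n, dtype=float)
    return 0.5 * ((k + 1)**(2 * H) - 2 * k**(2 * H) + np.abs(k - 1)**(2 * H))

def psd_slope(H, n=512, n_traj=200, seed=1):
    """Log-log slope of the ensemble-averaged periodogram of fBm trajectories."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    C = fgn_cov(n, H)[np.abs(idx[:, None] - idx[None, :])]   # Toeplitz covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))            # exact fGn sampling
    n_f = n // 8                                             # keep low frequencies only
    psd = np.zeros(n_f)
    for _ in range(n_traj):
        x = np.cumsum(L @ rng.standard_normal(n))            # one fBm trajectory
        psd += np.abs(np.fft.rfft(x)[1:n_f + 1])**2 / n      # periodogram estimate of S(f, T)
    f = np.arange(1, n_f + 1) / n
    return np.polyfit(np.log(f), np.log(psd / n_traj), 1)[0]
```

For subdiffusion (e.g. H = 0.25) the fitted slope sits near -(2H+1) = -1.5, while for superdiffusion (e.g. H = 0.75) it stays near -2 rather than -2.5, illustrating the deceptive Brownian-like 1/f² dependence described in the abstract.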
Gene flow is an important factor determining the evolution of a species, since it directly affects population structure and species’ adaptation. Here, we investigated population structure, population history, and migration among populations covering the entire distribution of the geographically isolated South-West European common lizard (Zootoca vivipara louislantzi) using 34 newly developed polymorphic microsatellite markers. The analyses unravelled the presence of isolation by distance, inbreeding, recent bottlenecks, genetic differentiation, and low levels of migration among most populations, suggesting that Z. vivipara louislantzi is threatened. The results point to discontinuous populations and are in line with physical barriers hindering longitudinal migration south of the central Pyrenean cordillera and latitudinal migration in the central Pyrenees. In contrast, evidence for longitudinal migration exists from the lowlands north of the central Pyrenean cordillera and the Cantabrian Mountains. The locations of the populations south of the central Pyrenean cordillera were identified as the first to be affected by global warming; thus, management actions aimed at avoiding population declines should start in this area.
We report on Cosmic Origins Spectrograph observations of the gamma-ray bright blazar B2 1215+30, collected in 2015 November. These observations allow for the confirmation of the source redshift from the detection of a Lyα emission feature at λ ~ 1374 Å. The emission feature places the source at a redshift of z = 0.1305 ± 0.003, confirming the source's ground-based spectral measurement. The gamma-ray emission of the source is discussed in the context of the source distance, required for the accurate reconstruction of the intrinsic gamma-ray emission taking the absorption by the extragalactic background light into account. The source distance is found to be low enough that the previously reported detection of an exceptional flaring event from B2 1215+30 in 2014 cannot be used to investigate opacity-specific spectral and variability characteristics introduced by possible ultra-high-energy cosmic-ray propagation.
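As a quick plausibility check (my arithmetic, not the authors'), the quoted redshift follows directly from the rest-frame Lyα wavelength of 1215.67 Å:

```python
LYA_REST = 1215.67        # rest-frame Lyman-alpha wavelength, Angstroms
lam_obs = 1374.0          # observed wavelength from the abstract (approximate)

# Redshift from the shift of the emission line
z = lam_obs / LYA_REST - 1
print(f"z = {z:.4f}")     # consistent with the reported z = 0.1305 +/- 0.003
```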
Spectral broadening in hollow-core fibers is an important tool for pulse compression of low-peak power laser pulses, especially for Yb-based lasers. Here, we present a pulse compression scheme to reduce the pulse duration of a commercial Yb:KGW laser operating at 100 kHz repetition rate and 40 μJ pulse energy from 390 to 38 fs. The spectral broadening is accomplished using a krypton-filled Kagome-type fiber. We report broadened spectra for variable Kr pressures and input powers. At optimal settings of 8 bar Kr pressure and 3.3 W input power, the bandwidth of the pulse at the -10 dB level increased from 9.5 to 85 nm corresponding to a Fourier limit of 26 fs. A simple SF10 prism compressor is used to reduce the accumulated chirp and shortens the fiber output from about 500 to 38 fs. In addition to the spectral broadening, a pressure dependent change of the polarization is observed.
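The relation between bandwidth and Fourier limit can be sanity-checked with the Gaussian time-bandwidth product. This is only an order-of-magnitude estimate under assumptions of mine (Gaussian pulse shape, 1030 nm center wavelength typical of Yb:KGW, and treating the quoted -10 dB width as if it were a FWHM):

```python
C = 2.998e8        # speed of light, m/s
lam0 = 1030e-9     # assumed Yb:KGW center wavelength, m
dlam = 85e-9       # broadened bandwidth from the abstract (-10 dB width), m

dnu = C * dlam / lam0**2      # bandwidth converted to frequency, Hz
dt = 0.441 / dnu              # Gaussian transform-limited duration, s
print(f"{dt * 1e15:.0f} fs")
```

This yields roughly 18 fs, the same order as the quoted 26 fs; the exact limit depends on the actual spectral shape, for which the Gaussian product applied to a -10 dB width is only a rough proxy.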
Inducible defences against predation are widespread in the natural world, allowing prey to economise on the costs of defence when predation risk varies over time or is spatially structured. Through interspecific interactions, inducible defences have major impacts on ecological dynamics, particularly predator-prey stability and phase lag. Researchers have developed multiple distinct approaches, each reflecting assumptions appropriate for particular ecological communities. Yet, the impact of inducible defences on ecological dynamics can be highly sensitive to the modelling approach used, making the choice of model a critical decision that affects interpretation of the dynamical consequences of inducible defences. Here, we review three existing approaches to modelling inducible defences: Switching Function, Fitness Gradient and Optimal Trait. We assess when and how the dynamical outcomes of these approaches differ from each other, from classic predator-prey dynamics and from commonly observed eco-evolutionary dynamics with evolving, but non-inducible, prey defences. We point out that Switching Function models tend to stabilise population dynamics, and that Fitness Gradient models should be used with care, as their differences from explicit evolutionary dynamics can be important. We discuss advantages of each approach for applications to ecological systems with particular features, with the goal of providing guidelines for future researchers to build on.
We summarize the current state of observations of circumplanetary dust populations, including both dilute and dense rings and tori around the giant planets, ejecta clouds engulfing airless moons, and rings around smaller planetary bodies throughout the Solar System. We also discuss the theoretical models that enable these observations to be understood in terms of the sources, sinks and transport of various dust populations. The dynamics and resulting transport of the particles can be quite complex, due to the fact that their motion is influenced by neutral and plasma drag, radiation pressure, and electromagnetic forces, all in addition to gravity. The relative importance of these forces depends on the environment, as well as the makeup and size of the particles. Possible dust sources include the generation of ejecta particles by impacts, active volcanoes and geysers, and the capture of exogenous particles. Possible dust sinks include collisions with moons, rings, or the central planet, erosion due to sublimation and sputtering, and even ejection and escape from the circumplanetary environment.
There is now consensus that engaging in innovative work behaviors is not restricted to traditional innovation jobs (e.g., research and development): such behaviors can be performed on a discretionary basis in most of today’s jobs. To date, our knowledge on the role of workplace stressors for discretionary innovative behavior, in particular for innovation implementation, is limited. We draw on a cybernetic view as well as on a transactional, coping-based perspective on stress to propose differential effects of stressors on innovation implementation. We propose that work demands have a positive effect on innovation implementation, whereas role-based stressors (i.e., role conflict, role ambiguity, and professional compromise) have a negative effect. We conducted a time-lagged, survey-based study in the health care sector (Study 1, United Kingdom: N = 235 nurses). Innovation implementation was measured 2 years after the assessment of the stressors. Supporting our hypotheses, work demands were positively related to subsequent innovation implementation, whereas role ambiguity and professional compromise were negatively related to subsequent innovation implementation. We also tested organizational commitment as a mediator, but there was only partial support for the mediation. To test the generalizability of the findings, we replicated the study (Study 2, Germany: employees from various professions, N = 138, time lag 2 weeks). Similar results to those in Study 1 were obtained. There was no support for strain as a mediator. Our results suggest differential effects of work demands and role stressors on innovation implementation, for which the underlying mechanism still needs to be uncovered.
Adjustment of median ground motion prediction equations (GMPEs) from one region to another is one of the major challenges within the current practice of seismic hazard analysis. In our approach to generating response spectra, we derive two separate empirical models for (a) the Fourier amplitude spectrum (FAS) and (b) the duration of ground motion. To calculate response spectra, the two models are combined within the random vibration theory (RVT) framework. The models are calibrated on recordings obtained from shallow crustal earthquakes in active tectonic regions. We use a subset of the NGA-West2 database with M3.2-7.9 earthquakes at distances of 0-300 km. The NGA-West2 database, expanded over a wide magnitude and distance range, facilitates a better constraint on the derived models. A frequency-dependent duration model is derived to obtain adjustable response spectral ordinates. Excellent comparison of our approach with other NGA-West2 models implies that it can also be used as a stand-alone model.
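The FAS-plus-duration route to response spectra can be sketched in code. The following is a generic, simplified RVT recipe with a toy omega-squared source spectrum; it is not the paper's calibrated model, and all parameter values (`fc`, `kappa`, damping, duration) are illustrative assumptions:

```python
import numpy as np

def toy_fas(f, fc=1.0, kappa=0.04):
    """Toy omega-squared acceleration FAS with kappa attenuation (illustrative only)."""
    return f**2 / (1.0 + (f / fc)**2) * np.exp(-np.pi * kappa * f)

def rvt_psa(f0, zeta=0.05, duration=10.0):
    """Pseudo-spectral acceleration of a damped SDOF oscillator via RVT."""
    f = np.linspace(0.01, 50.0, 20000)
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    dw = w[1] - w[0]
    # squared displacement transfer function of the oscillator
    H2 = 1.0 / ((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)
    P = H2 * toy_fas(f)**2                     # response power spectrum (up to a constant)
    m0, m2, m4 = (2 * np.sum(w**k * P) * dw for k in (0, 2, 4))   # spectral moments
    rms = np.sqrt(m0 / duration)               # root-mean-square oscillator response
    n_ext = max(duration * np.sqrt(m4 / m2) / (2 * np.pi), 2.0)   # expected number of extrema
    ln2n = 2 * np.log(n_ext)
    peak_factor = np.sqrt(ln2n) + 0.5772 / np.sqrt(ln2n)          # asymptotic peak factor
    return w0**2 * peak_factor * rms           # PSA = omega0^2 * peak displacement
```

The design choice mirrored here is the one the abstract exploits: because the FAS and the duration enter as separate ingredients, either can be adjusted to a target region before the response spectrum is recomputed.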
Quantifying erosion rates, and how they compare to rock uplift rates, is fundamental for understanding landscape response to tectonics and associated sediment fluxes from upland areas. The erosional response to uplift is well-represented by river incision and the associated landslide activity. However, characterising the relationship between these processes remains a major challenge in tectonically active areas, in some cases because landslides can preclude obtaining reliable erosion rates from cosmogenic radionuclide (CRN) concentrations. Here, we quantify the control of tectonics and its coupled geomorphic response on the erosion rates of catchments in southern Italy that are experiencing a transient response to normal faulting. We analyse in-situ Be-10 concentrations for detrital sediment samples, collected along the strike of faults with excellent tectonic constraints and landslide inventories. We demonstrate that Be-10-derived erosion rates are controlled by fault throw rates and the extent of transient incision and associated landsliding in the catchments. We show that the low-relief sub-catchments above knickpoints erode at uniform background rates of ~0.10 mm/yr, while downstream of knickpoints, erosion removes ~50% of the rock uplifted by the faults, at rates of 0.10-0.64 mm/yr. Despite widespread landsliding, CRN samples provide relatively consistent and accurate erosion rates, most likely because landslides are frequent, small, and shallow, and represent the integrated record of landsliding over several seismic cycles. Consequently, we combine these validated Be-10 erosion rates and data from a geomorphological landslide inventory in a published numerical model, to gain further insight into the long-term landslide rates and sediment mixing, highlighting the potential of CRN data to study landslide dynamics.
The incorporation of even small amounts of strontium (Sr) into lead-based hybrid quadruple-cation perovskite solar cells results in a systematic increase of the open-circuit voltage (V_oc) in pin-type perovskite solar cells. We demonstrate via absolute and transient photoluminescence (PL) experiments how the incorporation of Sr significantly reduces the non-radiative recombination losses in the neat perovskite layer. We show that Sr segregates at the perovskite surface, where it induces important changes of morphology and energetics. Notably, the Sr-enriched surface exhibits a wider band gap and a more n-type character, accompanied by significantly stronger surface band bending. As a result, we observe a significant increase of the quasi-Fermi level splitting in the neat perovskite through reduced surface recombination and, more importantly, a strong reduction of losses attributed to non-radiative recombination at the interface to the C60 electron-transporting layer. The resulting solar cells exhibited a V_oc of 1.18 V, which could be further improved to nearly 1.23 V through addition of a thin polymer interlayer, reducing the non-radiative voltage loss to only 110 meV. Our work shows that simply adding a small amount of Sr to the precursor solutions induces a beneficial surface modification in the perovskite, without requiring any post treatment, resulting in high efficiency solar cells with power conversion efficiency (PCE) up to 20.3%. Our results demonstrate very high V_oc values and efficiencies in Sr-containing quadruple-cation perovskite pin-type solar cells and highlight the critical importance of addressing and minimizing the recombination losses at the interface between perovskite and charge transporting layer.
Shape memory is the capability of a material to be deformed and fixed into a temporary shape. Recovery of the original shape can then be triggered only by an external stimulus. Shape-memory polymers are highly deformable materials that can be programmed to recover a memorized shape in response to a variety of environmental and spatially localized stimuli as a one-way effect. The shape-memory function can also be generated as a reversible effect enabling actuation behaviour through macroscale deformation and processing, specifically by dictating the macromolecular orientation of actuation units and of the skeleton structure of geometry-determining units in the polymers. Shape-memory polymers can be programmed and reprogrammed into arbitrary shapes. Both recovery and actuation behaviour are reprogrammable. In this Review, we outline the common basis and key differences between the two shape-memory behaviours of polymers in terms of mechanism, fabrication schemes and characterization methods. We discuss which combination of macromolecular architecture and macroscale processing is necessary for coordinated, decentralized and responsive physical behaviour. The extraction of relevant thermomechanical information is described, and design criteria are shown for microscale and macroscale morphologies to gain high levels of recovered or actuation strains as well as on-demand 2D-to-3D shape transformations. Finally, real-world applications and key future challenges are highlighted.
Whereas many cognitive tasks show pronounced aging effects, even in healthy older adults, other tasks seem more resilient to aging. A small number of recent studies suggests that number comparison is possibly one of the abilities that remain unaltered across the life span. We investigated the ability to compare single-digit numbers in young (19-39 years; n = 39) and healthy older (65-79 years; n = 39) adults in considerable detail, analyzing accuracy as well as mean and variance of their response time, together with several other well-established hallmarks of numerical comparison. Using a recent comprehensive process model that parsimoniously accounts quantitatively for many aspects of number comparison (Reike & Schwarz, 2016), we address two fundamental problems in the comparison of older to young adults in numerical comparison tasks: (a) to adequately correct speed measures for different levels of accuracy (older participants were significantly more accurate than young participants), and (b) to distinguish between general sensory and motor slowing on the one hand, as opposed to a specific age-related decline in the efficiency to retrieve and compare numerical magnitude representations. Our results represent strong evidence that healthy older adults compare magnitudes as efficiently as young adults, when the measure of efficiency is uncontaminated by strategic speed-accuracy trade-offs and by sensory and motor stages that are not related to numerical comparison per se. At the same time, older adults aim at a significantly higher accuracy level (risk aversion), which necessarily prolongs processing time, and they also show the well-documented general decline in sensory and/or motor functions.
There is an increasing need for an assessment of the impacts of land use and land use change (LUCC). In this context, simulation models are valuable tools for investigating the impacts of stakeholder actions or policy decisions. Agricultural landscape generators (ALGs), which systematically and automatically generate realistic but simplified representations of land cover in agricultural landscapes, can provide the input for LUCC models. We reviewed existing ALGs in terms of their objectives, design and scope. We found eight ALGs that met our definition. They were based either on generic mathematical algorithms (pattern-based) or on representations of ecological or land use processes (process-based). Most ALGs integrate only a few landscape metrics, which limits the design of the landscape pattern and thus the range of applications. For example, only a few specific farming systems have been implemented. We conclude that existing ALGs contain useful approaches that can be used for specific purposes, but ideally, generic modular ALGs should be developed that can be used for a wide range of scenarios, regions and model types. We have compiled features of such generic ALGs and propose a possible software architecture. Considerable joint efforts are required to develop such generic ALGs, but the benefits in terms of a better understanding and development of more efficient agricultural policies would be high.
Aim There is increasing evidence showing that species within various taxonomic groups have reticulate evolutionary histories with several cases of introgression events. Investigating the phylogeography of species complexes can provide insight into these introgressions, and into when and where these hybridizations occurred. In this study, we investigate the biogeography of a widely distributed Western Palaearctic bat species complex, namely Myotis nattereri sensu lato. This complex exhibits high genetic diversity and in its western distribution range is composed of deeply diverged genetic lineages. However, little is known about the genetic structure of the eastern populations. We also infer the conservation and taxonomic implications of the identified genetic divergences. Taxon Myotis nattereri sensu lato including M. schaubi. Location Western Palaearctic. Methods We analysed 161 specimens collected from 67 locations and sequenced one mitochondrial and four nuclear DNA markers, and combined these with the available GenBank sequences. We used haplotype networks, PCA, t-SNE and Bayesian clustering algorithms to investigate the population structure, and Bayesian trees to infer the phylogenetic relationships of the lineages. Results We identified deeply divergent genetic lineages. In some cases, nuclear and mitochondrial markers were discordant, which we interpret as being caused by hybridization between lineages. We identified three such introgression events. These introgressions occurred when spatially separated lineages came into contact after range expansions. Based on the genetic distinction of the identified lineages, we suggest a revision of the taxonomy of this species group with two possible new species: M. hoveli and M. tschuliensis. Main conclusions Our findings suggest that the M. nattereri complex has a reticulate evolutionary history with multiple cases of hybridization between some of the identified lineages.
We review our current knowledge of comet 67P/Churyumov–Gerasimenko nucleus composition as inferred from measurements made by remote sensing and in-situ instruments aboard the Rosetta orbiter and the Philae lander. Spectrophotometric properties (albedos, color indexes and Hapke parameters) of 67P/CG derived by Rosetta are discussed in the context of other comets previously explored by space missions. Composed of an assemblage of ices, organic materials and minerals, cometary nuclei exhibit very dark and red surfaces which can be described by means of spectrophotometric quantities and reproduced with laboratory measurements. The presence of surface water and carbon dioxide ices was found by Rosetta to occur at localized sites where activity driven by solar input, gaseous condensation or exposure of pristine inner layers can maintain these species on the surface. Apart from these specific areas, 67P/CG’s surface appears remarkably uniform in composition with a predominance of organic materials and minerals. The organic compounds contain an abundant hydroxyl group and a refractory macromolecular material bearing aliphatic and aromatic hydrocarbons. The mineral components are compatible with a mixture of silicates and fine-grained opaques, including Fe-sulfides, like troilite and pyrrhotite, and ammoniated salts. In the vicinity of perihelion, several active phenomena, including the erosion of surface layers, the localized activity in cliffs, fractures and pits, the collapse of overhangs and walls, and the transfer and redeposition of dust, cause the evolution of the different regions of the nucleus by inducing color, composition and texture changes.
Exposure to peer aggression is a major risk factor for the development of aggressive behavior in childhood and adolescence. Furthermore, peer aggression has the propensity to spread and affect individuals who were not exposed to the original source of aggression. The aim of this paper is to demonstrate that peer aggression is in many regards similar to a contagious disease. By presenting a program of research based on longitudinal and multilevel studies, we provide evidence for the contagious quality of aggressive behavior, show that individuals vary in their susceptibility to peer aggression, and describe group‐level characteristics that moderate the influence of peer aggression. We discuss mechanisms that may explain how individuals catch aggressive behavior from their peers and how the effects on the development of individuals' aggressive behavior unfold over time. Further, we examine processes that may increase the risk of being exposed to peers' aggressive behavior. We conclude with discussing implications for future studies on the contagious nature of peer aggression.
Vagueness
(2019)
Though vague phenomena have been studied extensively for many decades, it is only in recent years that researchers have sought the support of quantitative data. This chapter highlights and discusses the insights that experimental methods have brought to the study of vagueness. One area of focus is ‘borderline contradictions’, that is, sentences like ‘She is neither tall nor not tall’ that are contradictory when analysed in classical logic, but are actually acceptable as descriptions of borderline cases. The flourishing of theories and experimental studies to which borderline contradictions have led is examined closely. Beyond this illustrative case, an overview of recent studies that concern the classification of types of vagueness, the use of numbers, rounding, number modification, and the general pragmatic status of vagueness is provided.
Several hydraulic fracturing tests were performed in boreholes located in central Hungary in order to determine the in-situ stress for a geological site investigation. At a depth of about 540 m, the observed pressure versus time curves in mica schist with low-dip-angle foliation show atypical behaviour: the fracture breakdown pressure in the first fracturing cycle is lower than the refracturing or reopening pressure in the subsequent pressurization cycles. It is assumed that the viscosity of the drilling mud and the observed foliation of the mica schist have a significant influence on the pressure values. In order to study this problem, numerical modeling was performed using the distinct-element code Particle Flow Code (PFC), which has been proven to be a valuable tool for investigating rock engineering problems such as hydraulic fracturing. The two-dimensional version of the code applied in this study can simulate hydro-mechanically coupled fluid flow in crystalline rock with low porosity and pre-existing fractures. In this study, the effect of foliation angle and fluid viscosity on the peak pressure is tested. The atypical characteristics of the pressure behaviour are interpreted as follows: mud with higher viscosity penetrates the sub-horizontal foliation plane, blocks the plane of weakness, makes the partly opened fracture tight, and increases the pore pressure, which then decreases slowly with time. We see this viscous blocking effect as one explanation for the observed increase in fracture reopening pressure in subsequent pressurization cycles.
Excited-state proton transfer (ESPT) is a fundamental process in biomolecular photochemistry, but its underlying mediators often evade direct observation. We identify a distinct pathway for ESPT in aqueous 2-thiopyridone by employing transient N1s X-ray absorption spectroscopy and multi-configurational spectrum simulations. Photoexcitations to the singlet S2 and S4 states both relax promptly through intersystem crossing to the triplet T1 state. The T1 state, through its rapid population and near-nanosecond lifetime, mediates nitrogen-site deprotonation by ESPT in a secondary intersystem crossing to the S0 potential energy surface. This conclusively establishes a dominant ESPT pathway for the system in aqueous solution, which is also compatible with previous measurements in acetonitrile. Thereby, the hitherto open questions of the pathway for ESPT in the compound, including its possible dependence on excitation wavelength and choice of solvent, are resolved.
For a finite measure space X, we characterize strongly continuous Markov lattice semigroups on Lp(X) by showing that their generator A acts as a derivation on the dense subspace D(A) ∩ L∞(X). We then use this to characterize Koopman semigroups on Lp(X) if X is a standard probability space. In addition, we show that every measurable and measure-preserving flow on a standard probability space is isomorphic to a continuous flow on a compact Borel probability space.
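The derivation property referred to in the abstract is the Leibniz rule for the generator. In its standard form (a sketch of the well-known statement, with domain details following the abstract):

```latex
% Derivation (Leibniz) property of the generator A on the dense subspace
% D(A) \cap L^{\infty}(X):
A(fg) = f \, A(g) + g \, A(f), \qquad f,\, g,\, fg \in D(A) \cap L^{\infty}(X).
% For the Koopman semigroup induced by a measure-preserving flow
% (\varphi_t)_{t \in \mathbb{R}},
(T(t)f)(x) = f\bigl(\varphi_t(x)\bigr),
% the generator differentiates along the flow and therefore satisfies
% this Leibniz rule, which motivates the characterization above.
```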
The occurrence of mounds dominated by siliceous sponges and microbialites is often related to distal, deep settings of middle ramps and shelves. This paper presents evidence for Bajocian (Garantiana garantiana Zone) microbial-siliceous sponge mounds formed in open marine but relatively shallow settings of a ramp from the Iberian Basin of eastern Spain. Marked differences in mound spacing, morphology, and composition of the related intermound facies are observed from distal to more proximal settings. The distal (below storm wave base) settings are characterized by alternating tabular-bedded marls and limestones rich in pelagic fossils (ammonites, belemnites), open-marine thin-shelled bivalves (Bositra-like), as well as peloids, which include widely or randomly spaced isolated, small (up to 0.4 m high) and larger (up to 2.5 m high) mounds with upward accretion. The intermediate (near to above storm wave base) settings show tabular, thickened beds of peloidal and/or intraclastic limestones with closely spaced mounds (~1 m high), which often coalesce laterally, forming extensive lenticular structures (up to 10 m wide). The proximal (above storm wave base) depositional settings consist of tabular to irregular beds of intraclastic limestones with widely spaced small (up to 0.4 m high) mounds with mainly tabular geometries. The mound framework contains variable proportions of microbialites (dense to clotted peloidal thrombolitic fabrics) and siliceous sponges (hexactinellids and lithistids in similar proportion) ranging from planar to conic shapes. These morphological and compositional changes allow the characterization of three shallowing-upward sequences (sequences 1-3) developed in the overall regressive trend of a basin-wide, upper Bajocian T-R cycle. Episodic wave reworking of the early-cemented mounds resulted in the formation of peloids, small rounded intraclasts, and large, rounded or subangular intraclasts.
These nonskeletal micritic grains show internal fabrics related to those of the mounds and/or microbialites. A progressive textural gradation towards larger and less rounded nonskeletal grains in the vicinity of the main mound factory is documented (i.e., from large, subangular intraclasts in areas close to the main mound factory to peloids in areas far from it). We discuss the alternative model of internal waves (instead of storm-induced waves) as the hydrodynamic agent providing the high-energy events needed to explain the origin of the peloidal-intraclastic intermound facies and, likely, also the nutrients needed by the microbialites and siliceous sponges to grow.
Small-aperture array as a tool to monitor fluid injection- and extraction-induced microseismicity
(2019)
The monitoring of microseismicity during temporary human activities such as fluid injections for hydrofracturing, hydrothermal stimulations or wastewater disposal is a difficult task. The seismic stations often cannot be installed on hard rock or at quiet places, noise is strongly increased during the operation itself, and the installation of sensors in deep wells is costly and often not feasible. The combination of small-aperture seismic arrays with shallow borehole sensors offers a solution. We tested this monitoring approach at two different sites: (1) accompanying a fracking experiment in sedimentary shale at 4 km depth and (2) above a gas field under depletion. The small-aperture arrays were planned according to theoretical wavenumber studies combined with simulations considering the local noise conditions. We compared array recordings with recordings available from shallow borehole sensors and give examples of detection and location performance. Although the high-frequency noise on the 50-m-deep borehole sensors was smaller than the surface noise before the injection experiment, the signals were highly contaminated by the pumping activities during injection. Therefore, a set of three small-aperture arrays at different azimuths was better suited to detect small events, since the noise recorded at the different arrays is mutually uncorrelated. Further, we developed recommendations for adapting the monitoring concept to other sites experiencing induced seismicity.
Highly efficient data-collection methods are required for successful macromolecular crystallography (MX) experiments at X-ray free-electron lasers (XFELs). XFEL beamtime is scarce, and the high peak brightness of each XFEL pulse destroys the exposed crystal volume. It is therefore necessary to combine diffraction images from a large number of crystals (hundreds to hundreds of thousands) to obtain a final data set, bringing about sample-refreshment challenges that have previously been unknown to the MX synchrotron community. In view of this experimental complexity, a number of sample delivery methods have emerged, each with specific requirements, drawbacks and advantages. To provide useful selection criteria for future experiments, this review summarizes the currently available sample delivery methods, emphasising the basic principles and the specific sample requirements. Two main approaches to sample delivery are first covered: (i) injector methods with liquid or viscous media and (ii) fixed-target methods using large crystals or using microcrystals inside multi-crystal holders or chips. Additionally, hybrid methods such as acoustic droplet ejection and crystal extraction are covered, which combine the advantages of both fixed-target and injector approaches.
Hasidic Myth-Activism
(2019)
Since the 1970s, Buber has often been suspected of being a Volkish thinker. This essay reconsiders the affinity of Buber’s late writings with Volkish ideology. It examines the allegations against Buber’s Volkish thought in light of his later biblical and Hasidic writings. By illuminating the ideological affinity between these two modes of thought, the essay explains how Buber aims to depart from the dangers of myth without rejecting myth as such. I argue that Buber’s relationship to myth can help us to explain his critique of nationalism. My basic argument is that in his struggle with hyper-nationalism, Buber follows the Baal Shem Tov and his struggle against Sabbateanism. Like the Besht, Buber does not reject myth, but seeks instead to repair it from within. Whereas hyper-nationalism uses myth to advance its political goals, Buber seeks to reposition ethics within a mythic framework. I view Buber’s exegesis and commentaries on biblical and Hasidic myths as myth-activism.
Subsurface residual stresses (RS) were investigated in Ti-6Al-4V cuboid samples by means of X-ray synchrotron diffraction. The samples were manufactured by laser powder bed fusion (LPBF) applying different processing parameters, not commonly considered in open literature, in order to assess their influence on RS state. While investigating the effect of process parameters used for the calculation of volumetric energy density (such as laser velocity, laser power and hatch distance), we observed that an increase of energy density led to a decrease of RS, although not to the same extent for every parameter variation. Additionally, the effect of support structure, sample roughness and LPBF machine effects potentially coming from Ar flow were studied. We observed no influence of support structure on subsurface RS while the orientation with respect to Ar flow showed to have an impact on RS. We conclude recommending monitoring such parameters to improve part reliability and reproducibility.
The protein fractions of cocoa have been implicated in influencing both the bioactive potential and the sensory properties of cocoa and cocoa products. The objective of the present review is to show the impact of the different stages of cultivation and processing on the protein fractions. Special focus is laid on the major seed storage proteins throughout the different stages of processing. The review starts with a classical introduction to the extraction and characterization methods used, while addressing the classification approaches for cocoa proteins that have evolved over time. The changes in protein composition during ripening and maturation of cocoa seeds, together with the possible modifications during post-harvest processing (fermentation, drying, and roasting), are documented. Finally, the bioactive potential arising directly or indirectly from cocoa proteins is elucidated. The state of the art suggests that the exploration of other potentially bioactive components in cocoa needs to be undertaken, considering the complexity of the reaction products formed during the roasting phase of post-harvest processing. Finally, the utilization of partially processed cocoa beans (e.g., fermented, conciliatory thermal treatment) can be recommended, as they provide a large reservoir of bioactive potential arising from the protein components that could be instrumental in functionalizing foods.
Flood disasters severely impact human subjective well-being (SWB). Nevertheless, few studies have examined the influence of flood events on individual well-being and how such impacts may be limited by flood protection measures. This study estimates the long-term impacts on individual subjective well-being of flood experiences, individual subjective flood risk perceptions, and household flood preparedness decisions. These effects are monetised and placed in context through a comparison with the impacts of other adverse events on well-being. We collected data from households in flood-prone areas in France. The results indicate that experiencing a flood has a large negative impact on subjective well-being that is only incompletely attenuated over time. Moreover, individuals do not need to be directly affected by floods to suffer SWB losses, since subjective well-being is lower for those who expect their flood risk to increase or who have seen a neighbour being flooded. Floodplain inhabitants who prepared for flooding by elevating their home have a higher subjective well-being. A monetisation of these well-being impacts shows that a flood requires EUR 150,000 in immediate compensation to attenuate SWB losses. The decomposition of the monetised impacts of flood experience into tangible losses and intangible effects on SWB shows that the intangible effects are about twice as large as the tangible direct monetary flood losses. Investments in flood protection infrastructure may be underfunded if the intangible SWB benefits of flood protection are not taken into account.
Meta-communities of habitat islands may be essential to maintain biodiversity in anthropogenic landscapes by allowing rescue effects in local habitat patches. To understand the species-assembly mechanisms and dynamics of such ecosystems, it is important to test how local plant-community diversity and composition are affected by spatial isolation, and hence by dispersal limitation, and by local environmental conditions acting as filters for local species sorting. We used a system of 46 small wetlands (kettle holes), natural small-scale freshwater habitats rarely considered in nature conservation policies, embedded in an intensively managed agricultural matrix in northern Germany. We compared two types of kettle holes with distinct topographies (flat-sloped, ephemeral, frequently plowed kettle holes vs. steep-sloped, more permanent ones) and determined 254 vascular plant species within these ecosystems, as well as plant functional traits and nearest-neighbor distances to other kettle holes. Differences in alpha and beta diversity between steep permanent and ephemeral flat kettle holes were mainly explained by species sorting and niche processes, with mass effect processes acting in the ephemeral flat kettle holes. The plant-community composition as well as the community trait distribution in terms of life span, breeding system, dispersal ability, and longevity of seed banks differed significantly between the two habitat types. Flat ephemeral kettle holes held a higher percentage of non-perennial plants with a more persistent seed bank, fewer obligate outbreeders, and more species with seed dispersal via animal vectors compared with steep-sloped, more permanent kettle holes, which had a higher percentage of wind-dispersed species.
In the flat kettle holes, plant-species richness was negatively correlated with the degree of isolation, whereas no such pattern was found for the permanent kettle holes. Synthesis: The environment acts as a filter shaping plant diversity (alpha and beta) and plant-community trait distribution between steep permanent and ephemeral flat kettle holes, supporting species sorting and niche mechanisms as expected, but we identified a mass effect in the ephemeral kettle holes only. Flat ephemeral kettle holes can be regarded as meta-ecosystems that strongly depend on seed dispersal and recruitment from a seed bank, whereas neighboring permanent kettle holes have a more stable local species diversity.
Nitrate or ammonium
(2019)
In freshwaters, algal species are exposed to different inorganic nitrogen (Ni) sources whose incorporation varies in biochemical energy demand. We hypothesized that, due to the lower energy requirement of ammonium (NH4+) use compared with nitrate (NO3-) use, more energy remains for other metabolic processes, especially under CO2- and phosphorus (Pi)-limiting conditions. Therefore, we tested differences in cell characteristics of the green alga Chlamydomonas acidophila grown on NH4+ or NO3- under covariation of CO2 and Pi supply in a full-factorial design. As expected, results revealed higher carbon fixation rates for NH4+-grown cells compared to growth with NO3- under low CO2 conditions. NO3--grown cells accumulated more of the nine analyzed amino acids, especially under Pi-limited conditions, compared to cells provided with NH4+. This is probably due to a slower protein synthesis in cells provided with NO3-. In contrast to our expectations, NO3--grown cells had a higher photosynthetic efficiency than NH4+-grown cells under Pi limitation. In conclusion, growth on the Ni source NH4+ did not result in a clearly enhanced Ci assimilation, as it was highly dependent on the Pi and CO2 conditions (replete or limited). These results are potentially connected to the fact that C. acidophila is able to use only CO2 as its inorganic carbon (Ci) source.
We report two experiments and Bayesian modelling of the data collected. In both experiments, participants performed a long-lag primed picture naming task. Black-and-white line drawings were used as targets, which were overtly named by the participants. Their naming latencies were measured. In both experiments, primes consisted of past participle verbs (er tanzt/er hat getanzt "he dances/he has danced") and the relationship between primes and targets was either morphological or unrelated. Experiment 1 additionally had phonologically and semantically related prime-target pairs as well as present tense primes. Both in Experiment 1 and 2, participants showed significantly faster naming latencies for morphologically related targets relative to the unrelated verb primes. In Experiment 1, no priming effects were observed in phonologically and semantically related control conditions. In addition, the production latencies were not influenced by verb type.
The northward indentation of the Pamir salient into the Tarim basin at the western syntaxis of the India-Asia collision zone is the focus of controversial models linking lithospheric to surface and atmospheric processes. Here we report on tectonic events recorded in the most complete and best-dated sedimentary sequences from the western Tarim basin flanking the eastern Pamir (the Aertashi section), based on sedimentologic, provenance, and magnetostratigraphic analyses. Increased tectonic subsidence and a shift from marine to continental fluvio-deltaic deposition at 41 Ma indicate that far-field deformation from the south started to affect the Tarim region. A sediment accumulation hiatus from 24.3 to 21.6 Ma followed by deposition of proximal conglomerates is linked to fault propagation into the Tarim basin. From 21.6 to 15.0 Ma, increasing accumulation rates of fining-upward clastics are interpreted as the expression of a major dextral transtensional system linking the Kunlun to the Tian Shan ahead of the northward Pamir indentation. At 15.0 Ma, the appearance of North Pamir-sourced conglomerates, followed at 11 Ma by Central Pamir-sourced volcanics, coincides with a shift to E-W compression, clockwise vertical-axis rotations, and the onset of growth strata associated with the activation of the local east-vergent Qimugen thrust wedge. Together, this enables us to interpret that Pamir indentation into Tarim had started by 24.3 Ma, reached the study location by 15.0 Ma and had passed it by 11 Ma, providing kinematic constraints on proposed tectonic models involving intracontinental subduction and delamination.
Background: Epidemiological studies suggest that an increased red meat intake is associated with a higher risk of type 2 diabetes, whereas an increased fiber intake is associated with a lower risk. Objectives: We conducted an intervention study to investigate the effects of these nutritional factors on glucose and lipid metabolism, body-fat distribution, and liver fat content in subjects at increased risk of type 2 diabetes. Methods: This prospective, randomized, and controlled dietary intervention study was performed over 6 mo. All groups decreased their daily caloric intake by 400 kcal. The "control" group (N = 40) only had this requirement. The "no red meat" group (N = 48) in addition aimed to avoid the intake of red meat, and the "fiber" group (N = 44) increased intake of fibers to 40 g/d. Anthropometric parameters and frequently sampled oral glucose tolerance tests were performed before and after intervention. Body-fat mass and distribution, liver fat, and liver iron content were assessed by MRI and single voxel proton magnetic resonance spectroscopy. Results: Participants in all groups lost weight (mean 3.3 +/- 0.5 kg, P < 0.0001). Glucose tolerance and insulin sensitivity improved (P < 0.001), and body and visceral fat mass decreased in all groups (P < 0.001). These changes did not differ between groups. Liver fat content decreased significantly (P < 0.001) with no differences between the groups. The decrease in liver fat correlated with the decrease in ferritin during intervention (r(2) = 0.08, P = 0.0021). This association was confirmed in an independent lifestyle intervention study (Tuebingen Lifestyle Intervention Program, N = 229, P = 0.0084). Conclusions: Our data indicate that caloric restriction leads to a marked improvement in glucose metabolism and body-fat composition, including liver-fat content. The marked reduction in liver fat might be mediated via changes in ferritin levels. 
In the context of caloric restriction, there seems to be no additional beneficial impact of reduced red meat intake and increased fiber intake on the improvement in cardiometabolic risk parameters. This trial was registered at clinicaltrials.gov as NCT03231839.
The improvement of process representations in hydrological models is often only driven by the modelers' knowledge and data availability. We present a comprehensive comparison between two hydrological models of different complexity that is developed to support (1) the understanding of the differences between model structures and (2) the identification of the observations needed for model assessment and improvement. The comparison is conducted in both space and time and by aggregating the outputs at different spatiotemporal scales. In the present study, mHM, a process‐based hydrological model, and ParFlow‐CLM, an integrated subsurface‐surface hydrological model, are used. The models are applied in a mesoscale catchment in Germany. Both models agree in the simulated river discharge at the outlet and the surface soil moisture dynamics, lending support to some model applications (e.g., drought monitoring). Different model sensitivities are, however, found when comparing evapotranspiration and soil moisture at different soil depths. The analysis supports the need for observations within the catchment for model assessment, but it indicates that different strategies should be considered for the different variables. Evapotranspiration measurements are needed at daily resolution across several locations, while highly resolved spatially distributed observations with lower temporal frequency are required for soil moisture. Finally, the results show the impact of the shallow groundwater system simulated by ParFlow‐CLM and the need to account for the related soil moisture redistribution. Our comparison strategy can be applied to other model types and environmental conditions to strengthen the dialog between modelers and experimentalists for improving process representations in Earth system models.
The Value of Empirical Data for Estimating the Parameters of a Sociohydrological Flood Risk Model
(2019)
In this paper, empirical data are used to estimate the parameters of a sociohydrological flood risk model. The proposed model, which describes the interactions between floods, settlement density, awareness, preparedness, and flood loss, is based on the literature. Data for the case study of Dresden, Germany, over a period of 200 years, are used to estimate the model parameters through Bayesian inference. The credibility bounds of the estimates are narrow, even though the data are rather uncertain. A sensitivity analysis is performed to examine the value of the different data sources in estimating the model parameters. In general, the estimated parameters are less biased when using data from the end of the modeled period. Data about flood awareness are the most important to correctly estimate the parameters of this model and to correctly model the system dynamics. Using more data for other variables cannot compensate for the absence of awareness data. More generally, the absence of data mostly affects the estimation of the parameters that are directly related to the variable for which data are missing. This paper demonstrates that combining sociohydrological modeling and empirical data gives additional insights into the sociohydrological system, such as quantifying the forgetfulness of the society, which would otherwise not be easily achieved by sociohydrological models without data or by standard statistical analysis of empirical data.
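The general idea of estimating parameters of a sociohydrological dynamical model through Bayesian inference can be illustrated with a deliberately simplified sketch. The toy model below (awareness decaying at rate mu between flood events, with a grid-based posterior over mu) and all parameter names are illustrative assumptions for exposition only, not the model or data of the paper:

```python
import numpy as np

# Toy awareness dynamics: A[t+1] = A[t]*exp(-mu) + F[t], where F[t] marks
# flood occurrence. We recover the "forgetting rate" mu from noisy
# observations via a grid-based Bayesian posterior (uniform prior).
rng = np.random.default_rng(0)

def simulate(mu, floods, noise=0.05):
    a = np.zeros(len(floods) + 1)
    for t, f in enumerate(floods):
        a[t + 1] = a[t] * np.exp(-mu) + f
    return a[1:] + rng.normal(0, noise, len(floods))

true_mu = 0.3
floods = (rng.random(200) < 0.05).astype(float)  # rare flood events
obs = simulate(true_mu, floods)

def log_like(mu):
    # Gaussian likelihood of the observed awareness trajectory given mu.
    a, ll = 0.0, 0.0
    for f, o in zip(floods, obs):
        a = a * np.exp(-mu) + f
        ll += -0.5 * ((o - a) / 0.05) ** 2
    return ll

grid = np.linspace(0.01, 1.0, 200)
logp = np.array([log_like(m) for m in grid])
post = np.exp(logp - logp.max())
post /= post.sum()                 # normalized posterior over the grid
mu_hat = grid[post.argmax()]       # posterior mode, close to true_mu
```

The width of `post` around its mode plays the role of the credibility bounds discussed in the abstract; with informative observations it stays narrow even when individual data points are noisy.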
Chorus waves play an important role in the dynamic evolution of energetic electrons in the Earth's radiation belts and ring current. Using more than 5 years of Van Allen Probes data, we developed a new analytical model for upper‐band chorus (UBC; 0.5fce < f < fce) and lower‐band chorus (LBC; 0.05fce < f < 0.5fce) waves, where fce is the equatorial electron gyrofrequency. By applying polynomial fits to chorus wave root mean square amplitudes, we developed regression models for LBC and UBC as a function of geomagnetic activity (Kp), L, magnetic latitude (λ), and magnetic local time (MLT). The dependence on Kp is separated from the dependence on λ, L, and MLT as a Kp‐scaling law to simplify the calculation of diffusion coefficients and inclusion into particle tracing codes. Frequency models for UBC and LBC are also developed, which depend on MLT and magnetic latitude. This empirical model is valid at all MLTs, magnetic latitudes up to 20°, Kp ≤ 6, and an L‐shell range from 3.5 to 6 for LBC and from 4 to 6 for UBC. The dependence of root mean square amplitudes on L is different for the different bands, which implies different energy sources for the different wave bands. This analytical chorus wave model is convenient for inclusion in quasi‐linear diffusion calculations of electron scattering rates and particle simulations in the inner magnetosphere, especially for the newly developed four‐dimensional codes, which require significantly improved wave parameterizations.
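A separable "Kp-scaling" parameterization of the kind described above can be sketched with synthetic data. The functional forms, coefficients, and noise level below are illustrative assumptions, not the published model; the point is only that fitting in log space lets the activity scaling be estimated independently of the latitude polynomial:

```python
import numpy as np

# Sketch: amplitude modeled as B(Kp, lat) = g(Kp) * h(lat), with h a
# polynomial in magnetic latitude and g an exponential Kp scaling.
rng = np.random.default_rng(1)
kp = rng.uniform(0, 6, 2000)       # geomagnetic activity index
lat = rng.uniform(0, 20, 2000)     # magnetic latitude, degrees

def truth(kp, lat):
    # Hypothetical separable dependence used to generate synthetic data.
    return np.exp(0.4 * kp) * (10 + 0.5 * lat - 0.01 * lat**2)

amp = truth(kp, lat) * rng.lognormal(0, 0.1, kp.size)  # multiplicative noise

# Fit in log space: log B ~ a*Kp + quadratic(lat), via linear least squares.
X = np.column_stack([kp, np.ones_like(lat), lat, lat**2])
coef, *_ = np.linalg.lstsq(X, np.log(amp), rcond=None)
a_kp = coef[0]   # recovered Kp-scaling exponent (generated with 0.4)
```

Because Kp and latitude are sampled independently, the recovered exponent `a_kp` is unaffected by any residual misfit of the latitude polynomial, which is the practical appeal of separating the Kp dependence.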
The public encounter
(2019)
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model incorporating various perspectives on this interaction, I derive selected research gaps. The three articles comprising this thesis tackle these gaps. A focal role is played by citizens' administrative literacy, the competences and knowledge necessary to successfully interact with public organizations. The first article elaborates on the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters. They give preferential treatment to citizens who are well prepared and able to persuade them of their application's potential. Such citizens signal a higher potential to meet bureaucratic success criteria, which leads to the employees' cream-skimming behavior. The third article examines the dynamics of employees' communication strategies when recovering from a service failure. The study finds that different explanation strategies have different effects on the client's frustration. While accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, refusing responsibility has no effect or even reinforces the client's frustration. The results emphasize the different dynamics that characterize the nature of citizen-state interactions and how they shape their short- and long-term outcomes.
The evolution of the radiation belts in L-shell (L), energy (E), and equatorial pitch angle (α0) is analyzed during the calm 11-day interval (4-15 March) following the 1 March 2013 storm. Magnetic Electron and Ion Spectrometer (MagEIS) observations from the Van Allen Probes are interpreted alongside 1D and 3D Fokker-Planck simulations combined with consistent event-driven scattering modeling from whistler-mode hiss waves. Three (L, E, α0) regions persist through 11 days of hiss wave scattering: the pitch angle-dependent inner belt core (L ≲ 2.2 and E < 700 keV), the pitch angle-homogeneous outer belt low-energy core (L ≳ 5 and E ≲ 100 keV), and a distinct pocket of electrons (L ~ 4.5-5.5 and E ~ 0.7-2 MeV). The pitch angle-homogeneous outer belt is explained by the diffusion coefficients, which are roughly constant for α0 ≲ 60°, E > 100 keV, and 3.5 < L < Lpp ~ 6. Thus, observed unidirectional flux decays can be used to estimate local pitch angle diffusion rates in that region. Top-hat distributions are computed and observed at L ~ 3-3.5 and E = 100-300 keV.
Ring current electrons (1–100 keV) have received significant attention in recent decades, but many questions regarding their major transport and loss mechanisms remain open. In this study, we use the four‐dimensional Versatile Electron Radiation Belt code to model the enhancement of phase space density that occurred during the 17 March 2013 storm. Our model includes global convection, radial diffusion, and scattering into the Earth's atmosphere driven by whistler‐mode hiss and chorus waves. We study the sensitivity of the model to the boundary conditions, global electric field, the electric field associated with subauroral polarization streams, electron loss rates, and radial diffusion coefficients. The results of the code are almost insensitive to the model parameters above 4.5 RE, which indicates that the general dynamics of the electrons between 4.5 RE and the geostationary orbit can be explained by global convection. We found that the major discrepancies between the model and data can stem from the inaccurate electric field model and uncertainties in lifetimes. We show that additional mechanisms that are responsible for radial transport are required to explain the dynamics of ≥40‐keV electrons, and the inclusion of the radial diffusion rates that are typically assumed in radiation belt studies leads to a better agreement with the data. The overall effect of subauroral polarization streams on the electron phase space density profiles seems to be smaller than the uncertainties in other input parameters. This study is an initial step toward understanding the dynamics of these particles inside the geostationary orbit.
Terrestrial gravimetry is increasingly used to monitor mass transport processes in geophysics, boosted by the ongoing technological development of instruments. Resolving a particular phenomenon of interest, however, requires a set of gravity corrections whose uncertainties have not been addressed up to now. In this study, we quantify the time domain uncertainty of tide, global atmospheric, large-scale hydrological, and nontidal ocean loading corrections. The uncertainty is assessed by comparing the majority of available global models for a suite of sites worldwide. The average uncertainty expressed as root-mean-square error equals 5.1 nm/s², discounting local hydrology or air pressure. The correction-induced uncertainty of gravity changes over various time periods of interest ranges from 0.6 nm/s² for hours up to a maximum of 6.7 nm/s² for 6 months. The corrections are shown to be significant and should be applied for most geophysical applications of terrestrial gravimetry. From a statistical point of view, however, resolving subtle gravity effects on the order of a few nanometers per second squared is challenged by the uncertainty of the corrections. Plain Language Summary Many scientists are exploring ways to benefit from gravity measurements in fields of high societal relevance such as monitoring of volcanoes or measuring the amount of water underground. Any application of such new methods, however, requires careful preparation of the gravity measurements. The intention of the preparation process is to ensure that the measurements do not contain information about processes that are not of interest. For that reason, the influence of the atmosphere, ocean, tides, and hydrology needs to be removed from the gravity signal. In this study, we investigate how this reduction process influences the quality of the measurement. We found that the precision degrades especially owing to hydrology.
The ocean plays an important role at sites close to the coast, and the atmosphere at sites located in mountains. The overall errors of the reductions may complicate a reliable use of gravity measurements in certain studies focusing on very small signals. Nevertheless, the precision of the gravity reductions alone does not obstruct a meaningful use of gravity measurements in most research fields. Details specifying the reduction precision are provided in this study, allowing scientists dealing with gravity measurements to decide whether their signal of interest can be reliably resolved.
We derive a set of regional ground-motion prediction equations (GMPEs) in the Fourier amplitude spectra (FAS-GMPE) and in the spectral acceleration (SA-GMPE) domains for the purpose of interpreting the between-event residuals in terms of source parameter variability. We analyze a dataset of about 65,000 recordings generated by 1400 earthquakes (moment magnitude 2.5 <= M-w <= 6.5, hypocentral distance R-hypo <= 150 km) that occurred in central Italy between January 2008 and October 2017. In a companion article (Bindi, Spallarossa, et al., 2018), the nonparametric acceleration source spectra were interpreted in terms of omega-square models modified to account for deviations from a high-frequency flat plateau through a parameter named k(source). Here, the GMPEs are derived considering the moment (M-w), the local (M-L), and the energy (M-e) magnitude scales, and the between-event residuals are computed as random effects. We show that the between-event residuals for the FAS-GMPE implementing M-w are correlated with stress drop, with correlation coefficients increasing with increasing frequency up to about 10 Hz. Contrariwise, the correlation is weak for the FAS-GMPEs implementing M-L and M-e, in particular between 2 and 5 Hz, where most of the corner frequencies lie. At higher frequencies, all models show a strong correlation with k(source). The correlation with the source parameters is reflected in a different behavior of the standard deviation tau of the between-event residuals with frequency. Although tau is smaller for the FAS-GMPE using M-w below 1.5 Hz, at higher frequencies, the model implementing either M-L or M-e shows smaller values, with a reduction of about 30% at 3 Hz (i.e., from 0.3 for M-w to 0.1 for M-L).
We conclude that considering magnitude scales informative for the stress-drop variability makes it possible to reduce the between-event variability, with a significant impact on the hazard assessment, in particular for studies in which the ergodic assumption on site is removed.
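The between-event residual analysis described above can be sketched with synthetic data. The code below approximates the random-effects decomposition by per-event mean residuals and correlates them with an event source parameter; the data generation, coefficient values, and noise levels are illustrative assumptions, not the study's dataset or fitting procedure:

```python
import numpy as np

# Sketch: recordings share an event term dB_e plus within-event scatter.
# The between-event residual is estimated as the mean residual per event
# and then correlated with log stress drop.
rng = np.random.default_rng(2)
n_events, recs_per_event = 300, 20
stress_drop = rng.lognormal(0, 0.5, n_events)     # hypothetical event parameter
event_term = 0.8 * np.log(stress_drop)            # true between-event effect
event_ids = np.repeat(np.arange(n_events), recs_per_event)
resid = event_term[event_ids] + rng.normal(0, 0.3, event_ids.size)

# Per-event mean residual approximates the random-effect event term.
dBe = np.array([resid[event_ids == e].mean() for e in range(n_events)])
r = np.corrcoef(dBe, np.log(stress_drop))[0, 1]   # strong positive correlation
```

Averaging over many recordings per event suppresses the within-event scatter (by roughly the square root of the number of recordings), which is why the correlation with the source parameter emerges cleanly.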
Mosses in Low Earth Orbit
(2019)
As part of the European Space Agency mission "EXPOSE-R2" on the International Space Station (ISS), the BIOMEX (Biology and Mars Experiment) experiment investigates the habitability of Mars and the limits of life. In preparation for the mission, experimental verification tests and scientific verification tests simulating different combinations of abiotic space- and Mars-like conditions were performed to analyze the resistance of a range of model organisms. The simulated abiotic space and Mars stressors were extreme temperatures, vacuum, and Mars-like surface ultraviolet (UV) irradiation in different atmospheres. We present for the first time simulated space exposure data for mosses, using plantlets of the bryophyte genus Grimmia, which is adapted to the extreme abiotic conditions of high altitudes in the Swiss Alps. Our preflight tests showed that severe UV irradiation (200-400 nm) at the maximal doses of 5 × 10⁵ and 6.8 × 10⁵ kJ·m⁻², respectively, was the only stressor with a negative impact on vitality, causing a 37% (terrestrial atmosphere) or 36% (space- and Mars-like atmospheres) reduction in photosynthetic activity. With every exposure to 10⁵ kJ·m⁻² of UV (200-400 nm), the vitality of the bryophytes dropped by 6%. No effect was found for any other stressor. As the mosses were still vital after the doses of ultraviolet radiation (UVR) expected during the EXPOSE-R2 mission on the ISS, we show that this earliest extant lineage of land plants is highly resistant to extreme abiotic conditions.
Aging is accompanied by the accumulation of oxidized proteins. To remove them, cells employ the proteasomal and autophagy-lysosomal systems; however, if the clearance rate is lower than the rate of formation, protein aggregates form as a hallmark of proteostasis loss. Under stress conditions, actin aggregates accumulate in cells, leading to impaired proliferation and reduced proteasomal activity, as observed in cellular senescence. The heat shock protein 90 (Hsp90) is a molecular chaperone that binds and protects the proteasome from oxidative inactivation. We hypothesized that under oxidative stress conditions a malfunction of Hsp90 occurs, resulting in the aforementioned protein aggregates. Here, we demonstrate that upon oxidative stress Hsp90 loses its function in a highly specific non-enzymatic iron-catalyzed oxidation event, and its breakdown product, a cleaved form of Hsp90 (Hsp90cl), acquires a new function in mediating the accumulation of actin aggregates. Moreover, the prevention of Hsp90 cleavage reduces oxidized actin accumulation, whereas transfection of the cleaved form of Hsp90 leads to an enhanced accumulation of oxidized actin. This indicates a clear role of Hsp90cl in the aggregation of oxidized proteins.
Numerous preflight investigations were necessary prior to the exposure experiment BIOMEX on the International Space Station to test the basic potential of selected microorganisms to resist, or even remain active under, Mars-like conditions. In this study, methanogenic archaea, which are anaerobic chemolithotrophic microorganisms whose lifestyle would allow metabolism under the conditions on early and recent Mars, were analyzed. Some strains from Siberian permafrost environments have shown a particular resistance. In this investigation, we analyzed the response of three permafrost strains (Methanosarcina soligelidi SMA-21, Candidatus Methanosarcina SMA-17, Candidatus Methanobacterium SMA-27) and two related strains from non-permafrost environments (Methanosarcina mazei, Methanosarcina barkeri) to desiccation conditions (−80 °C for 315 days; the martian regolith analog simulants S-MRS and P-MRS; a 128-day period of simulated Mars-like atmosphere). Exposure of the different methanogenic strains to increasing concentrations of magnesium perchlorate allowed for the study of their metabolic shutdown in a Mars-relevant perchlorate environment. Survival and metabolic recovery were analyzed by quantitative PCR, gas chromatography, and a new DNA-extraction method for viable cells embedded in S-MRS and P-MRS. All strains survived the two Mars-like desiccating scenarios and recovered to different extents. The permafrost strain SMA-27 showed at least a 10-fold increase in methanogenic activity after deep-freezing conditions. The methanogenic rates of all strains did not decrease significantly after 128 days of S-MRS exposure, except for SMA-27, whose rate decreased 10-fold. The activity of strains SMA-17 and SMA-27 decreased after 16 and 60 days of P-MRS exposure, respectively. Non-permafrost strains showed constant survival and methane production when exposed to both desiccating scenarios.
All strains showed unaltered methane production when exposed to the perchlorate concentration reported at the Phoenix landing site (2.4 mM), or even to higher concentrations. We conclude that methanogens from permafrost and non-permafrost environments are suitable candidates for potential life in the martian subsurface and are therefore worthy of study in space exposure experiments that approach Mars-like surface conditions.
BIOMEX (BIOlogy and Mars EXperiment) is an ESA/Roscosmos space exposure experiment housed within the exposure facility EXPOSE-R2 outside the Zvezda module on the International Space Station (ISS). The design of the multiuser facility supports, among others, the BIOMEX investigations into the stability and level of degradation of space-exposed biosignatures such as pigments, secondary metabolites, and cell surfaces in contact with a terrestrial and Mars analog mineral environment. In parallel, analysis of the viability of the investigated organisms has provided relevant data for evaluating the habitability of Mars, the limits of life, and the likelihood of an interplanetary transfer of life (theory of lithopanspermia). In this project, lichens, archaea, bacteria, cyanobacteria, snow/permafrost algae, meristematic black fungi, and bryophytes from alpine and polar habitats were embedded, grown, and cultured on a mixture of martian and lunar regolith analogs or other terrestrial minerals. The organisms, regolith analogs, and terrestrial mineral mixtures were then exposed to space and to simulated Mars-like conditions by way of the EXPOSE-R2 facility. In this special issue, we present the first set of data obtained in reference to our investigation into the habitability of Mars and the limits of life. This project was initiated and implemented by the BIOMEX group, an international and interdisciplinary consortium of 30 institutes in 12 countries on 3 continents. Preflight tests for sample selection, results from ground-based simulation experiments, and the space experiments themselves are presented, providing a complete overview of the scientific processes required for this space experiment and its postflight analysis. The presented BIOMEX concept could be scaled up to future exposure experiments on the Moon and will serve as a pretest in low Earth orbit.
A comprehensive description of the electromagnetic processes related to equatorial plasma depletions (EPDs) is essential for understanding their evolution and day-to-day variability. Recently, field-aligned currents (FACs) flowing at both the western and eastern edges of EPDs were observed to be interhemispheric rather than anti-parallel about the dip equator, as suggested by previous theoretical studies. In this paper, we investigate the spatial and temporal variability of the FAC orientation using simultaneous measurements of electron density and magnetic field gathered by ESA's Swarm constellation mission. Using empirical models, we assess the role of the Pedersen conductance in the preference of the FACs to close in either the northern or the southern magnetic hemisphere. We show that the closure of the FACs agrees with an electrostatic regime determined by a hemispherical asymmetry of the Pedersen conductance: the EPD-related FACs close at lower altitudes in the hemisphere with the higher conductivity. This conclusion rests on the general agreement between the longitudinal and seasonal variability of the conductivity and that of the FAC orientation.