Air pollution is a pressing issue associated with adverse effects on human health, ecosystems, and climate. Despite many years of effort to improve air quality, nitrogen dioxide (NO2) limit values are still regularly exceeded in Europe, particularly in cities and along streets. This study explores how concentrations of nitrogen oxides (NOx = NO + NO2) in European urban areas have changed over recent decades and how this relates to changes in emissions. To do so, the incremental approach was used, comparing urban increments (i.e. urban background minus rural concentrations) to total emissions, and roadside increments (i.e. urban roadside concentrations minus urban background concentrations) to traffic emissions. In total, nine European cities were assessed. The study revealed that potentially confounding factors, such as the influence of urban pollution on rural monitoring sites through atmospheric transport, are generally negligible for NOx. The approach therefore proves particularly useful for this pollutant. The estimated urban increments all showed downward trends, and for the majority of the cities the trends aligned well with the total emissions. However, it was found that factors such as very densely populated surroundings or local emission sources in the rural area, such as shipping traffic on inland waterways, restrict the application of the approach for some cities. The roadside increments showed a very diverse picture in their absolute values and trends, as well as in their relation to traffic emissions. This variability and the discrepancies between roadside increments and emissions could be attributed to a combination of local influencing factors at the street level and several aspects introducing inaccuracies into the trends of the emission inventories used, including deficient emission factors.
Applying the incremental approach was evaluated as useful for long-term pan-European studies, but at the same time its application was found to be restricted to certain regions and cities by data availability issues. The results also highlight that using emission inventories for the prediction of future health impacts and compliance with limit values needs to account for the distinct variability in concentrations not only across but also within cities.
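The incremental approach described above reduces to two simple differences; a minimal sketch, in which all concentration values are hypothetical annual means:

```python
def urban_increment(urban_background_nox: float, rural_nox: float) -> float:
    """Urban increment: urban background minus rural NOx concentration."""
    return urban_background_nox - rural_nox

def roadside_increment(roadside_nox: float, urban_background_nox: float) -> float:
    """Roadside increment: urban roadside minus urban background NOx concentration."""
    return roadside_nox - urban_background_nox

# Made-up annual mean NOx concentrations in ug/m3:
ui = urban_increment(45.0, 12.0)      # compared against total urban emissions
ri = roadside_increment(80.0, 45.0)   # compared against traffic emissions
print(ui, ri)  # 33.0 35.0
```

Trends in these two increments over the years are then compared against trends in total and traffic emission inventories, respectively.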
The closed-chamber method is the most common approach to determine CH4 fluxes in peatlands. The concentration change in the chamber is monitored over time, and the flux is usually calculated from the slope of a linear regression function. Theoretically, the gas exchange cannot be constant over time but has to decrease as the concentration gradient between chamber headspace and soil air decreases. In this study, we test whether we can detect this non-linearity in the concentration change during chamber closure with six air samples. We generally expect a low concentration gradient on dry sites (hummocks), and thus exponential concentration changes in the chamber due to a quick equilibration of gas concentrations between peat and chamber headspace. On wet (flarks) and sedge-covered sites (lawns), we expect a high gradient and near-linear concentration changes in the chamber. To evaluate these model assumptions, we calculate both linear and exponential regressions for a test data set (n = 597) from a Finnish mire. We use the Akaike Information Criterion with small-sample second-order bias correction to select the best-fitting model. 13.6%, 19.2% and 9.8% of measurements on hummocks, lawns and flarks, respectively, were best fitted by an exponential regression model. A flux estimate derived from the slope of the exponential function at the beginning of chamber closure can be significantly higher than one derived from the slope of the linear regression function. Non-linear concentration-over-time curves occurred mostly during periods of changing water table. This could be due to either natural processes or chamber artefacts, e.g. initial pressure fluctuations during chamber deployment. To be able to exclude either natural processes or artefacts as the cause of non-linearity, further information, e.g. CH4 concentration profile measurements in the peat, would be needed. If this is not available, the range of uncertainty can be substantial.
We suggest using the range between the slopes of the exponential regression at the beginning and at the end of the closure time as an estimate of the overall uncertainty.
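The model selection step described above can be sketched with the small-sample-corrected AIC (AICc) computed from the residual sum of squares of each fit. The RSS values below are hypothetical, and the parameter counts include the error variance; note how strongly the correction penalizes the extra parameter when only six air samples are available:

```python
import math

def aicc(rss: float, n: int, k: int) -> float:
    """AIC with small-sample second-order bias correction, computed from
    the residual sum of squares; k counts fitted parameters plus one for
    the error variance."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fit results for one six-sample chamber closure:
n = 6
aicc_lin = aicc(rss=400.0, n=n, k=3)  # linear: slope, intercept, variance
aicc_exp = aicc(rss=1.0, n=n, k=4)    # exponential: 3 parameters + variance
best = "exponential" if aicc_exp < aicc_lin else "linear"
print(best)
```

With n = 6 and k = 4 the correction term alone contributes 40 to the exponential model's AICc, so the exponential fit has to be far better than the linear one before it is selected.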
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display more favorable behavior in terms of sampling efficiency; that is, the traditional implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
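As a rough illustration of the traditional scheme discussed above, here is one GHMC step for a unit-mass harmonic oscillator: partial momentum refreshment, a leapfrog proposal, a Metropolis test, and a momentum flip on rejection. All parameter values (step size, mixing angle, etc.) are our own illustrative choices, not those of the paper:

```python
import math
import random

def ghmc_step(q, p, *, eps=0.1, n_leap=10, phi=0.3, beta=1.0):
    """One GHMC step for H(q, p) = q**2/2 + p**2/2 (unit-mass harmonic
    oscillator): partial momentum refreshment, leapfrog proposal,
    Metropolis test, momentum flip on rejection (traditional variant)."""
    # Partial momentum refreshment: mix the old momentum with fresh noise.
    xi = random.gauss(0.0, 1.0 / math.sqrt(beta))
    p = math.cos(phi) * p + math.sin(phi) * xi

    def hamiltonian(q_, p_):
        return 0.5 * q_ * q_ + 0.5 * p_ * p_

    h_old = hamiltonian(q, p)
    # Leapfrog trajectory; the force for this potential is -q.
    qn, pn = q, p - 0.5 * eps * q
    for i in range(n_leap):
        qn += eps * pn
        pn -= eps * qn if i < n_leap - 1 else 0.5 * eps * qn
    # Metropolis test; negate the momentum if the proposal is rejected.
    if random.random() < math.exp(min(0.0, -beta * (hamiltonian(qn, pn) - h_old))):
        return qn, pn
    return q, -p

random.seed(1)
q, p = 1.0, 0.0
samples = []
for _ in range(5000):
    q, p = ghmc_step(q, p)
    samples.append(q)
mean_q2 = sum(x * x for x in samples) / len(samples)
print(round(mean_q2, 2))  # should be close to 1/beta = 1
```

The flip-free variant the paper proposes would replace the `return q, -p` branch under a modified detailed balance condition; the sketch above only shows the standard scheme.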
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is then known as a compound flood event. We therefore aimed to identify statistical similarities between loss-driving factors across flood types and to test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys investigated several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the level of most of these features differed across flood-type subsamples (e.g., degree of preparedness), they did so in a nonregular pattern. A variable selection process indicates that, besides hazard and building characteristics, information on property-level preparedness is a relevant predictor of the loss ratio. These variables represent information that is rarely adopted in loss modeling. The models should be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of the results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will ultimately clarify the conditions under which loss models can be transferred in space and time.
Key Points:
- Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model.
- Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio.
- Flood-type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability.
Risk-based insurance is a commonly proposed and discussed flood risk adaptation mechanism in policy debates across the world, such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required, as the affordability of adaptation strategies is an important concern for policymakers, yet such a concept is not often examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows a previously normative concept to be quantified in monetary terms, and in this way the metric is rendered more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, this paper compares the outcomes generated by three different definitions of unaffordability in order to identify the most robust definition. In doing so, the residual income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaptation measures in order to develop a common metric for indicating the potential unaffordability problem.
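A minimal sketch of the residual income definition referred to above, assuming the simple reading that a premium is unaffordable when it exceeds the income left after essential expenditure; all figures and the function name are hypothetical:

```python
def premium_unaffordable_residual_income(income: float,
                                         essential_expenditure: float,
                                         premium: float) -> bool:
    """Residual income test (illustrative): a risk-based premium is deemed
    unaffordable when it exceeds the household's residual income, i.e. when
    income - essential_expenditure < premium."""
    return income - essential_expenditure < premium

# Hypothetical annual household figures in euros:
print(premium_unaffordable_residual_income(30000.0, 28000.0, 2500.0))  # True
print(premium_unaffordable_residual_income(30000.0, 24000.0, 2500.0))  # False
```

The threshold sensitivity the paper examines would correspond to varying the essential expenditure level and observing how the share of flagged households changes.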
The possibilities and limits of structure refinement of Langmuir-Blodgett films by means of symmetrical X-ray reflection are described using the example of a stearic acid multilayer. Three different techniques for the determination of the electron density profile from reflectivity data are compared: a Fourier method, a Patterson method, and model calculations. The important role of a priori information in finding the best structure model is outlined.
A comparative whole-genome approach identifies bacterial traits for marine microbial interactions
(2022)
Luca Zoccarato, Daniel Sher et al. leverage publicly available bacterial genomes from marine and other environments to examine traits underlying microbial interactions.
Their results provide a valuable resource for investigating clusters of functional and linked traits to better understand marine bacterial community assembly and dynamics.
Microbial interactions shape the structure and function of microbial communities with profound consequences for biogeochemical cycles and ecosystem health. Yet, most interaction mechanisms are studied only in model systems and their prevalence is unknown. To systematically explore the functional and interaction potential of sequenced marine bacteria, we developed a trait-based approach, and applied it to 473 complete genomes (248 genera), representing a substantial fraction of marine microbial communities.
We identified genome functional clusters (GFCs) which group bacterial taxa with common ecology and life history. Most GFCs revealed unique combinations of interaction traits, including the production of siderophores (10% of genomes), phytohormones (3-8%) and different B vitamins (57-70%). Specific GFCs, comprising Alpha- and Gammaproteobacteria, displayed more interaction traits than expected by chance, and are thus predicted to preferentially interact synergistically and/or antagonistically with bacteria and phytoplankton. Linked trait clusters (LTCs) identify traits that may have evolved to act together (e.g., secretion systems, nitrogen metabolism regulation and B vitamin transporters), providing testable hypotheses for complex mechanisms of microbial interactions.
Our approach translates multidimensional genomic information into an atlas of marine bacteria and their putative functions, relevant for understanding the fundamental rules that govern community assembly and dynamics.
A successful assignment of the fundamental bands observed in the experimental IR spectra of the mn-12S(2)O(2) and fn-12S(2)O(2) dithiacrown ethers was achieved with the aid of density functional theory (DFT) based quantum mechanical calculations carried out at the B3LYP/6-31G(d) and B3LYP/6-31+G(d) levels of theory. Two different scaling approaches, (i) the 'scaled quantum mechanics force field (SQM FF)' methodology and (ii) 'scaling frequencies with dual empirical scale factors', were used to fit the calculated harmonic frequencies to the experimental ones. Potential energy distribution (PED) calculations were carried out to define the internal coordinate contributions to each normal mode and to assign the corresponding normal modes of the molecules. The effects of the conformational differences on the IR-active normal modes of the two isomeric molecules and their corresponding experimental frequencies are discussed in the light of the calculated spectral data.
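The 'dual empirical scale factors' idea can be sketched as follows; the factor values and the wavenumber cutoff below are illustrative placeholders, not the paper's fitted values:

```python
def scale_frequency(harmonic_cm1: float,
                    low_factor: float = 0.98,
                    high_factor: float = 0.96,
                    cutoff_cm1: float = 1800.0) -> float:
    """Scale a harmonic DFT frequency with dual empirical scale factors:
    one factor for modes below the cutoff, another for modes above it
    (all numerical values here are hypothetical)."""
    factor = low_factor if harmonic_cm1 < cutoff_cm1 else high_factor
    return factor * harmonic_cm1

# Hypothetical harmonic frequencies (cm^-1) and their scaled counterparts:
for nu in (650.0, 1450.0, 2950.0):
    print(round(scale_frequency(nu), 1))
```

In practice the two factors are fitted by least squares against the experimental frequencies, separately for the low- and high-wavenumber regions.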