Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although they can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. We next apply our method to derive the overall uncertainties of the Suigetsu radiocarbon comparison curve, recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
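The core idea - representing each counted layer as a distribution of (age, value) pairs, with dating uncertainty accumulating downcore - can be illustrated with a small Monte Carlo sketch. This is not the paper's Bayesian formulation; the record length, miscount probabilities, and proxy values below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical layer-counted record: one proxy value per counted layer.
n_layers = 500
proxy = np.cumsum(rng.normal(0.0, 0.1, n_layers))   # synthetic isotope-like series
proxy_sigma = 0.05      # assumed proxy measurement error (1-sigma)
p_miss = 0.01           # assumed chance a layer was missed (true span 2 yr)
p_double = 0.01         # assumed chance a layer was double-counted (0 yr)

n_sim = 2000
ages = np.empty((n_sim, n_layers))
values = np.empty((n_sim, n_layers))
for i in range(n_sim):
    # Each counted layer nominally spans 1 yr; miscounts make the true
    # duration 0 or 2 yr, so dating errors accumulate with depth.
    dur = rng.choice([0.0, 1.0, 2.0], size=n_layers,
                     p=[p_double, 1.0 - p_miss - p_double, p_miss])
    ages[i] = np.cumsum(dur)
    values[i] = proxy + rng.normal(0.0, proxy_sigma, n_layers)

# Each layer now carries an ensemble of (age, value) pairs; the age
# spread grows with depth even though each individual counting error is small.
age_sd = ages.std(axis=0)
```

The growing `age_sd` mimics the paper's finding that seemingly small counting errors compound into substantial uncertainty in the older parts of a record.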
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake risk. Macroseismic data for the 1985 Valparaiso event show large variations in the distribution of damage to buildings within short distances, indicating strong effects of local sediments on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from horizontal-to-vertical (H/V) spectral ratios from earthquake data (EHV) and ambient noise (NHV), as well as using the standard spectral ratio (SSR) technique with a nearby reference station located on igneous rock. The results lead to the following conclusions: The analysis of earthquake data shows significant dependence on the local geological structure with respect to amplitude and duration. Amplification of ground motion occurs at frequencies higher than the fundamental one; this amplification would not be detected from NHV ratios alone. The analysis of NHV spectral ratios shows that they can only provide a lower bound in amplitude for site amplification. P-wave site responses always show lower amplitudes than those derived from S waves, and sometimes even fail to reveal some frequencies of amplification. No variability in terms of time and amplitude is observed in the analysis of the H/V ratio of noise. Due to the geological conditions in some parts of the investigated area, the fundamental resonance frequency of a site is difficult to estimate following the standard criteria proposed by the SESAME consortium, suggesting that these criteria are too restrictive under certain circumstances.
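The H/V technique itself is simple to sketch. The following minimal illustration (not the study's processing chain) uses a Hann-tapered FFT, a quadratic mean of the horizontal components, and a moving average standing in for the usual Konno-Ohmachi smoothing; the synthetic signals and the 2 Hz resonance are assumptions for demonstration:

```python
import numpy as np

def hv_ratio(north, east, vertical, fs, nfft=1024):
    """Horizontal-to-vertical spectral ratio for a single window.

    Amplitude spectra from a Hann-tapered FFT; the two horizontal
    components are combined as a quadratic mean. Real workflows average
    many windows and apply dedicated spectral smoothing.
    """
    win = np.hanning(nfft)
    amp = lambda x: np.abs(np.fft.rfft(x[:nfft] * win))
    h = np.sqrt(0.5 * (amp(north) ** 2 + amp(east) ** 2))
    v = amp(vertical)
    hv = np.convolve(h / v, np.ones(5) / 5, mode="same")   # crude smoothing
    return np.fft.rfftfreq(nfft, d=1.0 / fs), hv

# Synthetic example: horizontals carry an extra 2 Hz resonance over noise.
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(1024) / fs
z = rng.normal(size=t.size)
resonance = 20.0 * np.sin(2 * np.pi * 2.0 * t)
n = rng.normal(size=t.size) + resonance
e = rng.normal(size=t.size) + resonance
freqs, hv = hv_ratio(n, e, z, fs)
f0 = freqs[np.argmax(hv[1:]) + 1]   # skip the DC bin; peak near 2 Hz
```

The peak of the smoothed ratio recovers the resonance frequency of the synthetic site, which is the quantity the fundamental-frequency mapping relies on.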
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake hazard. Macroseismic data for the 1985 Valparaiso and the 2010 Maule events show large variations in the distribution of damage to buildings within short distances, indicating strong influence of local sediments and the shape of the sediment-bedrock interface on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from earthquake data and ambient noise. The analysis of earthquake data shows significant dependence on the local geological structure with regard to amplitude and duration. Moreover, the analysis of noise spectral ratios shows that they can provide a lower bound in amplitude for site amplification and, since no variability in terms of time and amplitude is observed, that it is possible to map the fundamental resonance frequency of the soil for a 26 km x 12 km area in the northern part of the Santiago de Chile basin. By inverting the noise spectral ratios, local shear wave velocity profiles could be derived under the constraint of the thickness of the sedimentary cover, which had previously been determined by gravimetric measurements. The resulting 3D model was derived by interpolation between the single shear wave velocity profiles and shows locally good agreement with the few existing velocity profile data, but allows the entire area, as well as deeper parts of the basin, to be represented in greater detail. The wealth of available data further allowed checking whether any correlation between the shear wave velocity in the uppermost 30 m (vs30) and the slope of topography, a technique recently proposed by Wald and Allen (2007), exists on a local scale.
Although one lithology shows considerable scatter in the velocity values for the investigated area, almost no correlation exists between topographic gradient and calculated vs30, whereas a better link is found between vs30 and the local geology. When comparing the vs30 distribution with the MSK intensities for the 1985 Valparaiso event, it becomes clear that high intensities are found where the expected vs30 values are low and the sedimentary cover is thick. Although this evidence cannot be generalized for all possible earthquakes, it indicates the influence of site effects modifying the ground motion when earthquakes occur well outside of the Santiago basin. Using the attained knowledge of the basin characteristics, simulations of strong ground motion within the Santiago Metropolitan area were carried out by means of the spectral element technique. The simulation of a regional event, which was also recorded by a dense network installed in the city of Santiago for recording aftershock activity following the 27 February 2010 Maule earthquake, shows that the model is capable of realistically reproducing ground motion in terms of amplitude, duration, and frequency and, moreover, that the surface topography and the shape of the sediment-bedrock interface strongly modify ground motion in the Santiago basin. An examination of the dependence of ground motion on hypocenter location for a hypothetical event occurring along the active San Ramón fault, which crosses the eastern outskirts of the city, shows that the unfavorable interaction between fault rupture, radiation mechanism, and complex geological conditions in the near-field may give rise to large values of peak ground velocity and therefore considerably increase the level of seismic risk for Santiago de Chile.
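The vs30 quantity discussed above is the time-averaged shear-wave velocity of the uppermost 30 m, defined as 30 m divided by the vertical travel time through the top 30 m. A minimal sketch with an illustrative (hypothetical) layered profile:

```python
def vs30(thickness_m, vs_m_s):
    """Time-averaged shear-wave velocity of the uppermost 30 m.

    vs30 = 30 / sum(h_i / v_i) over the layers within the top 30 m;
    the layer containing the 30 m mark is truncated accordingly.
    """
    depth = 0.0
    travel_time = 0.0
    for h, v in zip(thickness_m, vs_m_s):
        h_used = min(h, 30.0 - depth)      # clip the layer at 30 m depth
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Illustrative profile (thicknesses in m, shear-wave velocities in m/s):
v = vs30([5.0, 10.0, 25.0], [180.0, 350.0, 600.0])   # ≈ 369 m/s
```

Note that vs30 is a harmonic-type average: the slow surface layers dominate the travel time, which is why thick soft sediments pull vs30 down sharply.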
The closed-chamber method is the most common approach to determine CH4 fluxes in peatlands. The concentration change in the chamber is monitored over time, and the flux is usually calculated from the slope of a linear regression function. Theoretically, the gas exchange cannot be constant over time but has to decrease as the concentration gradient between chamber headspace and soil air decreases. In this study, we test whether we can detect this non-linearity in the concentration change during the chamber closure with six air samples. We generally expect a low concentration gradient on dry sites (hummocks) and thus the occurrence of exponential concentration changes in the chamber due to a quick equilibration of gas concentrations between peat and chamber headspace. On wet (flarks) and sedge-covered sites (lawns), we expect a high gradient and near-linear concentration changes in the chamber. To evaluate these model assumptions, we calculate both linear and exponential regressions for a test data set (n = 597) from a Finnish mire. We use the Akaike Information Criterion with small-sample second-order bias correction to select the best-fitted model. 13.6%, 19.2% and 9.8% of measurements on hummocks, lawns and flarks, respectively, were best fitted with an exponential regression model. A flux estimate derived from the slope of the exponential function at the beginning of the chamber closure can be significantly higher than one using the slope of the linear regression function. Non-linear concentration-over-time curves occurred mostly during periods of changing water table. This could be due to either natural processes or chamber artefacts, e.g. initial pressure fluctuations during chamber deployment. To be able to exclude either natural processes or artefacts as the cause of non-linearity, further information, e.g. CH4 concentration profile measurements in the peat, would be needed. If this is not available, the range of uncertainty can be substantial.
We suggest using the range between the slopes of the exponential regression at the beginning and at the end of the closure time as an estimate of the overall uncertainty.
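The model comparison described above can be sketched as follows. This is a hedged illustration, not the study's code: the six-sample closure data are synthetic, a 1-D grid search over the rate constant stands in for a nonlinear optimizer, and the AICc parameter counts here cover only the mean-model coefficients (conventions differ on whether the residual variance is also counted):

```python
import numpy as np

def aicc(rss, n, k):
    # Akaike Information Criterion with small-sample (second-order) correction.
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def fit_linear(t, c):
    A = np.vstack([t, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, c, rcond=None)
    rss = float(np.sum((A @ coef - c) ** 2))
    return coef[0], rss                          # slope, residual sum of squares

def fit_exponential(t, c, k_grid=np.logspace(-3, 0, 400)):
    # c(t) = a + b*exp(-k*t) is linear in (a, b) for fixed k, so a 1-D
    # grid search over k replaces a nonlinear optimizer in this sketch.
    best = None
    for k in k_grid:
        A = np.vstack([np.ones_like(t), np.exp(-k * t)]).T
        coef, *_ = np.linalg.lstsq(A, c, rcond=None)
        rss = float(np.sum((A @ coef - c) ** 2))
        if best is None or rss < best[3]:
            best = (coef[0], coef[1], k, rss)
    a, b, k, rss = best
    slope_start = -b * k                         # dc/dt at chamber closure
    slope_end = -b * k * np.exp(-k * t[-1])      # dc/dt at end of closure
    return slope_start, slope_end, rss

# Hypothetical closure: six samples over 30 min, saturating CH4 increase.
t = np.array([0.0, 6.0, 12.0, 18.0, 24.0, 30.0])   # minutes
c = 2.0 + 1.5 * (1.0 - np.exp(-0.08 * t))          # ppm, synthetic

s_lin, rss_lin = fit_linear(t, c)
s_start, s_end, rss_exp = fit_exponential(t, c)
# With curved data the exponential model wins despite its extra parameter:
exp_preferred = aicc(rss_exp, t.size, 3) < aicc(rss_lin, t.size, 2)
```

Here `s_start` exceeds both `s_lin` and `s_end`, and the interval [`s_end`, `s_start`] is the kind of slope range the abstract proposes as an overall uncertainty estimate.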
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is known as a compound flood event. Therefore, we aimed to identify statistical similarities between loss-driving factors across flood types and test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys sought to investigate several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the levels of most of these features differed across flood-type subsamples (e.g., degree of preparedness), they did so in an irregular pattern. A variable selection process indicates that, besides hazard and building characteristics, information on property-level preparedness was also selected as a relevant predictor of the loss ratio. These variables represent information that is rarely adopted in loss modeling. Models should be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will finally clarify the conditions that allow us to transfer loss models in space and time.
Key Points
- Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model
- Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio
- Flood-type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability
Risk-based insurance is a commonly proposed and discussed flood risk adaptation mechanism in policy debates across the world, such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required because the affordability of adaptation strategies is an important concern for policymakers, yet such a concept is rarely examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows a previously normative concept to be quantified in monetary terms, rendering it more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, this paper compares the outcomes generated by three different definitions of unaffordability in order to identify the most robust definition. In doing so, the residual income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaptation measures in order to develop a common metric for indicating the potential unaffordability problem.
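The residual-income idea alluded to above can be sketched in a few lines: a premium is flagged as unaffordable when the income left after essential expenditure cannot cover it, in contrast to an expenditure-share rule whose verdict flips as its threshold moves. All figures and the 1%/5% thresholds below are illustrative assumptions, not values from the paper:

```python
def unaffordable_residual_income(income, essential_expenditure, premium):
    """True if the premium exceeds the household's residual income."""
    return premium > income - essential_expenditure

def unaffordable_share(income, premium, max_share=0.01):
    """True if the premium exceeds a fixed share of gross income."""
    return premium > max_share * income

# Hypothetical household: 24,000 income, 21,500 essential expenditure.
income, essentials, premium = 24_000.0, 21_500.0, 600.0

by_residual = unaffordable_residual_income(income, essentials, premium)
# False: the 600 premium fits within the 2,500 residual income.
by_share_1pct = unaffordable_share(income, premium)         # True:  600 > 240
by_share_5pct = unaffordable_share(income, premium, 0.05)   # False: 600 < 1,200
```

The share rule's verdict reverses between the two thresholds while the residual-income test does not, which mirrors the paper's finding that the residual income definition is the least threshold-sensitive.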
The importance of cultural ecosystem services in agricultural landscapes is increasingly recognized as agricultural scale enlargement and abandonment affect the aesthetic and recreational values of agricultural landscapes. Landscape preference studies addressing these types of values often yield context-specific outcomes, limiting the applicability of their outcomes in landscape policy. Our approach measures the relative importance of landscape features across agricultural landscapes. This approach was applied among visitors in the agricultural landscapes of Winterswijk, The Netherlands (n=191) and the Märkische Schweiz, Germany (n=113). We set up a parallel-designed choice experiment, using regionally specific, photorealistic visualizations of four comparable landscape attributes. In the Dutch landscape, visitors highly value hedgerows and tree lines, whereas groups of trees and crop diversity are highly valued in the German landscape. Furthermore, we find that differences in relative preference for landscape attributes are, to some extent, explained by socio-cultural background variables of the visitors, such as education level and affinity with agriculture. This approach contributes to a better understanding of the cross-regional variation of aesthetic and recreational values and how these values relate to characteristics of the agricultural landscape, which could support the integration of cultural services in landscape policy.
In this paper, we analyse the effectiveness of flood management measures based on the concept known as "retaining water in the landscape". The investigated measures include afforestation, micro-ponds and small reservoirs. A comparative, model-based methodological approach has been developed and applied for three meso-scale catchments located in different European hydro-climatological regions: Poyo (184 km²) in the Spanish Mediterranean, Upper Iller (954 km²) in the German Alps and Kamp (621 km²) in Northeast Austria, representing the Continental hydro-climate. This comparative analysis has found general similarities in spite of the particular differences among the studied areas. In general terms, the flood reduction achieved through the concept of "retaining water in the landscape" depends on the following factors: the increase in storage capacity in the catchment resulting from such measures, the characteristics of the rainfall event, the antecedent soil moisture condition and the spatial distribution of such flood management measures in the catchment. Overall, our study has shown that this concept is effective for small and medium events but almost negligible for the largest and least frequent floods; this holds true across the different hydro-climatic regions and the different land-use, soil and morphological settings.