Institut für Geowissenschaften — Bibliography, 2020
202 English-language documents: 141 articles, 33 postprints, 23 doctoral theses, 2 monographs/edited volumes, 2 other, 1 review.
Top keywords: climate change (7), remote sensing (6), model (5), modelling (5), models (5), Andes (4), Chinese loess (4), climate-change (4), precipitation (4), tectonics (4)
Ground-penetrating radar (GPR) is an established geophysical tool to explore a wide range of near-surface environments. Today, the use of synthetic GPR data is largely limited to 2D because 3D modeling is computationally more expensive. In fact, only recent developments in modeling tools and powerful hardware allow for time-efficient computation of extensive 3D data sets. Thus, 3D subsurface models and the resulting GPR data sets, which are of great interest for developing and evaluating novel approaches in data analysis and interpretation, have not been made publicly available up to now.

We use a published hydrofacies data set of an aquifer-analog study within fluvio-glacial deposits to infer a realistic 3D porosity model showing heterogeneities at multiple spatial scales. Assuming fresh-water-saturated sediments, we generate synthetic 3D GPR data across this model using the novel GPU acceleration included in the open-source software gprMax. We present a numerical approach to examine 3D wave-propagation effects in modeled GPR data. Using the results of this examination, we conduct a spatial model decomposition to enable a computationally efficient 3D simulation of a typical GPR reflection data set across the entire model surface. We process the resulting GPR data set using a standard 3D structural imaging sequence and compare the results to selected input data to demonstrate the feasibility and potential of the presented modeling studies. We conclude by outlining conceivable applications of our 3D GPR reflection data set and the underlying porosity model, which are both publicly available and can thus support future methodological developments in GPR and other near-surface geophysical techniques.
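The porosity-to-permittivity step described above can be sketched with a standard petrophysical mixing rule. The snippet below uses the CRIM (complex refractive index method) formula for fully water-saturated sediments; the matrix permittivity (quartz-like, 4.6) and water permittivity (80) are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def crim_permittivity(porosity, eps_matrix=4.6, eps_water=80.0):
    """Effective relative permittivity of a fully water-saturated sediment
    via the CRIM mixing rule (volume-weighted refractive indices)."""
    return ((1.0 - porosity) * np.sqrt(eps_matrix)
            + porosity * np.sqrt(eps_water)) ** 2

# porosity values, e.g. sampled from a 3D hydrofacies-based porosity model
phi = np.array([0.20, 0.30, 0.40])
eps = crim_permittivity(phi)

# GPR wave velocity follows as v = c / sqrt(eps), with c ~ 0.3 m/ns
v = 0.3 / np.sqrt(eps)  # m/ns
```

Higher porosity means more water and therefore a higher effective permittivity and a lower GPR velocity, which is what produces reflections at hydrofacies boundaries.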
40Ar/39Ar step-heating of mica and amphibole megacrysts from hauyne-bearing olivine melilitite scoria/tephra from the Železná hůrka yielded a 435 ± 108 ka isotope correlation age for the phlogopite and a more imprecise 1.55 Ma total gas age for the kaersutite megacryst. The amphibole megacrysts may constitute the first, and the younger phlogopite megacrysts the later, phase of mafic, hydrous melilitic magma crystallization. It cannot be ruled out that the amphibole megacrysts are petrogenetically unrelated to the tephra and phlogopite megacrysts and were derived from mantle xenoliths or disaggregated older, deep crustal pegmatites. This is in line both with the rarity of amphibole at Železná hůrka and with the observed signs of magmatic resorption at the edges of the amphibole crystals.
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is known as a compound flood event. Therefore, we aimed to identify statistical similarities between loss-driving factors across flood types and to test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys investigated several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the level of most of these features differed across flood type subsamples (e.g., the degree of preparedness), they did so in an irregular pattern. A variable selection process indicates that, besides hazard and building characteristics, property-level preparedness is also a relevant predictor of the loss ratio. These variables represent information that is rarely adopted in loss modeling. Models should be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will finally clarify conditions that allow us to transfer loss models in space and time.
Key Points
- Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model
- Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio
- Flood type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability
Compound natural hazards like El Niño events cause high damage to society, and managing them requires reliable risk assessments. Damage modelling is a prerequisite for quantitative risk estimation, yet many procedures still rely on expert knowledge, and empirical studies investigating damage from compound natural hazards hardly exist. A nationwide building survey in Peru after the El Niño event of 2017 - which caused intense rainfall, ponding water, flash floods and landslides - enables us to apply data-mining methods for statistical groundwork, using explanatory features generated from remote sensing products and open data. We separate regions of different dominant characteristics through unsupervised clustering and investigate feature importance rankings for classifying damage via supervised machine learning. Besides the expected effect of precipitation, the classification algorithms select the topographic wetness index as the most important feature, especially in low-elevation areas. The slope length and steepness factor ranks high for mountains and canyons. Partial dependence plots further hint at amplified vulnerability in rural areas. An example of an empirical damage probability map, developed with a random forest model, is provided to demonstrate the technical feasibility.
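The feature-importance ranking step can be illustrated with a minimal permutation-importance sketch on synthetic data. A plain least-squares classifier stands in for the random forest used in the study, and the two features (a wetness-like driver and pure noise) are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "damage" data: feature 0 (think topographic wetness index)
# drives the binary damage label, feature 1 is pure noise
n = 500
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)

# least-squares linear classifier as a simple stand-in for a random forest
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(n)], y, rcond=None)
predict = lambda A: (np.c_[A, np.ones(len(A))] @ w > 0.5).astype(float)
base_acc = (predict(X) == y).mean()

# permutation importance: accuracy drop when one feature column is shuffled
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base_acc - (predict(Xp) == y).mean())
```

Shuffling the informative feature destroys most of the classifier's skill, while shuffling the noise feature barely changes the accuracy, so the importance ranking recovers the true driver.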
A ground motion logic tree for seismic hazard analysis in the stable cratonic region of Europe
(2020)
Regions of low seismicity present a particular challenge for probabilistic seismic hazard analysis when identifying suitable ground motion models (GMMs) and quantifying their epistemic uncertainty. The 2020 European Seismic Hazard Model adopts a scaled backbone approach to characterise this uncertainty for shallow seismicity in Europe, incorporating region-to-region source and attenuation variability based on European strong motion data. This approach, however, may not be suited to the stable cratonic region of northeastern Europe (encompassing Finland, Sweden and the Baltic countries), where exploration of various global geophysical datasets reveals that its crustal properties are distinctly different from those of the rest of Europe and are instead more closely represented by those of the Central and Eastern United States. Building upon the suite of models developed by the recent NGA East project, we construct a new scaled backbone ground motion model and calibrate its corresponding epistemic uncertainties. The resulting logic tree is shown to provide hazard outcomes comparable to the epistemic uncertainty modelling strategy adopted for the Eastern United States, despite the different approaches taken. Comparison with previous GMM selections for northeastern Europe, however, highlights key differences in short-period accelerations resulting from new assumptions regarding the characteristics of the reference rock and its influence on site amplification.
The steady increase of ground-motion data not only opens new possibilities but also brings new challenges for the development of ground-motion models (GMMs). Data classification techniques (e.g., cluster analysis) can produce not only deterministic but also probabilistic classifications (e.g., probabilities for each datum to belong to a given class or cluster). One challenge is the integration of such continuous classifications into regressions for GMM development, such as the widely used mixed-effects model. We address this issue by introducing an extension of the mixed-effects model that incorporates data weighting. The parameter estimation of the mixed-effects model, that is, of the fixed-effects coefficients of the GMM and the random-effects variances, is based on the weighted likelihood function, which also provides analytic uncertainty estimates. The data weighting permits earthquake classification beyond the classical, expert-driven, binary classification based, for example, on event depth, distance to trench, style of faulting, and fault dip angle. We apply Angular Classification with Expectation-maximization, an algorithm that identifies clusters of nodal planes from focal mechanisms, to differentiate between, for example, interface- and intraslab-type events. The classification is continuous, that is, no event belongs completely to one class, which is taken into account in the ground-motion modeling. The theoretical framework described in this article allows for a fully automatic calibration of ground-motion models using large databases with automated classification and processing of earthquake and ground-motion data. As an example, we developed a GMM on the basis of the GMM by Montalva et al. (2017) with data from the strong-motion flat file of Bastias and Montalva (2016), comprising ~2400 records from 319 events in the Chilean subduction zone. Our GMM with the data-driven classification is comparable to the expert-classification-based model. Furthermore, the model shows temporal variations of the between-event residuals before and after large earthquakes in the region.
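The core idea of data weighting can be sketched as a weighted least-squares fit in which each record enters the likelihood with its continuous class-membership probability instead of a hard 0/1 label. This is a toy single-level sketch on synthetic data, not the article's full mixed-effects formulation (no random effects, no real ground-motion records).

```python
import numpy as np

rng = np.random.default_rng(1)

# toy ground-motion records: an amplitude-like response y decaying with
# a distance-like predictor x; p_interface is a continuous (probabilistic)
# class membership, e.g. from a nodal-plane clustering algorithm
n = 200
x = rng.uniform(0, 3, n)
p_interface = rng.uniform(0, 1, n)
y = 2.0 - 1.0 * x + 0.5 * p_interface + 0.05 * rng.normal(size=n)

# weighted least squares: maximizing the weighted Gaussian log-likelihood
# is equivalent to solving the weighted normal equations below
X = np.c_[np.ones(n), x]
W = np.diag(p_interface)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # [intercept, slope]
```

Records with a high interface probability dominate the fit, so the recovered slope matches the true decay of the interface-class records without ever forcing a binary classification.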
Residential assets, comprising buildings and household contents, are a major source of direct flood losses. Existing damage models are mostly deterministic and limited to particular countries or flood types. Here, we compile building-level losses from Germany, Italy and the Netherlands covering a wide range of fluvial and pluvial flood events. Utilizing a Bayesian network (BN) for continuous variables, we find that relative losses (i.e. loss relative to exposure) to the building structure and its contents can be estimated with five variables: water depth, flow velocity, event return period, building usable floor space area and regional disposable income per capita. The model's ability to predict flood losses is validated for the 11 flood events contained in the sample. Predictions for the German and Italian fluvial floods were better than those for pluvial floods or the 1993 Meuse river flood. Further, a case study of a 2010 coastal flood in France is used to test the BN model's performance for a type of flood not included in the survey dataset. Overall, the BN model achieved better results than any of 10 alternative damage models in reproducing average losses for the 2010 flood. An additional case study of a 2013 fluvial flood has also shown good performance of the model. The study shows that data from many flood events can be combined to derive the most important factors driving flood losses across regions and time, and that the resulting damage models can be applied in an open data framework.
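One minimal way to mimic a continuous-variable Bayesian network prediction is a linear-Gaussian conditional expectation: fit a joint Gaussian to the data and condition the loss node on the observed predictors. The sketch below uses only two synthetic predictors (water depth and flow velocity) rather than the five variables identified in the study, and the coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy joint sample: relative loss driven by water depth [m] and
# flow velocity [m/s], with small Gaussian noise
n = 1000
depth = rng.gamma(2.0, 0.5, n)
velocity = rng.gamma(1.5, 0.4, n)
loss = 0.1 * depth + 0.05 * velocity + 0.02 * rng.normal(size=n)

data = np.c_[loss, depth, velocity]
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

def conditional_loss(x):
    """E[loss | depth, velocity] under a joint-Gaussian
    (linear-Gaussian Bayesian network) model."""
    s12 = cov[0, 1:]          # cross-covariance loss vs predictors
    s22 = cov[1:, 1:]         # predictor covariance
    return mu[0] + s12 @ np.linalg.solve(s22, x - mu[1:])

pred = conditional_loss(np.array([2.0, 1.0]))  # 2 m depth, 1 m/s velocity
```

Because the synthetic relationship is linear, the conditional mean recovers it; a full BN would additionally return a predictive distribution rather than a point estimate.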
We study the source properties of the 2005 Kashmir earthquake and its aftershocks to unravel the seismotectonics of the NW Himalayan syntaxis. The mainshock and larger aftershocks have been simultaneously relocated using phase data. We use back-projection of high-frequency energy from multiple teleseismic arrays to model the spatio-temporal evolution of the mainshock rupture. Our analysis reveals a bilateral rupture, which initially propagated SE and then NW of the epicenter, with an average rupture velocity of ~2 km/s. The area of maximum energy release is parallel to and bounded by the surface rupture. Incorporating rupture propagation and velocity, we model the mainshock as a line source using P- and SH-waveform inversion. Our result confirms that the mainshock occurred on a NE-dipping (~35°) fault plane, with a centroid depth of ~10 km. The integrated source time function shows that the majority of the energy was released in the first ~20 s and was confined above the hypocenter. From the waveform-inverted fault dimension and seismic moment, we argue that the mainshock had an additional ~25 km blind rupture beyond the NW Himalayan syntaxis. Combining this with findings from previous studies, we conjecture that the blind rupture propagated NW of the syntaxis underneath a weak detachment overlain by an infra-Cambrian salt layer and terminated in a wedge thrust. All moderate-to-large aftershocks NW of the mainshock rupture are concentrated at the edge of the blind rupture termination. Source modeling of these aftershocks reveals thrust mechanisms with centroid depths of 2-10 km and fault planes oriented subparallel to the mainshock rupture. To study the influence of the mainshock rupture on aftershock occurrence, we compute the Coulomb failure stress on the aftershock faults. All these aftershocks lie in the region of positive Coulomb stress change. This suggests that the aftershocks were triggered by either co-seismic or post-seismic slip on the mainshock fault.
Contrary to what is commonly assumed in seismology, the phase velocity of Rayleigh waves is not necessarily a single-valued function of frequency. In fact, a single Rayleigh mode can exist with three different values of phase velocity at one frequency. We demonstrate this for the first higher mode on a realistic shallow seismic structure of a homogeneous layer of unconsolidated sediments on top of a half-space of solid rock (LOH). In the case of LOH, a significant contrast to the half-space is required to produce the phenomenon. In a simpler structure of a homogeneous layer with a fixed (rigid) bottom (LFB), the phenomenon exists for values of Poisson's ratio between 0.19 and 0.5 and is most pronounced when the P-wave velocity is three times the S-wave velocity (Poisson's ratio of 0.4375). A pavement-like structure (PAV) of two layers on top of a half-space produces the multivaluedness for the fundamental mode. Programs for the computation of synthetic dispersion curves are prone to trouble in such cases. Many of them use mode-follower algorithms, which lose track of the dispersion curve and miss the multivalued section. We show results for well-established programs. Their inability to properly handle these cases might be one reason why the phenomenon of multivaluedness went unnoticed in seismological Rayleigh-wave research for so long. For the very same reason, methods of dispersion analysis must fail if they imply that the wavenumber k_l(ω) of the l-th Rayleigh mode is a single-valued function of frequency. This applies in particular to deconvolution methods like phase-matched filters. We demonstrate that a slant-stack analysis fails in the multivalued section, while a Fourier-Bessel transformation captures the complete Rayleigh-wave signal. Waves of finite bandwidth in the multivalued section propagate with positive group velocity and negative phase velocity. Their eigenfunctions appear conventional and contain no conspicuous feature.
Most South Asian countries face challenges in ensuring water, energy, and food (WEF) security, and these sectors often interact positively or negatively. To address these challenges, the nexus approach provides a framework to identify the interactions of the WEF sectors as an integrated system. However, most nexus studies only qualitatively discuss the interactions between these sectors. This study conducts a systematic analysis of the WEF security nexus in South Asia using open data sources at the country scale. We analyze interactions between the WEF sectors statistically, defining positive and negative correlations between the WEF security indicators as synergies and trade-offs, respectively. By creating networks of the synergies and trade-offs, we further identify the most positively and negatively influencing indicators in the WEF security nexus. We observe a larger share of trade-offs than synergies within the water and energy sectors and a larger share of synergies than trade-offs among the WEF sectors for South Asia. However, these observations vary across the South Asian countries. Our analysis highlights that strategies promoting sustainable energy and discouraging fossil fuel use could have overall positive effects on the WEF security nexus in these countries. This study provides evidence for considering the WEF security nexus as an integrated system rather than just a combination of three different sectors or securities.
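The synergy/trade-off classification described above can be sketched as a sign test on pairwise correlations between indicator series. The three series below are synthetic stand-ins for the country-level WEF indicators drawn from open data in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy country-level series of three WEF indicators
n = 30
water = rng.normal(size=n)
energy = 0.8 * water + 0.3 * rng.normal(size=n)   # co-moves with water
food = -0.7 * energy + 0.3 * rng.normal(size=n)   # moves against energy

indicators = {"water": water, "energy": energy, "food": food}
names = list(indicators)

# build the network edges: positive correlation -> synergy,
# negative correlation -> trade-off
edges = {}
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(indicators[a], indicators[b])[0, 1]
        edges[(a, b)] = "synergy" if r > 0 else "trade-off"
```

In practice one would also apply a significance threshold on each correlation before adding an edge, so that weak or spurious relationships do not enter the network.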