We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from one up to ten variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.
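The complexity-versus-transferability comparison described above can be mimicked in a few lines. The following sketch contrasts a depth-only model with a multivariable regression tree under spatial transfer; the data, predictor names, damage function, and region split are all invented for illustration and are not taken from the study:

```python
# Minimal sketch of a spatial-transfer comparison between a depth-only
# stage-damage model and a multivariable regression tree.
# All data and predictors are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 3.0, n),     # water depth (m)
    rng.integers(1, 4, n),        # building quality class (hypothetical)
    rng.uniform(0.0, 72.0, n),    # inundation duration (h) (hypothetical)
])
region = rng.integers(0, 2, n)    # 0 = calibration region, 1 = target region
rel_damage = np.clip(0.2 * X[:, 0] + 0.05 * X[:, 2] / 24
                     + rng.normal(0, 0.05, n), 0, 1)

train, test = region == 0, region == 1   # spatial transfer: fit here, predict there

# Model 1: water depth only (classical stage-damage function, as a shallow tree).
depth_only = DecisionTreeRegressor(max_depth=3).fit(X[train][:, :1], rel_damage[train])
# Model 2: all candidate predictors.
multi = DecisionTreeRegressor(max_depth=5).fit(X[train], rel_damage[train])

print("depth only  :", mean_absolute_error(rel_damage[test],
                                            depth_only.predict(X[test][:, :1])))
print("multivariate:", mean_absolute_error(rel_damage[test],
                                            multi.predict(X[test])))
```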
Response spectra are of fundamental importance in earthquake engineering and represent a standard measure in seismic design for the assessment of structural performance. However, unlike Fourier spectral amplitudes, the relationship of response spectral amplitudes to seismological source, path, and site characteristics is not immediately obvious and might even be considered counterintuitive for high oscillator frequencies. The understanding of this relationship is nevertheless important for seismic-hazard analysis. The purpose of the present study is to comprehensively characterize the variation of response spectral amplitudes due to perturbations of the causative seismological parameters. This is done by calculating the absolute parameter sensitivities (sensitivity coefficients), defined as the partial derivatives of the model output with respect to its input parameters. To derive sensitivities, we apply algorithmic differentiation (AD). This powerful approach is extensively used for sensitivity analysis of complex models in meteorology or aerodynamics. To the best of our knowledge, AD has not yet been explored in the seismic-hazard context. Within the present study, AD was successfully implemented for a proven and extensively applied simulation program for response spectra (Stochastic Method SIMulation [SMSIM]) using the TAPENADE AD tool. We assess the effects and importance of input parameter perturbations on the shape of response spectra for different regional stochastic models in a quantitative way. Additionally, we perform sensitivity analysis regarding adjustment issues of ground-motion prediction equations.
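To make the notion of sensitivity coefficients concrete, the sketch below approximates them with central finite differences on a strongly simplified point-source (omega-squared) Fourier amplitude model. This is a stand-in for, not a reproduction of, SMSIM with algorithmic differentiation; all parameter values are illustrative:

```python
# Sensitivity coefficients (partial derivatives of a spectral model with
# respect to its input parameters), approximated by central finite
# differences on a toy omega-squared spectrum. Not SMSIM, not AD.
import numpy as np

def fourier_amp(f, params):
    fc, kappa, q, r = params["fc"], params["kappa"], params["q"], params["r"]
    beta = 3.5                                              # shear velocity (km/s)
    source = (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)   # omega-squared source
    site = np.exp(-np.pi * kappa * f)                       # kappa site attenuation
    path = np.exp(-np.pi * f * r / (q * beta)) / r          # anelastic + geometric
    return source * site * path

def sensitivity(f, params, name):
    h = 1e-6 * max(abs(params[name]), 1.0)                  # parameter-scaled step
    up, dn = dict(params), dict(params)
    up[name] += h
    dn[name] -= h
    return (fourier_amp(f, up) - fourier_amp(f, dn)) / (2 * h)

f = np.logspace(-1, 1.5, 50)
p = {"fc": 1.0, "kappa": 0.04, "q": 600.0, "r": 30.0}       # illustrative values
for name in p:
    s = sensitivity(f, p, name)
    print(f"d A / d {name}: max |s| = {np.abs(s).max():.3g}")
```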
Modern natural hazards research requires dealing with several uncertainties that arise from limited process knowledge, measurement errors, censored and incomplete observations, and the intrinsic randomness of the governing processes. Nevertheless, deterministic analyses are still widely used in quantitative hazard assessments despite the pitfall of misestimating the hazard and any ensuing risks.
In this paper we show that Bayesian networks offer a flexible framework for capturing and expressing a broad range of uncertainties encountered in natural hazard assessments. Although Bayesian networks are well studied in theory, their application to real-world data is far from straightforward and requires specific tailoring and adaptation of existing algorithms. We offer suggestions as to how to tackle frequently arising problems in this context, concentrating mainly on the handling of continuous variables, incomplete data sets, and the interaction of both. By way of three case studies from earthquake, flood, and landslide research, we demonstrate the method of data-driven Bayesian network learning, and showcase the flexibility, applicability, and benefits of this approach.
Our results offer fresh and partly counterintuitive insights into well-studied multivariate problems of earthquake-induced ground motion prediction, accurate flood damage quantification, and spatially explicit landslide prediction at the regional scale. In particular, we highlight how Bayesian networks help to express information flow and independence assumptions between candidate predictors. Such knowledge is pivotal in providing scientists and decision makers with well-informed strategies for selecting adequate predictor variables for quantitative natural hazard assessments.
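As a minimal, concrete illustration of data-driven Bayesian network structure learning, the sketch below uses the pgmpy Python library on a synthetic, discretized landslide-style data set; both the package and the variables are assumptions for illustration and are not prescribed by the paper:

```python
# Score-based Bayesian network structure learning with pgmpy (assumed
# library): hill climbing over DAGs guided by a BIC score.
# Data are synthetic placeholders for hazard predictors.
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

rng = np.random.default_rng(1)
n = 1000
rain = rng.integers(0, 3, n)                     # discretized rainfall class
soil = rng.integers(0, 2, n)                     # soil saturation class
slide = ((rain + soil + rng.integers(0, 2, n)) > 2).astype(int)  # landslide yes/no
data = pd.DataFrame({"rain": rain, "soil": soil, "slide": slide})

best = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
print(best.edges())   # most probable dependency structure given the data
```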
Slow Fourier transform
(2013)
In probabilistic seismic-hazard analysis, epistemic uncertainties are commonly treated within a logic-tree framework in which the branch weights express the degree of belief of an expert in a set of models. For the calculation of the distribution of hazard curves, these branch weights represent subjective probabilities. A major challenge for experts is to provide logically consistent weight estimates (in the sense of Kolmogorov's axioms), to be aware of the multitude of heuristics, and to minimize the biases which affect human judgment under uncertainty. We introduce a platform-independent, interactive program enabling us to quantify, elicit, and transfer expert knowledge into a set of subjective probabilities by applying experimental design theory, following the approach of Curtis and Wood (2004). Instead of determining the set of probabilities for all models in a single step, the computer-driven elicitation process is performed as a sequence of evaluations of relative weights for small subsets of models. From these, the probabilities for the whole model set are determined as a solution of an optimization problem. The result of this process is a set of logically consistent probabilities together with a measure of confidence determined from the amount of conflicting information which is provided by the expert during the relative weighting process. We experiment with different scenarios simulating likely expert behaviors in the context of knowledge elicitation and show the impact this has on the results. The overall aim is to provide a smart elicitation technique, and our findings serve as a guide for practical applications.
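The core idea, deriving one consistent probability vector from relative weights given for small model subsets, can be sketched as a least-squares problem on log-weights. This is a simplified stand-in, not the actual procedure of Curtis and Wood (2004); the pairwise ratios are invented:

```python
# Reconstruct consistent model probabilities from (possibly conflicting)
# pairwise relative weights via least squares; the residual norm serves
# as a crude measure of the expert's inconsistency.
import numpy as np

pairs = [(0, 1, 2.0), (1, 2, 1.5), (0, 2, 4.0)]   # (i, j, stated w_i / w_j)
n_models = 3

A = np.zeros((len(pairs) + 1, n_models))
b = np.zeros(len(pairs) + 1)
for k, (i, j, ratio) in enumerate(pairs):
    A[k, i], A[k, j], b[k] = 1.0, -1.0, np.log(ratio)   # log w_i - log w_j = log ratio
A[-1, :] = 1.0                                          # gauge: sum of log-weights = 0

log_w, residual, *_ = np.linalg.lstsq(A, b, rcond=None)
w = np.exp(log_w)
probs = w / w.sum()                                     # logically consistent probabilities
print("probabilities:", probs)
print("inconsistency:", float(np.sqrt(residual[0])) if residual.size else 0.0)
```

Note that the three stated ratios are mutually inconsistent (2.0 × 1.5 ≠ 4.0), so the residual is nonzero by construction.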
The most recent intense earthquake swarm in West Bohemia lasted from 6 October 2008 to January 2009. Starting 12 days after the onset, the University of Potsdam monitored the swarm with a temporary small-aperture seismic array at 10 km epicentral distance. The purpose of the installation was a complete monitoring of the swarm, including micro-earthquakes (ML < 0). We identify earthquakes using a conventional short-term average/long-term average trigger combined with sliding-window frequency-wavenumber and polarisation analyses. The resulting earthquake catalogue consists of 14,530 earthquakes between 19 October 2008 and 18 March 2009 with magnitudes in the range −1.2 ≤ ML ≤ 2.7. The small-aperture seismic array substantially lowers the detection threshold to about Mc = −0.4, compared to the regional networks operating in West Bohemia (Mc > 0.0). In the course of this work, the main temporal features (frequency-magnitude distribution, propagation of back azimuth and horizontal slowness, occurrence rate of aftershock sequences, and interevent-time distribution) of the recent 2008/2009 earthquake swarm are presented and discussed. Temporal changes of the coefficient of variation (based on interevent times) suggest that the earthquake activity of the 2008/2009 swarm terminated by 12 January 2009. During the main phase of the studied swarm period after 19 October, the b value of the Gutenberg-Richter relation decreases from 1.2 to 0.8. This trend is also reflected in the power-law behaviour of the seismic moment release. The corresponding total seismic moment release of 1.02 × 10^17 Nm is equivalent to ML,max = 5.4.
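Two of the catalogue statistics discussed above are straightforward to compute. The sketch below evaluates the Gutenberg-Richter b value with the maximum-likelihood estimator of Aki (1965) and the coefficient of variation of interevent times on a synthetic catalogue (the data are invented; the paper's processing chain is not reproduced):

```python
# b value (maximum-likelihood, binning correction omitted) and coefficient
# of variation of interevent times, on a synthetic catalogue.
import numpy as np

rng = np.random.default_rng(2)
mc = -0.4                                         # magnitude of completeness
mags = mc + rng.exponential(scale=1 / (1.0 * np.log(10)), size=5000)  # true b = 1.0
times = np.sort(rng.uniform(0.0, 150.0, mags.size))                   # event times (days)

b = np.log10(np.e) / (mags.mean() - mc)           # Aki (1965) estimator
dt = np.diff(times)
cv = dt.std() / dt.mean()                         # CV ~ 1 for Poissonian occurrence

print(f"b value ~ {b:.2f}, coefficient of variation ~ {cv:.2f}")
```

A CV well above 1 indicates temporal clustering (swarm-like behaviour), which is why its decay toward 1 can be used to delimit the end of swarm activity.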
Large research initiatives such as the Global Earthquake Model (GEM) or the Seismic HAzard haRmonization in Europe (SHARE) projects concentrate a great collaborative effort on defining a global standard for seismic hazard estimations. In this context, there is an increasing need for identifying ground-motion prediction equations (GMPEs) that can be applied at both global and regional scale. With increasing amounts of strong-motion records now available worldwide, observational data can provide a valuable resource to tackle this question. Using the global dataset of Allen and Wald (2009), we evaluate the ability of 11 GMPEs to predict ground motion in different active shallow crustal regions worldwide. Adopting the approach of Scherbaum et al. (2009), we rank these GMPEs according to their likelihood of having generated the data. In particular, we estimate how strongly data support or reject the models with respect to the state of noninformativeness defined by a uniform weighting. Such rankings derived from this particular global dataset enable us to explore the potential of GMPEs to predict ground motions in their host region and also in other regions, depending on the magnitude and distance considered. In the ranking process, we particularly focus on the influence of the distribution of the testing dataset compared with the GMPE's native dataset. One of the results of this study is that some nonindigenous models present a high degree of consistency with the data from a target region. Two models in particular demonstrated wide geographic applicability with respect to the testing dataset: the models of Akkar and Bommer (2010) and Chiou et al. (2010).
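The information-theoretic ranking idea can be illustrated compactly: each model's average negative log2-likelihood (LLH) of the observations yields data-driven weights, which can be compared against the uniform (noninformative) weighting. The sketch below follows the spirit of Scherbaum et al. (2009) on invented models and data:

```python
# LLH-based GMPE ranking sketch: models and observations are synthetic
# placeholders, not the Allen and Wald (2009) dataset.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
obs = rng.normal(0.1, 0.7, size=200)              # observed log ground motions (toy)

# Each "GMPE" predicts a median and an aleatory sigma in log space.
models = {"A": (0.0, 0.7), "B": (0.3, 0.6), "C": (0.0, 1.2)}

llh = {name: -np.mean(norm.logpdf(obs, mu, sig)) / np.log(2)   # bits per sample
       for name, (mu, sig) in models.items()}
support = np.array([2.0 ** -llh[m] for m in models])
weights = support / support.sum()                 # data-driven model weights

uniform = 1.0 / len(models)
for name, w in zip(models, weights):
    verdict = "supported" if w > uniform else "rejected"
    print(f"{name}: LLH = {llh[name]:.2f}, weight = {w:.2f} "
          f"({verdict} relative to uniform {uniform:.2f})")
```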
Tsunami early warning (TEW) is a challenging task, as a decision has to be made within a few minutes on the basis of incomplete and error-prone data. Deterministic warning systems have difficulties in integrating and quantifying the intrinsic uncertainties. In contrast, probabilistic approaches provide a framework that handles uncertainties in a natural way. Recently, we have proposed a method using Bayesian networks (BNs) that takes into account the uncertainties of seismic source parameter estimates in TEW. In this follow-up study, the method is applied to 10 recent large earthquakes offshore Sumatra and tested for its performance. We have evaluated both the general model performance given the best knowledge we have today about the source parameters of the 10 events and the corresponding response to seismic source information evaluated in real time. We find that the resulting site-specific warning-level probabilities represent the available tsunami wave measurements and observations well. Difficulties occur in the real-time tsunami assessment if the moment magnitude is severely over- or underestimated. In general, the probabilistic analysis reveals a considerable range of uncertainties in near-field TEW. By quantifying the uncertainties, the BN analysis provides important additional information to a decision maker in a warning centre to deal with the complexity of TEW and to reason under uncertainty.
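The essence of the probabilistic assessment, propagating an uncertain magnitude estimate into warning-level probabilities, reduces to a marginalization over discrete tables. The toy example below is invented for illustration and is far simpler than the BN used in the study:

```python
# Warning-level probabilities at a coastal site obtained by marginalizing
# over an uncertain magnitude estimate. All tables are invented.
import numpy as np

mag_bins = np.array([7.5, 8.0, 8.5, 9.0])
p_mag = np.array([0.15, 0.40, 0.30, 0.15])       # P(true Mw | noisy real-time estimate)

# P(warning level | magnitude) for levels (none, advisory, warning, major).
p_level_given_mag = np.array([
    [0.60, 0.30, 0.09, 0.01],
    [0.25, 0.40, 0.25, 0.10],
    [0.05, 0.25, 0.45, 0.25],
    [0.01, 0.09, 0.40, 0.50],
])

p_level = p_mag @ p_level_given_mag              # marginalize magnitude uncertainty
for name, p in zip(["none", "advisory", "warning", "major"], p_level):
    print(f"P({name}) = {p:.2f}")
```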
The Seismic Hazard Harmonization in Europe (SHARE) project, which began in June 2009, aims at establishing new standards for probabilistic seismic hazard assessment in the Euro-Mediterranean region. In this context, a logic tree for ground-motion prediction in Europe has been constructed. Ground-motion prediction equations (GMPEs) and weights have been determined so that the logic tree captures epistemic uncertainty in ground-motion prediction for six different tectonic regimes in Europe. Here we present the strategy that we adopted to build such a logic tree. This strategy is distinctive in that it combines two complementary and independent approaches: expert judgment and data testing. A set of six experts was asked to weight pre-selected GMPEs, while the ability of these GMPEs to predict available data was evaluated with the method of Scherbaum et al. (Bull Seismol Soc Am 99:3234-3247, 2009). Results of both approaches were taken into account to jointly select the smallest set of GMPEs that captures the uncertainty in ground-motion prediction in Europe. For stable continental regions, two models, both from eastern North America, have been selected for shields, and three GMPEs from active shallow crustal regions have been added for continental crust. For subduction zones, four models, all non-European, have been chosen. Finally, for active shallow crustal regions, we selected four models, each from a different host region, but only two of them were kept for long periods. In most cases, common agreement was also reached on the weights. In cases of divergence, a sensitivity analysis of the weights on the seismic hazard was conducted, showing that once the GMPEs have been selected, the associated set of weights has a smaller influence on the hazard.
Magnitude estimation for microseismicity induced during the KTB 2004/2005 injection experiment
(2011)
We determined the magnitudes of 2540 microseismic events measured at one single 3C borehole geophone at the German Deep Drilling Site (known by the German acronym, KTB) during the injection phase 2004/2005. For this task we developed a three-step approach. First, we estimated local magnitudes of 104 larger events with a standard method based on amplitude measurements at near-surface stations. Second, we investigated a series of parameters to characterize the size of these events using the seismograms of the borehole sensor, and we compared them statistically with the local magnitudes. Third, we extrapolated the regression curve to obtain the magnitudes of 2436 events that were only measured at the borehole geophone. This method improved the magnitude of completeness for the KTB data set by more than one order of magnitude, down to M = −2.75. The resulting b-value for all events was 0.78, which is similar to the b-value obtained when taking only the larger events with standard local magnitude estimation from near-surface stations, b = 0.86. The more complete magnitude catalog was required to study the magnitude distribution over time and to characterize the seismotectonic state of the KTB injection site. The event distribution over time was consistent with theoretical predictions assuming pore pressure diffusion as the underlying mechanism triggering the events. The value of −4 we obtained for the seismogenic index suggests that the seismic hazard potential at the KTB site is comparatively low.
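The cross-calibration step of such a three-step approach amounts to a simple regression and extrapolation. The sketch below uses synthetic numbers (the amplitude proxy, noise level, and ranges are invented) to show the mechanics:

```python
# Regress standard local magnitudes of jointly recorded events on a size
# proxy from the borehole sensor, then apply the regression to events
# recorded only downhole. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
ml_ref = rng.uniform(-1.0, 1.5, 104)                     # surface-network ML
log_amp = 1.0 * ml_ref - 3.0 + rng.normal(0, 0.1, 104)   # borehole log-amplitude proxy

coef = np.polyfit(log_amp, ml_ref, deg=1)                # linear regression
log_amp_small = rng.uniform(-5.8, -4.0, 2436)            # borehole-only events
ml_small = np.polyval(coef, log_amp_small)               # extrapolated magnitudes

print(f"ML = {coef[0]:.2f} * log10(A) + {coef[1]:.2f}; "
      f"smallest extrapolated ML ~ {ml_small.min():.2f}")
```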
Bayesian networks are a powerful and increasingly popular tool for reasoning under uncertainty, offering intuitive insight into (probabilistic) data-generating processes. They have been successfully applied to many different fields, including bioinformatics. In this paper, Bayesian networks are used to model the joint probability distribution of selected earthquake, site, and ground-motion parameters. This provides a probabilistic representation of the independencies and dependencies between these variables. In particular, contrary to classical regression, Bayesian networks do not distinguish between target and predictors, treating each variable as a random variable. The capability of Bayesian networks to model the ground-motion domain in probabilistic seismic hazard analysis is shown for a generic situation. A Bayesian network is learned based on a subset of the Next Generation Attenuation (NGA) dataset, using 3342 records from 154 earthquakes. Because no prior assumptions about dependencies between particular parameters are made, the learned network displays the most probable model given the data. The learned network shows that the ground-motion parameter (horizontal peak ground acceleration, PGA) is directly connected only to the moment magnitude, Joyner-Boore distance, fault mechanism, source-to-site azimuth, and depth to a shear-wave velocity horizon of 2.5 km/s (Z2.5). In particular, the effect of VS30 is mediated by Z2.5. Comparisons of the PGA distributions based on the Bayesian networks with the NGA model of Boore and Atkinson (2008) show reasonable agreement in ranges of good data coverage.
One of the key challenges in the context of local site effect studies is the determination of frequencies where the shakeability of the ground is enhanced. In this context, the H/V technique has become increasingly popular, and peak frequencies of the H/V spectral ratio are sometimes interpreted as resonance frequencies of the transmission response. In the present study, assuming that the Rayleigh surface wave dominates the H/V spectral ratio, we analyse theoretically under which conditions this interpretation is justified and when it is not. We focus on 'layer over half-space' models which, although seemingly simple, capture many aspects of local site effects in real sedimentary structures. Our starting point is the ellipticity of Rayleigh waves. We use the exact formula of the H/V ratio presented by Malischewsky & Scherbaum (2004) to investigate the main characteristics of peak and trough frequencies. We present a simple formula indicating whether and where H/V-ratio curves have sharp peaks, depending on the model parameters. In addition, we have constructed a map which demonstrates the relation between the H/V peak frequency and the peak frequency of the transmission response as a function of the layer's Poisson ratio and the impedance contrast. Finally, we have derived maps showing the relationship between the H/V peak and trough frequencies and key parameters of the model, such as the impedance contrast. These maps are seen as diagnostic tools which can help to guide the interpretation of H/V spectral ratio diagrams in the context of site effect studies.
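The reference point for such comparisons is the fundamental resonance of the transmission response of a layer over a half-space, which follows the standard quarter-wavelength rule f0 = Vs / (4h). The sketch below computes f0 and extracts the peak of a (purely invented) H/V curve, illustrating that the two frequencies are close but need not coincide:

```python
# Quarter-wavelength resonance vs. the peak of a measured H/V curve.
# The H/V curve here is an invented stand-in, not the Malischewsky &
# Scherbaum (2004) ellipticity formula.
import numpy as np

vs, h = 300.0, 25.0                 # layer shear velocity (m/s) and thickness (m)
f0 = vs / (4 * h)                   # transmission-response resonance (3 Hz here)

freqs = np.linspace(0.1, 20.0, 500)
hv = np.exp(-0.5 * (np.log(freqs / 3.1) / 0.3) ** 2) + 0.3   # invented H/V curve
f_peak = freqs[np.argmax(hv)]       # H/V peak frequency

print(f"f0 (transmission) = {f0:.2f} Hz, H/V peak = {f_peak:.2f} Hz")
```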
Logic trees have become the most popular tool for the quantification of epistemic uncertainties in probabilistic seismic hazard assessment (PSHA). In a logic-tree framework, epistemic uncertainty is expressed in a set of branch weights, by which an expert or an expert group assigns degree-of-belief values to the applicability of the corresponding branch models. Despite the popularity of logic trees, however, one finds surprisingly few clear commitments as to what logic-tree branch weights are assumed to be (even by hazard analysts designing logic trees). In the present paper we argue that it is important for hazard analysts to accept from the beginning the probabilistic framework for assigning logic-tree branch weights. In other words, to accept that logic-tree branch weights are probabilities in the axiomatic sense, independent of one's preferred philosophical interpretation of probability. We demonstrate that interpreting logic-tree branch weights merely as a numerical measure of "model quality", subsequently normalized to sum to unity, will with an increasing number of models inevitably lead to an apparent insensitivity of hazard curves to the logic-tree branch weights, which may even be mistaken for robustness of the results. Finally, we argue that assigning logic-tree branch weights in a sequential fashion may improve their logical consistency.
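The normalization pitfall is easy to demonstrate numerically: "quality" scores confined to a plausible range, once normalized, crowd toward uniform weights as the number of models grows. The score range below is an arbitrary assumption for illustration:

```python
# Normalized "model quality" scores converge toward uniform weights
# as the number of logic-tree branches increases.
import numpy as np

rng = np.random.default_rng(5)
for n_models in (2, 5, 10, 50):
    quality = rng.uniform(0.5, 1.0, n_models)   # plausible quality scores (assumed range)
    weights = quality / quality.sum()           # normalized "branch weights"
    spread = weights.max() - weights.min()
    print(f"n = {n_models:3d}: max-min weight = {spread:.3f}, "
          f"uniform = {1 / n_models:.3f}")
```

The shrinking spread relative to the uniform weight is exactly the apparent insensitivity of the hazard curves described above.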
Enhancing the resolution and accuracy of surface ground-penetrating radar (GPR) reflection data by inverse filtering to recover a zero-phased band-limited reflectivity image requires a deconvolution technique that takes the mixed-phase character of the embedded wavelet into account. In contrast, standard stochastic deconvolution techniques assume that the wavelet is minimum phase and, hence, often meet with limited success when applied to GPR data. We present a new general-purpose blind deconvolution algorithm for mixed-phase wavelet estimation and deconvolution that (1) uses the parametrization of a mixed-phase wavelet as the convolution of the wavelet's minimum-phase equivalent with a dispersive all-pass filter, (2) includes prior information about the wavelet to be estimated in a Bayesian framework, and (3) relies on the assumption of a sparse reflectivity. Solving the normal equations using the data autocorrelation function provides an inverse filter that optimally removes the minimum-phase equivalent of the wavelet from the data, which leaves traces with a balanced amplitude spectrum but distorted phase. To compensate for the remaining phase errors, we invert in the frequency domain for an all-pass filter thereby taking advantage of the fact that the action of the all-pass filter is exclusively contained in its phase spectrum. A key element of our algorithm and a novelty in blind deconvolution is the inclusion of prior information that allows resolving ambiguities in polarity and timing that cannot be resolved using the sparseness measure alone. We employ a global inversion approach for non-linear optimization to find the all-pass filter phase values for each signal frequency. We tested the robustness and reliability of our algorithm on synthetic data with different wavelets, 1-D reflectivity models of different complexity, varying levels of added noise, and different types of prior information. When applied to realistic synthetic 2-D data and 2-D field data, we obtain images with increased temporal resolution compared to the results of standard processing.
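The minimum-phase part of such a scheme, a Wiener spiking filter obtained from the trace autocorrelation via the Toeplitz normal equations, can be sketched compactly. The all-pass phase correction and the Bayesian prior of the full algorithm are not reproduced here; all data are synthetic:

```python
# Wiener spiking deconvolution from the data autocorrelation (the
# minimum-phase step only; no all-pass phase correction).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(6)
wavelet = np.array([1.0, -0.6, 0.2])             # minimum-phase toy wavelet
refl = np.zeros(300)
refl[rng.choice(300, 15, replace=False)] = rng.normal(0, 1, 15)  # sparse reflectivity
trace = np.convolve(refl, wavelet, mode="full")[:300]

nf = 30                                           # filter length
r = np.correlate(trace, trace, mode="full")[len(trace) - 1:][:nf]
r[0] *= 1.01                                      # prewhitening for stability
rhs = np.zeros(nf)
rhs[0] = 1.0                                      # desired output: spike at lag 0
f = solve_toeplitz(r, rhs)                        # solve Toeplitz normal equations

decon = lfilter(f, [1.0], trace)                  # inverse-filtered trace
ratio = np.abs(decon).max() / np.median(np.abs(decon) + 1e-12)
print("output peak-to-background ratio:", ratio)
```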
Earthquake rupture length and width estimates are in demand in many seismological applications. Earthquake magnitude estimates are often available, whereas the geometrical extent of the rupture fault is mostly lacking. Therefore, scaling relations are needed to derive length and width from magnitude. Most frequently used are the relationships of Wells and Coppersmith (1994), derived on the basis of a large dataset including all slip types with the exception of thrust faulting events in subduction environments. However, there are many applications dealing with earthquakes in subduction zones because of their high seismic and tsunamigenic potential. There are no well-established scaling relations between moment magnitude and length/width for subduction events. Within this study, we compiled a large database of source parameter estimates of 283 earthquakes. All focal mechanisms are represented, but special focus is set on (large) subduction zone events in particular. Scaling relations were fitted with linear least-squares as well as orthogonal regression and analyzed regarding the difference between continental and subduction zone/oceanic relationships. Additionally, the effect of technical progress in earthquake parameter estimation on scaling relations was tested, as well as the influence of different fault mechanisms. For a given moment magnitude we found shorter but wider rupture areas of thrust events compared to Wells and Coppersmith (1994). The thrust event relationships for pure continental and pure subduction zone rupture areas were found to be almost identical. The scaling relations differ significantly among slip types. The exclusion of events prior to 1964, when the worldwide standard seismic network was established, resulted in a remarkable effect on strike-slip scaling relations: the data do not show any saturation of rupture width of strike-slip earthquakes. Generally, rupture area seems to scale with mean slip independently of magnitude. The aspect ratio L/W, however, depends on moment and differs for each slip type.
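The difference between the two fitting strategies mentioned above is worth a sketch: ordinary least squares assumes errors only in the response, whereas orthogonal (total least squares) regression treats both magnitude and length as uncertain. The synthetic coefficients below are loosely based on published length-magnitude scaling and are not the study's results:

```python
# OLS vs. orthogonal (total least squares) regression for a scaling
# relation log10(L) = a + b * Mw, on synthetic data with errors in
# both variables.
import numpy as np

rng = np.random.default_rng(7)
mw = rng.uniform(5.5, 9.0, 150) + rng.normal(0, 0.1, 150)   # noisy magnitudes
logl = -2.44 + 0.59 * mw + rng.normal(0, 0.15, 150)         # noisy log10 lengths

# Ordinary least squares (errors assumed in logl only).
b_ols, a_ols = np.polyfit(mw, logl, 1)

# Orthogonal regression via the first principal axis of the centred data.
X = np.column_stack([mw - mw.mean(), logl - logl.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
b_tls = vt[0, 1] / vt[0, 0]
a_tls = logl.mean() - b_tls * mw.mean()

print(f"OLS: log10 L = {a_ols:.2f} + {b_ols:.2f} Mw")
print(f"TLS: log10 L = {a_tls:.2f} + {b_tls:.2f} Mw")
```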
Shallowly situated evaporites in built-up areas are of relevance for urban and cultural development and hydrological regulation. The hazard of sinkholes, subrosion depressions, and gypsum karst is often difficult to evaluate and may quickly change with anthropogenic influence. The geophysical exploration of evaporites in metropolitan areas is often not feasible with active industrial techniques. We collect and combine different passive geophysical data, such as microgravity, ambient vibrations, deformation, and hydrological information, to study the roof morphology of shallow evaporites beneath Hamburg, Northern Germany. The application of a novel gravity inversion technique leads to a 3-D depth model of the salt diapir under study. We compare the gravity-based depth model to pseudo-depths from H/V measurements and depth estimates from small-scale seismological array data. While the general range and trend of the diapir roof is consistent, a few anomalous regions are identified where H/V pseudo-depths indicate shallower structures not observed in gravity or array data. These are interpreted as shallow residual caprock floaters and zones of increased porosity. The shallow salt structure clearly correlates with a relative subsidence on the order of 2 mm/yr. The combined interpretation of roof morphology, yearly subsidence rates, chemical analyses of groundwater, and of hydraulic head in aquifers indicates that the salt diapir beneath Hamburg is subject to significant ongoing dissolution that may possibly affect subrosion depressions, sinkhole distribution, and land usage. The combined analysis of passive geophysical data may serve as an example for the study of shallow evaporites beneath other urban areas.