The within-site variability in site response is the randomness in site response at a given site from different earthquakes and is treated as aleatory variability in current seismic hazard/risk analyses.
In this study, we investigate the single-station variability in linear site response at K-NET and KiK-net stations in Japan using a large number of earthquake recordings.
We found that the standard deviation of the horizontal-to-vertical Fourier spectral ratio at individual sites, that is, the single-station horizontal-to-vertical spectral ratio (HVSR) sigma, sigma(HV,s), approximates the within-site variability in site response quantified using surface-to-borehole spectral ratios (for oscillator frequencies higher than the site fundamental frequency) or empirical ground-motion models.
Based on this finding, we then utilize the single-station HVSR sigma as a convenient tool to study the site-response variability at 697 KiK-net and 1169 K-NET sites.
Our results show that, at certain frequencies, stiff, rough, and shallow sites, as well as small and local events, tend to have a higher sigma(HV,s).
However, when averaged over different sites, the single-station HVSR sigma, that is sigma(HV), increases gradually with decreasing frequency. In the frequency range of 0.25-25 Hz, sigma(HV) is centred at 0.23-0.43 in natural-log units (a linear scale factor of 1.26-1.54), with one standard deviation of less than 0.1. sigma(HV) is quite stable across different tectonic regions, and we present both a constant and an earthquake magnitude- and distance-dependent sigma(HV) model.
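As a rough illustration of the quantity being mapped, a single-station HVSR sigma can be computed as the standard deviation of ln(H/V) across the events recorded at one station. This is a minimal sketch with synthetic spectra; the array shapes, amplitudes, and function name are ours, not the study's.

```python
import numpy as np

def single_station_hvsr_sigma(h_spectra, v_spectra):
    """Standard deviation of ln(H/V) across events at one station.

    h_spectra, v_spectra: arrays of shape (n_events, n_frequencies)
    holding horizontal and vertical Fourier amplitude spectra.
    Returns sigma(HV,s) per frequency, in natural-log units.
    """
    ln_hv = np.log(h_spectra / v_spectra)   # ln(H/V) per event and frequency
    return np.std(ln_hv, axis=0, ddof=1)    # spread across events

# Synthetic example: 50 events, 10 frequencies, lognormal horizontal response
rng = np.random.default_rng(0)
h = np.exp(rng.normal(1.0, 0.3, size=(50, 10)))
v = np.ones((50, 10))  # noiseless vertical component, for illustration only
sigma_hv = single_station_hvsr_sigma(h, v)
print(sigma_hv.round(2))
```

With real data, h and v would be smoothed Fourier amplitude spectra from the recordings at a K-NET or KiK-net station.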
In seismic risk assessment, the sources of uncertainty associated with building exposure modelling have not received as much attention as the components related to hazard and vulnerability. Conventional practices, such as assuming absolute portfolio compositions (i.e., proportions per building class) from expert-based assumptions over aggregated data, crudely disregard the contribution of exposure uncertainty to earthquake loss models. In this work, we introduce the concept that the degree of knowledge of a building stock can be described within a Bayesian probabilistic approach that integrates both expert-based prior distributions and data collection on individual buildings. We investigate the impact of the epistemic uncertainty in the portfolio composition on scenario-based earthquake loss models through an exposure-oriented logic tree arrangement based on synthetic building portfolios. For illustrative purposes, we consider the residential building stock of Valparaiso (Chile) subjected to seismic ground shaking from one subduction earthquake. We find that building class reconnaissance, either from prior assumptions in desktop studies with aggregated data (top-down approach) or from building-by-building data collection (bottom-up approach), plays a fundamental role in the statistical modelling of exposure. To model the vulnerability of such a heterogeneous building stock, we require that the associated set of structural fragility functions handle multiple spectral periods. We therefore also discuss the relevance of, and the specific uncertainty introduced by, generating either uncorrelated or spatially cross-correlated ground-motion fields within this framework. We then show how the various epistemic uncertainties embedded in these probabilistic exposure models propagate differently through the computed direct financial losses.
This work calls for further efforts to redesign desktop exposure studies, while also highlighting the importance of exposure data collection with standardized and iterative approaches.
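The Bayesian integration of expert priors and building-by-building data described above can be sketched with a conjugate Dirichlet-multinomial model. The class names, prior proportions, prior strength, and survey counts below are hypothetical placeholders, not the Valparaiso data.

```python
import numpy as np

# Hypothetical building classes and expert-based prior proportions (top-down).
classes = ["masonry", "reinforced_concrete", "timber"]
prior_props = np.array([0.5, 0.4, 0.1])
prior_strength = 20  # pseudo-count: how much weight the desktop study carries
alpha_prior = prior_props * prior_strength

# Building-by-building survey counts (bottom-up data collection, illustrative).
survey_counts = np.array([12, 30, 8])

# Conjugate Dirichlet update: posterior concentration = prior + observed counts.
alpha_post = alpha_prior + survey_counts
post_mean = alpha_post / alpha_post.sum()

for c, p in zip(classes, post_mean):
    print(f"{c}: {p:.3f}")
```

As more buildings are surveyed, the posterior proportions move away from the expert prior toward the observed composition, which is one way to formalize the "degree of knowledge" of a stock.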
This article presents comparisons among the five ground-motion models described in other articles within this special issue, in terms of data selection criteria, characteristics of the models, and predicted peak ground and response spectral accelerations. Comparisons are also made with predictions from the Next Generation Attenuation (NGA) models, to which the models presented here have similarities (e.g. a common master database has been used) but also differences (e.g. some models in this issue are nonparametric). As a result of the differing data selection criteria and derivation techniques, the predicted median ground motions show considerable differences (up to a factor of two for certain scenarios), particularly for magnitudes and distances close to or beyond the range of the available observations. The predicted influence of style-of-faulting shows much variation among models, whereas site amplification factors are more similar, with peak amplification at around 1 s. These differences are greater than those among predictions from the NGA models. The models for aleatory variability (sigma), however, are similar and suggest that ground-motion variability for this region is slightly higher than that predicted by the NGA models, which are based primarily on data from California and Taiwan.
Seismic-hazard assessment is of great importance within the field of engineering seismology. Nowadays, it is common practice to define future seismic demands using probabilistic seismic-hazard analysis (PSHA). Often it is neither obvious nor transparent how PSHA responds to changes in its inputs. In addition, PSHA relies on many uncertain inputs. Sensitivity analysis (SA) is concerned with the assessment and quantification of how changes in the model inputs affect the model response and how input uncertainties influence the distribution of the model response. Sensitivity studies are challenging primarily for computational reasons; hence, the development of efficient methods is of major importance. Powerful local (deterministic) methods widely used in other fields can make SA feasible, even for complex models with a large number of inputs; for example, automatic/algorithmic differentiation (AD)-based adjoint methods. Recently developed derivative-based global sensitivity measures can combine the advantages of such local SA methods with efficient sampling strategies facilitating quantitative global sensitivity analysis (GSA) for complex models. In our study, we propose and implement exactly this combination. It allows an upper bounding of the sensitivities involved in PSHA globally and, therefore, an identification of the noninfluential and the most important uncertain inputs. To the best of our knowledge, it is the first time that derivative-based GSA measures are combined with AD in practice. In addition, we show that first-order uncertainty propagation using the delta method can give satisfactory approximations of global sensitivity measures and allow a rough characterization of the model output distribution in the case of PSHA. An illustrative example is shown for the suggested derivative-based GSA of a PSHA that uses stochastic ground-motion simulations.
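The first-order (delta-method) uncertainty propagation mentioned above can be sketched as follows: the output variance of a model is approximated from its gradient and the input variances, and the per-input contributions serve as a cheap proxy for derivative-based sensitivity measures. The toy function, nominal values, and variances are placeholders, not the stochastic ground-motion simulation used in the study, and the gradient is taken by finite differences rather than AD.

```python
import numpy as np

def hazard_model(x):
    """Toy stand-in for a PSHA output (not the paper's model):
    a smooth scalar function of three uncertain inputs."""
    b, mmax, sigma = x
    return np.exp(-b * mmax) * (1.0 + sigma**2)

x0 = np.array([1.0, 7.5, 0.6])              # nominal input values (assumed)
input_var = np.array([0.01, 0.25, 0.0025])  # input variances, assumed independent

# Central finite-difference gradient at the nominal point.
eps = 1e-6
grad = np.array([
    (hazard_model(x0 + eps * np.eye(3)[i]) - hazard_model(x0 - eps * np.eye(3)[i]))
    / (2 * eps)
    for i in range(3)
])

# Delta method: Var(f) ~ sum_i (df/dx_i)^2 Var(x_i); shares rank input influence.
contrib = grad**2 * input_var
out_var = contrib.sum()
print("output variance ~", out_var)
print("sensitivity shares:", contrib / out_var)
```

With an AD-based adjoint, the gradient of even a large PSHA code can be obtained at roughly the cost of one model run, which is what makes the combination with global sampling strategies attractive.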
In the Next Generation Attenuation West2 (NGA-West2) project, a 3D subsurface structure model (Japan Seismic Hazard Information Station [J-SHIS]) was queried to establish depths to the 1.0 and 2.5 km/s velocity isosurfaces for sites without depth measurements in Japan. In this article, we evaluate the depth parameters in the J-SHIS velocity model by comparing them with site-specific depth measurements derived from selected KiK-net velocity profiles. The comparison indicates that the J-SHIS model underestimates site depths at shallow sites and overestimates them at deep sites. Similar issues were also identified in the southern California basin model. Our results also show that these underestimations and overestimations have a potentially significant impact on ground-motion prediction using NGA-West2 ground-motion models (GMMs). The site resonant period may be considered as an alternative to the depth parameter in the site term of a GMM.
The task of downloading comprehensive datasets of event-based seismic waveforms has been made easier by the development of standardized webservices, but it remains highly nontrivial because the likelihood of temporary network failures or subtle data errors naturally increases when the amount of requested data is on the order of millions of relatively short segments. This is even more challenging because the typical workflow is not restricted to a single massive download but consists of fetching all available input data (e.g., with several repeated download executions) for a processing stage producing any desired user-defined output. Here, we present stream2segment, a highly customizable Python 2+3 package that assists the user in the entire workflow of downloading, inspecting, and processing event-based seismic data. It uses a relational database management system as archiving storage, which has clear performance and usability advantages, and an integrated processing subroutine requiring only a configuration file and a single Python function to produce user-defined output. Stream2segment can also produce diagnostic maps or user-defined plots which, unlike those of existing tools, do not require external software dependencies and are not static images but interactive browser-based applications, ideally suited for data inspection or annotation tasks and the subsequent training of classifiers in foreseen supervised machine-learning applications. Stream2segment has already been used as a data-quality tool for datasets within the European Integrated Data Archive and to create a weak-motion database (in the form of a so-called flat file) for the stable continental region of Europe in the context of the European Ground Shaking Intensity Model service, in turn an important building block for seismic hazard studies.
The one-dimensional (1-D) approach is still the dominant method for incorporating site effects in engineering applications. To bridge 1-D and multidimensional site response analysis, we develop quantitative criteria and a reproducible method to identify KiK-net sites with significant deviations from 1-D behavior. We found that 158 out of 354 sites show two-dimensional (2-D) and three-dimensional (3-D) effects, extending the resonance toward shorter periods at which 2-D or 3-D site effects exceed those of the classic 1-D configurations and imposing an additional amplification beyond that caused by the impedance contrast alone. Such 2-D and 3-D effects go along with a large within-station ground-motion variability. Remarkably, these effects are found to be more pronounced for small impedance contrasts. While it is hardly possible to identify common features in ground-motion behavior for stations with similar topography typologies, it is not over-conservative to apply a safety factor to account for 2-D and 3-D site effects in ground-motion modeling.
In this study, we analyzed 10 yrs of seismicity in central Italy from 2008 to 2017, a period witnessing more than 1400 earthquakes in the magnitude range 2.5≤Mw≤6.5. The data set includes the main sequences that have occurred in the area, including those associated with the 2009 Mw 6.3 L'Aquila earthquake and the 2016–2017 sequence (Mw 6.2 Amatrice, Mw 6.1 Visso, and Mw 6.5 Norcia earthquakes). We calibrated a local magnitude scale, investigating the impact of changing the reference distance at which the nonparametric attenuation is tied to the zero‐magnitude attenuation function for southern California. We also developed an attenuation model to compute the radiated seismic energy (Es) from the time integral of the squared ground‐motion velocity. Seismic moment (M0) and stress drop (Δσ) were estimated for each earthquake by fitting an ω‐square model to the source spectra obtained by applying a nonparametric spectral inversion. The Δσ values vary over three orders of magnitude, from about 0.1 to 10 MPa, with the larger values associated with the mainshocks. The Δσ values follow a lognormal distribution with mean and standard deviation log(Δσ)=(−0.25±0.45) (i.e., the mean Δσ is 0.57 MPa, with a 95% confidence interval from 0.08 to 4.79 MPa). The Δσ variability introduces a spread in the distribution of seismic energy versus moment, with differences in energy of up to two orders of magnitude for earthquakes with the same moment. The variability in the high‐frequency spectral levels is captured by the local magnitude (ML), which scales with radiated energy as ML=(−1.59+0.52logEs) for logEs≤10.26 and ML=(−1.38+0.50logEs) otherwise. As peak ground velocity increases with increasing Δσ, the local and energy magnitudes perform better than moment magnitude as predictors of shaking potential.
The availability of different magnitude scales and source parameters for a large earthquake population will help characterize the between‐event ground‐motion variability in central Italy.
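The quoted lognormal statistics and the energy-magnitude scaling can be checked with a few lines of arithmetic. We assume base-10 logarithms here (consistent with the quoted 0.57 MPa central value); small differences from the published interval may reflect rounding or a different quantile convention.

```python
# Log-mean and log-standard deviation of stress drop from the abstract,
# interpreted as base-10 logarithms (an assumption).
mu_log, sd_log = -0.25, 0.45

mean_ds = 10 ** mu_log  # central stress drop, MPa
ci_low = 10 ** (mu_log - 1.96 * sd_log)
ci_high = 10 ** (mu_log + 1.96 * sd_log)
print(f"central stress drop ~ {mean_ds:.2f} MPa, "
      f"95% interval ~ [{ci_low:.2f}, {ci_high:.2f}] MPa")

# Local magnitude from radiated energy, using the scaling quoted in the abstract.
def ml_from_energy(log_es):
    return -1.59 + 0.52 * log_es if log_es <= 10.26 else -1.38 + 0.50 * log_es

print(f"ML at logEs = 10: {ml_from_energy(10.0):.2f}")
```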
We present the results of a consistency check performed on the flatfile extracted from the engineering strong motion (ESM) database. The flatfile includes 23,014 recordings from 2179 earthquakes in the magnitude range 3.5 to 7.8 that have occurred since the 1970s in Europe and the Middle East, as presented in the companion article by Lanzano et al. (Bull Earthq Eng, 2018a). The consistency check is developed by analyzing the residual distributions obtained from ad hoc ground motion prediction equations (GMPEs) for absolute spectral acceleration (SA), displacement, and Fourier amplitude spectra (FAS). Only recordings from earthquakes shallower than 40 km are considered in the analysis. The between-event, between-station, and event- and station-corrected residuals are computed by applying a mixed-effects regression. We identified the earthquakes, stations, and recordings showing the largest deviations from the GMPE median predictions, and also evaluated the statistical uncertainty on the median model to gain insight into the applicable magnitude-distance ranges and the usable period (or frequency) range. We observed that robust median predictions are obtained up to 8 s for SA and up to 20 Hz for FAS, although median predictions for Mw ≥ 7 show significantly larger uncertainties, with 'bumps' starting above 5 s for SA and below 0.3 Hz for FAS. The between-station variance dominates over the other residual variances, and the dependence of the between-station residuals on the logarithm of Vs30 is well described by a piecewise linear function with period-dependent slopes and a hinge velocity around 580 m/s. Finally, we compared the between-event residuals obtained by considering two different sources of moment magnitude. The results show that, at long periods, the between-event terms from the two regressions are weakly correlated and the overall between-event variability differs, highlighting the importance of the magnitude source in the regression results.
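The piecewise-linear site term described above can be sketched as follows. Only the 580 m/s hinge velocity comes from the text; the slope and reference velocity are illustrative placeholders, not the published regression coefficients.

```python
import numpy as np

def site_term(vs30, slope, v_hinge=580.0, v_ref=800.0):
    """Piecewise-linear between-station site term in ln units:
    scales with ln(Vs30) below the hinge velocity, flat above it.
    slope and v_ref are illustrative, not the published values."""
    v = np.minimum(vs30, v_hinge)  # clip at the hinge: no scaling above it
    return slope * np.log(v / v_ref)

vs30 = np.array([200.0, 400.0, 580.0, 800.0])
terms = site_term(vs30, slope=-0.6)
print(terms.round(3))
```

In the regression, one such function per period (with its own slope) absorbs the systematic part of the between-station residuals, leaving the remainder as site-to-site variability.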
Applying conservation of energy to estimate earthquake frequencies from strain rates and stresses
(2020)
Estimating earthquake occurrence rates from the accumulation rate of seismic moment is an established tool of seismic hazard analysis. We propose an alternative, fault-agnostic approach based on the conservation of energy: the Energy-Conserving Seismicity Framework (ENCOS). Working in energy space has the advantage that the radiated energy is a better predictor of the damage potential of earthquake waves than the seismic moment release. In a region, ENCOS balances the stationary power available to cause earthquakes against the long-term seismic energy release, represented by the first moment of the energy-frequency distribution. Accumulation and release are connected through the average seismic efficiency, by which we mean the fraction of released energy that is converted into seismic waves. Besides measuring earthquakes in energy, ENCOS differs from moment balance essentially in that the energy accumulation rate depends on the total stress in addition to the strain rate tensor. To validate ENCOS, we model, as an example, the energy-frequency distribution around Southern California. We estimate the energy accumulation rate due to tectonic loading assuming poroelasticity and hydrostasis. Using data from the World Stress Map and assuming the frictional limit to estimate the stress tensor, we obtain a power of 0.8 GW. The uncertainty range, 0.3-2.0 GW, originates mainly from the thickness of the seismogenic crust, the friction coefficient on preexisting faults, and models of Global Positioning System (GPS) derived strain rates. Based on a Gutenberg-Richter magnitude-frequency distribution, this power can be distributed over a range of energies consistent with historical earthquake rates and reasonable bounds on the seismic efficiency.
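The balance at the core of ENCOS can be illustrated by evaluating the first moment of a Gutenberg-Richter energy-frequency distribution against a power budget. The regional event rate, b-value, and magnitude range below are hypothetical; only the 0.8 GW budget comes from the abstract, and the energy-magnitude relation used is the standard log10 Es [J] = 1.5 M + 4.8.

```python
import numpy as np

# Gutenberg-Richter b-value and an illustrative annual rate of M >= 5 events.
b = 1.0
rate_m5 = 3.0  # events per year with M >= 5 (hypothetical regional rate)

# Magnitude bins of width 0.1 up to an assumed Mmax = 8.
m = np.arange(5.0, 8.0, 0.1)
rate_bin = rate_m5 * (10 ** (-b * (m - 5.0)) - 10 ** (-b * (m + 0.1 - 5.0)))

# Radiated energy at bin centers: log10 Es [J] = 1.5 M + 4.8.
es = 10 ** (1.5 * (m + 0.05) + 4.8)

# First moment of the energy-frequency distribution: long-term radiated power.
annual_energy = np.sum(rate_bin * es)   # J per year
seismic_power = annual_energy / 3.156e7  # W
print(f"radiated power ~ {seismic_power / 1e6:.1f} MW")

# Fraction of a 0.8 GW tectonic power budget converted into seismic waves.
efficiency = seismic_power / 0.8e9
print(f"implied seismic efficiency ~ {efficiency:.3f}")
```

The point of the exercise is the consistency check: the implied efficiency must fall within physically reasonable bounds for the assumed rates and power budget to be mutually compatible.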