In seismic risk assessment, the sources of uncertainty associated with building exposure modelling have not received as much attention as other components related to hazard and vulnerability. Conventional practices, such as assuming absolute portfolio compositions (i.e., proportions per building class) from expert-based assumptions over aggregated data, crudely disregard the contribution of exposure uncertainty to earthquake loss models. In this work, we introduce the concept that the degree of knowledge of a building stock can be described within a Bayesian probabilistic approach that integrates both expert-based prior distributions and data collection on individual buildings. We investigate the impact of the epistemic uncertainty in the portfolio composition on scenario-based earthquake loss models through an exposure-oriented logic tree arrangement based on synthetic building portfolios. For illustrative purposes, we consider the residential building stock of Valparaíso (Chile) subjected to seismic ground shaking from one subduction earthquake. We find that building class reconnaissance, either from prior assumptions in desktop studies with aggregated data (top-down approach) or from building-by-building data collection (bottom-up approach), plays a fundamental role in the statistical modelling of exposure. Modelling the vulnerability of such a heterogeneous building stock requires a set of structural fragility functions that handles multiple spectral periods. We therefore also discuss the relevance of, and the specific uncertainty introduced by, generating either uncorrelated or spatially cross-correlated ground motion fields within this framework. We then show how the various epistemic uncertainties embedded within these probabilistic exposure models propagate differently through the computed direct financial losses. This work calls for further efforts to redesign desktop exposure studies, while also highlighting the importance of exposure data collection with standardized and iterative approaches.
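As a minimal illustration of the Bayesian idea described above, the sketch below (with invented class names and counts, not the paper's data) updates a Dirichlet prior over building-class proportions with multinomial survey counts; sampled posterior compositions could then feed the branches of an exposure-oriented logic tree.

```python
import numpy as np

# Minimal sketch: Bayesian updating of a building-portfolio composition.
# An expert-based (top-down) prior over K building classes is encoded as a
# Dirichlet distribution; building-by-building (bottom-up) survey results
# enter as multinomial counts. All numbers below are illustrative.

rng = np.random.default_rng(42)

classes = ["masonry", "timber", "reinforced_concrete"]  # hypothetical classes
prior_alpha = np.array([6.0, 3.0, 1.0])                 # expert prior (pseudo-counts)
survey_counts = np.array([14, 52, 34])                  # surveyed buildings per class

posterior_alpha = prior_alpha + survey_counts           # Dirichlet-multinomial conjugacy

# Posterior mean proportions and sampled realisations of the composition.
posterior_mean = posterior_alpha / posterior_alpha.sum()
realisations = rng.dirichlet(posterior_alpha, size=1000)

for c, m, lo, hi in zip(classes, posterior_mean,
                        np.percentile(realisations, 5, axis=0),
                        np.percentile(realisations, 95, axis=0)):
    print(f"{c}: mean={m:.3f}, 90% interval=[{lo:.3f}, {hi:.3f}]")
```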
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude–distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude–distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies.
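A hedged sketch of the core idea, not the authors' architecture or data: a tiny U-Net maps stacked predictor grids to an IM grid, and the training loss is evaluated only at the sparse pixels that hold stations, so interpolation between stations is learned implicitly. All shapes, channel counts, and tensors below are illustrative.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the standard U-Net building block."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, in_ch=4):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d)

# Masked loss: compare predictions to observations only at station pixels.
model = TinyUNet()
x = torch.randn(8, 4, 64, 64)             # predictor maps (synthetic)
y = torch.randn(8, 1, 64, 64)             # IM "observations" (synthetic)
mask = torch.rand(8, 1, 64, 64) < 0.02    # ~2% of pixels hold stations
pred = model(x)
loss = ((pred - y)[mask] ** 2).mean()
loss.backward()
print(f"masked MSE at station pixels: {loss.item():.4f}")
```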
The creation of building exposure models for seismic risk assessment is frequently challenging due to the limited availability of detailed information on building structures. Different strategies have been developed in recent years to overcome this, including the use of census data, remote sensing imagery and volunteered geographic information (VGI). This paper presents the development of a building-by-building exposure model based exclusively on openly available datasets, including both VGI and census statistics, which are defined at different levels of spatial resolution and for different moments in time. The initial model, stemming purely from building-level data, is enriched with statistics aggregated at the neighbourhood and city level by means of a Monte Carlo simulation that enables the generation of full realisations of damage estimates when using the exposure model in the context of an earthquake scenario calculation. Though applicable to any other region of interest where analogous datasets are available, the workflow and approach followed are explained by focusing on the case of the German city of Cologne, for which a scenario earthquake is defined and the potential damage is calculated. The resulting exposure model and damage estimates are presented, and it is shown that the latter are broadly consistent with damage data from the 1978 Albstadt earthquake, notwithstanding the differences in the scenario. Through this real-world application we demonstrate the potential of VGI and open data for exposure modelling for natural risk assessment, when combined with suitable knowledge of building fragility and accounting for the inherent uncertainties.
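A minimal sketch of the Monte Carlo enrichment step under stated assumptions (the taxonomy, neighbourhood proportions, and building table are hypothetical, not the paper's datasets): buildings whose class is known from VGI keep it, while buildings of unknown class are assigned one per realisation by sampling from the aggregated statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
classes = np.array(["masonry", "rc_frame", "timber"])   # hypothetical taxonomy

# Hypothetical neighbourhood-level class proportions from aggregated data.
neigh_props = {"north": [0.5, 0.4, 0.1], "south": [0.2, 0.7, 0.1]}

# Hypothetical building table: (neighbourhood, class or None if unknown).
buildings = [("north", "masonry"), ("north", None), ("south", None),
             ("south", "rc_frame"), ("south", None)]

def realisation():
    """One full exposure realisation: unknown classes drawn from aggregates."""
    return [cls if cls is not None
            else rng.choice(classes, p=neigh_props[n])
            for n, cls in buildings]

# Many realisations propagate exposure uncertainty into scenario damage runs.
samples = [realisation() for _ in range(1000)]
share_masonry = np.mean([r.count("masonry") / len(r) for r in samples])
print(f"mean masonry share across realisations: {share_masonry:.3f}")
```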
Megathrust earthquakes induce changes in differential stress and pore pressure in the lithosphere-asthenosphere system that are transiently relaxed during the postseismic period, primarily due to afterslip, viscoelastic and poroelastic processes.
Especially during the early postseismic phase, however, the relative contribution of these processes to the observed surface deformation is unclear.
To investigate this, we use geodetic data collected in the first 48 days following the 2010 Maule earthquake and a poro-viscoelastic forward model combined with an afterslip inversion.
This modelling approach fits the geodetic data 14% better than a purely elastic model. Particularly near the region of maximum coseismic slip, the predicted surface poroelastic uplift pattern explains the observations well.
If poroelasticity is neglected, the spatial afterslip distribution is locally altered by up to ±40%.
Moreover, we find that shallow crustal aftershocks mostly occur in regions of increased postseismic pore-pressure changes, indicating that both processes might be mechanically coupled.
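A schematic of the linear afterslip inversion under the usual formulation d = Gm, with synthetic placeholder Green's functions and first-difference smoothing; in the study, the geodetic data would additionally be corrected by the poro-viscoelastic forward model before inverting.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_patch = 120, 40

G = rng.normal(size=(n_obs, n_patch))        # placeholder Green's functions
m_true = np.maximum(0, np.sin(np.linspace(0, np.pi, n_patch)))  # toy slip
d = G @ m_true + rng.normal(scale=0.05, size=n_obs)             # noisy data

# First-difference smoothing operator L and damped least squares:
# minimise ||G m - d||^2 + lam^2 ||L m||^2 via an augmented system.
L = np.eye(n_patch)[:-1] - np.eye(n_patch, k=1)[:-1]
lam = 1.0
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"slip recovery correlation: {np.corrcoef(m_true, m_hat)[0, 1]:.3f}")
```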
The main Marmara fault (MMF) extends for 150 km through the Sea of Marmara and forms the only portion of the North Anatolian fault zone that has not ruptured in a large event (Mw >7) in the last 250 yr. Accordingly, this portion is potentially a major source contributing to the seismic hazard of the Istanbul region. On 26 September 2019, a sequence of moderate-sized events started along the MMF only 20 km south of Istanbul and was widely felt by the population. The three largest events, the 26 September 2019 Mw 5.8 (10:59 UTC), the 26 September 2019 Mw 4.1 (11:26 UTC), and the 20 January 2020 Mw 4.7, were recorded by numerous strong-motion seismic stations, and the resulting ground motions were compared to the means predicted by a set of the most recent ground-motion prediction equations (GMPEs). The estimated residuals were used to investigate the spatial variation of ground motion across the Marmara region. Our results show a strong azimuthal trend in ground-motion residuals, which might indicate systematically repeating directivity effects toward the eastern Marmara region.
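A small illustration (synthetic numbers, not the study's records) of the residual analysis: total residuals are log observations minus log GMPE predictions, binned by source-to-station azimuth to reveal directional trends such as the one reported here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
azimuth = rng.uniform(0, 360, n)          # source-to-station azimuths (deg)
ln_obs = rng.normal(size=n)               # placeholder ln(IM) observations
ln_pred = rng.normal(size=n)              # placeholder GMPE mean predictions

resid = ln_obs - ln_pred                  # total residuals

bins = np.arange(0, 361, 30)
idx = np.digitize(azimuth, bins) - 1
for i in range(len(bins) - 1):
    sel = idx == i
    if sel.any():
        print(f"{bins[i]:3d}-{bins[i+1]:3d} deg: "
              f"mean resid = {resid[sel].mean():+.2f} (n={sel.sum()})")
```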
Ground motion with strong-velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering.
However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood.
In this study, we investigate moderate but damaging earthquakes (Mw 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014) while considering potential static-offset effects.
We confirm that the within-event variability of the pulses is large. The pulses identified in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes.
We test the effects of fault dip and of asperity locations and sizes. The simulations reveal that asperity properties strongly affect the pulse periods and amplitudes at nearby stations.
Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitude, for the occurrence and properties of pulses produced by the forward-directivity effect.
We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances.
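A simplified stand-in for the pulse-identification step: Shahi and Baker (2014) use a Daubechies wavelet of order 4, whereas a real Morlet-like wavelet is used here to keep the sketch self-contained. The largest unit-energy wavelet coefficient over all scales and positions yields a pulse-period and arrival estimate; the velocity trace is synthetic.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 40.0, dt)
support = np.arange(-10.0, 10.0, dt)

def wavelet(tau, scale):
    """Real Morlet-like wavelet with characteristic period 'scale' (s)."""
    u = tau / scale
    return np.cos(2.0 * np.pi * u) * np.exp(-0.5 * u ** 2)

def unit_wavelet(scale):
    """Unit-energy version, so coefficients are comparable across scales."""
    w = wavelet(support, scale)
    return w / np.linalg.norm(w)

# Synthetic velocity trace: low-level noise plus one 2 s pulse at t = 10 s.
rng = np.random.default_rng(3)
vel = 0.1 * rng.normal(size=t.size) + 60.0 * wavelet(t - 10.0, 2.0)

scales = np.arange(0.5, 8.0, 0.25)        # candidate pulse periods (s)
coeffs = {s: np.correlate(vel, unit_wavelet(s), mode="same") for s in scales}
period = max(coeffs, key=lambda s: np.abs(coeffs[s]).max())
arrival = t[np.abs(coeffs[period]).argmax()]
print(f"pulse period ~ {period:.2f} s, arrival ~ {arrival:.1f} s")
```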
Earthquake site responses, or site effects, are the modifications of seismic waves by surface geology. How well can we currently predict site effects (averaged over many earthquakes) at individual sites? To address this question, we tested and compared the effectiveness of different estimation techniques in predicting the outcrop Fourier site responses separated from recordings using the generalized inversion technique (GIT). The techniques evaluated are (a) the empirical correction to the horizontal-to-vertical spectral ratio of earthquakes (c-HVSR), (b) one-dimensional ground response analysis (GRA), and (c) the square-root-impedance (SRI) method (also called the quarter-wavelength approach). Our results show that c-HVSR can capture significantly more site-specific features in site responses than both GRA and SRI in the aggregate, especially at relatively high frequencies. c-HVSR achieves a "good match" in spectral shape at ~80%-90% of the 145 testing sites, whereas GRA and SRI fail at most sites. GRA and SRI results have a high level of parametric and/or modeling errors, which can be constrained, to some extent, by collecting on-site recordings.
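The SRI (quarter-wavelength) method named above lends itself to a compact sketch; the layered velocity model below is invented for illustration. For each frequency, the depth where the vertical S-wave travel time equals a quarter period sets the averaging window, and the amplification is the square root of the impedance ratio between the reference halfspace and the averaged near-surface column.

```python
import numpy as np

thickness = np.array([10.0, 30.0, 60.0])          # layer thicknesses (m)
vs = np.array([200.0, 500.0, 1200.0, 2500.0])     # S velocities (m/s), last = halfspace
rho = np.array([1800.0, 1900.0, 2100.0, 2400.0])  # densities (kg/m^3)
tops = np.concatenate([[0.0], np.cumsum(thickness)])

def qwl_average(f):
    """Averaged vs and rho down to the quarter-wavelength depth for frequency f."""
    target = 1.0 / (4.0 * f)                 # quarter period of travel time (s)
    tt, z = 0.0, 0.0
    for h, v in zip(thickness, vs[:-1]):
        step = h / v
        if tt + step >= target:              # quarter wavelength ends in this layer
            z += (target - tt) * v
            break
        tt += step
        z += h
    else:                                    # quarter wavelength reaches the halfspace
        z += (target - tt) * vs[-1]
    vbar = z / target                        # time-averaged velocity over [0, z]
    bounds = np.concatenate([tops, [np.inf]])
    w = np.maximum(0.0, np.minimum(bounds[1:], z) - bounds[:-1])
    return vbar, (w @ rho) / z               # vbar and depth-averaged density

for f in np.logspace(-0.5, 1.3, 8):          # ~0.3 to 20 Hz
    vbar, rhobar = qwl_average(f)
    amp = np.sqrt((rho[-1] * vs[-1]) / (rhobar * vbar))
    print(f"f = {f:6.2f} Hz: SRI amplification = {amp:.2f}")
```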