Large earthquakes can increase the amount of water feeding stream flows, raise groundwater levels, and thus grant plant roots more access to water in water-limited environments. We examine the growth and photosynthetic responses of pine plantations to the Mw 8.8 Maule earthquake in headwater catchments of Chile's Coastal Range. We combine high-resolution wood-anatomical (lumen area) and biogeochemical (δ13C of wood cellulose) proxies of daily to weekly tree growth sampled from trees on floodplains and close to ridge lines. We find that, immediately after the earthquake, at least two out of six trees on valley floors had increased lumen area and decreased δ13C, while trees on hillslopes showed the reverse trend. Our results indicate a control of soil water on this response, largely consistent with models that predict how enhanced postseismic vertical soil permeability causes groundwater levels to rise on valley floors but fall along ridges. Statistical analysis with boosted regression trees indicates that streamflow discharge gained predictive importance for photosynthetic activity on the ridges, but lost importance on the valley floor, after the earthquake. We infer that earthquakes may stimulate ecohydrological conditions favoring tree growth over days to weeks by triggering stomatal opening. The weak and short-lived signals that we identified, however, show that such responses arise only under water-limited, rather than energy-limited, tree growth. Hence, dendrochronological studies targeted at annual resolution may overlook some earthquake effects on tree vitality.
Efforts have been made in the past to enhance regional-scale building exposure models at increasing spatial resolution by integrating different data sources. This work follows a similar path and focuses on downscaling the existing SARA exposure model that was proposed for the residential building stock of the communes of Valparaíso and Viña del Mar (Chile). Although this model enabled great progress in harmonising building classes and characterising their differential physical vulnerabilities, it is now outdated and, in any case, spatially aggregated over large administrative units. Hence, to more accurately assess the impact of future earthquakes on these cities, more reliable exposure models are needed. For this purpose, we propose updating the existing model through a Bayesian approach that integrates ancillary data made increasingly available through Volunteered Geographic Information (VGI) activities. The model's spatial representation is also optimised using higher-resolution aggregation units that avoid the inconvenience of incomplete building-by-building footprints. A worst-case earthquake scenario is presented to calculate direct economic losses and, within a sensitivity analysis, to highlight the degree of uncertainty contributed by exposure models relative to the other parameters used to generate the seismic ground motions. This case study shows the great potential of increasingly available VGI for updating building exposure models worldwide, as well as its importance in scenario-based seismic risk assessment.
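A minimal sketch of the kind of Bayesian update the abstract describes, using a conjugate Dirichlet-multinomial model: the outdated aggregated model supplies prior pseudo-counts for each building class, VGI surveys supply observed counts, and the posterior mean gives updated class shares. Class names and all numbers are illustrative assumptions, not values from the SARA model.

```python
# Hedged sketch: Bayesian updating of building-class shares in an exposure
# model. Class names, prior counts, and VGI counts are invented for
# illustration, not taken from the SARA model itself.

def update_class_shares(prior_counts, observed_counts):
    """Dirichlet-multinomial update: posterior counts = prior + observed.

    prior_counts: dict mapping building class -> pseudo-counts encoding the
                  (outdated) aggregated model.
    observed_counts: dict mapping building class -> buildings surveyed via
                     VGI (e.g. tagged OpenStreetMap footprints).
    Returns the posterior mean share of each class.
    """
    posterior = {c: prior_counts.get(c, 0) + observed_counts.get(c, 0)
                 for c in set(prior_counts) | set(observed_counts)}
    total = sum(posterior.values())
    return {c: n / total for c, n in posterior.items()}

# Illustrative numbers only: the VGI sample suggests far more reinforced
# concrete than the old model assumed, and the posterior shifts accordingly.
prior = {"unreinforced_masonry": 60, "reinforced_concrete": 30, "timber": 10}
vgi = {"unreinforced_masonry": 20, "reinforced_concrete": 70, "timber": 10}
shares = update_class_shares(prior, vgi)
```

The prior pseudo-counts control how strongly the old model resists new evidence: larger counts make the update more conservative.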
A comprehensive understanding of seismic hazard and earthquake triggering is crucial for effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrence. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties in CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly through the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep-neural-network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of magnitude cutoff, grid-size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. Results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability, and for larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
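The ROC evaluation described above can be sketched as follows: each grid cell carries a stress value (the classifier score) and a label for whether aftershocks occurred there, and the area under the ROC curve is the probability that a randomly chosen aftershock cell outranks a randomly chosen quiet cell (the Mann-Whitney formulation). The numbers below are synthetic, not SRCMOD data.

```python
# Hedged sketch of ranking a stress metric by ROC analysis. Scores and
# labels are synthetic, per-grid-cell values invented for illustration.

def roc_auc(scores, labels):
    """Mann-Whitney formulation of the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positive cells over negative cells;
    # ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: Coulomb stress change per grid cell (bar) and whether
# an aftershock occurred in that cell. AUC = 1.0 would be a perfect ranking;
# 0.5 is no better than chance.
cfs = [0.8, 0.5, 0.3, 0.1, -0.2, -0.4]
hit = [1,   1,   0,   1,   0,    0]
auc = roc_auc(cfs, hit)
```

Competing stress metrics (e.g. CFS with fixed vs. variable receivers) can then be ranked by comparing their AUC values on the same aftershock catalog.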
To corroborate the Coulomb stress calculations with findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data and then computed the Coulomb failure stress changes on the aftershock fault planes. I found that all large aftershocks lie in regions of positive stress change, indicating triggering by either coseismic or postseismic slip on the mainshock fault.
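The sign test applied to each aftershock plane above rests on the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, with the shear stress change Δτ resolved in the slip direction and the normal stress change Δσn positive for unclamping. A minimal sketch, with an assumed (but typical) effective friction coefficient:

```python
# Hedged sketch of the Coulomb failure stress change. The effective
# friction value mu_eff = 0.4 is an assumed, commonly used choice, and the
# stress inputs below are invented for illustration.

def delta_cfs(d_tau, d_sigma_n, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (MPa).

    d_tau:     shear stress change resolved in the slip direction (MPa).
    d_sigma_n: normal stress change, positive for unclamping (MPa).
    """
    return d_tau + mu_eff * d_sigma_n

# A receiver fault that is both loaded in shear and unclamped is promoted
# toward failure (positive dCFS), consistent with triggered aftershocks;
# a clamped, unloaded fault falls in a stress shadow (negative dCFS).
promoted = delta_cfs(0.05, 0.02)
inhibited = delta_cfs(-0.03, -0.10)
```

In the study's terms, finding all large aftershocks at positive ΔCFS is the evidence for triggering by mainshock slip.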
Finally, I investigated the relationship between mainshock-induced stress changes and associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and the induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimations of aftershock hazard.
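The b-value referred to above is commonly estimated with Aki's maximum-likelihood formula, b = log10(e) / (mean(M) − Mc), for magnitudes above the completeness magnitude Mc. A minimal sketch with synthetic magnitudes (the binning correction used in careful studies is omitted):

```python
import math

# Hedged sketch of the Gutenberg-Richter b-value estimate (Aki's
# maximum-likelihood form). The magnitude list is synthetic, and the
# magnitude-binning correction is omitted for brevity.

def b_value(magnitudes, m_c):
    """Aki (1965) estimator: b = log10(e) / (mean(M) - Mc)."""
    sample = [m for m in magnitudes if m >= m_c]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic aftershock magnitudes above a completeness magnitude Mc = 2.0.
mags = [2.1, 2.3, 2.2, 2.8, 2.4, 3.1, 2.5, 2.2, 2.6, 2.9]
b = b_value(mags, 2.0)
```

Mapping such b-value estimates against the per-sequence stress metrics is what reveals the positive stress-b correlation reported in the thesis.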
The findings in this thesis provide valuable insights into earthquake triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid the development of more accurate probabilistic seismic hazard forecasts and risk-reduction strategies.
Earthquake site responses, or site effects, are the modifications of seismic waves by surface geology. How well can we currently predict site effects (averaged over many earthquakes) at individual sites? To address this question, we tested and compared the effectiveness of different estimation techniques in predicting the outcrop Fourier site responses separated from recordings using the generalized inversion technique (GIT). The techniques evaluated are (a) the empirical correction to the horizontal-to-vertical spectral ratio of earthquakes (c-HVSR), (b) one-dimensional ground response analysis (GRA), and (c) the square-root-impedance (SRI) method (also called the quarter-wavelength approach). Our results show that, in the aggregate, c-HVSR captures significantly more site-specific features in site responses than both GRA and SRI, especially at relatively high frequencies. c-HVSR achieves a "good match" in spectral shape at ~80%-90% of 145 testing sites, whereas GRA and SRI fail at most sites. GRA and SRI results carry a high level of parametric and/or modeling error, which can be constrained, to some extent, by collecting on-site recordings.
The creation of building exposure models for seismic risk assessment is frequently challenging due to the lack of detailed information on building structures. Different strategies have been developed in recent years to overcome this, including the use of census data, remote sensing imagery, and volunteered geographic information (VGI). This paper presents the development of a building-by-building exposure model based exclusively on openly available datasets, including both VGI and census statistics, which are defined at different levels of spatial resolution and for different moments in time. The initial model, stemming purely from building-level data, is enriched with statistics aggregated at the neighbourhood and city level by means of a Monte Carlo simulation that enables the generation of full realisations of damage estimates when the exposure model is used in an earthquake scenario calculation. Though applicable to any other region of interest where analogous datasets are available, the workflow and approach are explained by focusing on the case of the German city of Cologne, for which a scenario earthquake is defined and the potential damage is calculated. The resulting exposure model and damage estimates are presented, and the latter are shown to be broadly consistent with damage data from the 1978 Albstadt earthquake, notwithstanding the differences between the scenarios. Through this real-world application we demonstrate the potential of VGI and open data for exposure modelling in natural-risk assessment, when combined with suitable knowledge of building fragility and accounting for the inherent uncertainties.
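The Monte Carlo step described above can be sketched as follows: buildings whose class is unknown at the footprint level are each assigned a class by sampling from shares aggregated at a coarser level, producing one complete exposure realisation per draw. Repeating the draw propagates the exposure uncertainty into the damage calculation. Classes, shares, and counts below are illustrative assumptions.

```python
import random

# Hedged sketch of generating full exposure realisations by Monte Carlo
# sampling. Building classes and neighbourhood-level shares are invented
# for illustration.

def sample_realisation(n_buildings, class_shares, rng):
    """Draw one exposure realisation: a class label for every building."""
    classes = list(class_shares)
    weights = [class_shares[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n_buildings)

rng = random.Random(42)  # fixed seed so realisations are reproducible
shares = {"masonry": 0.5, "rc_frame": 0.35, "timber": 0.15}
realisations = [sample_realisation(1000, shares, rng) for _ in range(100)]

# Across many realisations the sampled shares converge to the inputs, while
# each single realisation carries the sampling uncertainty forward into the
# scenario damage estimate computed from it.
masonry_share = sum(r.count("masonry") for r in realisations) / 100_000
```

Running the damage model once per realisation then yields a distribution of loss estimates rather than a single number.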
Detecting whether and how river discharge responds to strong earthquake shaking can be time-consuming and prone to operator bias when hydrographs from hundreds of gauging stations must be checked. We use Bayesian piecewise regression models to show that up to a fifth of all gauging stations across Chile had their largest change in daily streamflow trend on the day of the Mw 8.8 Maule earthquake in 2010. These stations cluster distinctly in the near field, though the number of detected streamflow changes varies with model complexity and the length of the time window considered. Credible seismic streamflow changes at several stations were the largest detectable in eight months, with an increased variance of discharge surpassing that following rainstorms. We conclude that Bayesian piecewise regression sheds new and unbiased light on the duration, trend, and variance of streamflow responses to strong earthquakes, and on how these responses compare with those following rainstorms.
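A simplified, non-Bayesian analogue of the piecewise-regression idea above: for each candidate break day, fit independent straight lines to the discharge series before and after it, and pick the break with the smallest total least-squares misfit. The Bayesian model in the study additionally yields credible intervals on the break and the trends; the discharge series below is synthetic.

```python
# Hedged sketch: breakpoint detection via a two-segment least-squares fit,
# a crude stand-in for the Bayesian piecewise regression in the study.
# The daily discharge series is synthetic.

def sse_of_line(ts, ys):
    """Sum of squared residuals of the best-fit straight line."""
    n = len(ts)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    den = sum((t - t_mean) ** 2 for t in ts)
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / den
    return sum((y - (y_mean + slope * (t - t_mean))) ** 2
               for t, y in zip(ts, ys))

def best_breakpoint(discharge, min_seg=3):
    """Index whose two-segment linear fit has the smallest total misfit."""
    t = list(range(len(discharge)))
    return min(
        range(min_seg, len(discharge) - min_seg),
        key=lambda k: sse_of_line(t[:k], discharge[:k])
                      + sse_of_line(t[k:], discharge[k:]),
    )

# Synthetic daily discharge: flat baseline, then a coseismic step and a
# rising trend starting on day 10.
q = [5.0] * 10 + [8.0 + 0.8 * d for d in range(1, 11)]
day = best_breakpoint(q)
```

Scanning every gauging station this way removes the operator bias of visual hydrograph inspection, which is the motivation stated in the abstract.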
Evaluation of a novel application of earthquake HVSR in site-specific amplification estimation (2020)
Ground response analyses (GRA) model the vertical propagation of SH waves through flat-layered media (1DSH) and are widely carried out in practice to evaluate local site effects. The horizontal-to-vertical spectral ratio (HVSR) technique is a cost-effective approach for extracting certain site-specific information, e.g., the site fundamental frequency (f0), but HVSR values cannot be used directly to approximate the level of S-wave amplification. Motivated by the work of Kawase et al. (2019), we propose a procedure to correct earthquake HVSR amplitudes for direct amplification estimation. The empirical correction compensates HVSR with generic vertical amplification spectra categorized by the vertical fundamental frequency (f0v) via k-means clustering. In this investigation, we evaluate the effectiveness of the corrected HVSR in approximating observed linear amplifications in comparison with 1DSH modelling. We select a total of 90 KiK-net (Kiban Kyoshin network) surface-downhole sites that are found to have no velocity contrasts below their boreholes, so that their surface-to-borehole spectral ratios (SBSRs) can be taken as their empirical transfer functions (ETFs). 1DSH-based theoretical transfer functions (TTFs) are computed in the linear domain, considering uncertainties in Vs profiles through randomization. Five goodness-of-fit metrics are adopted to gauge the closeness between observed (ETF) and predicted (TTF and corrected HVSR) amplifications in both amplitude and spectral shape over frequencies from f0 to 25 Hz. We find that the empirical correction to HVSR is highly effective and achieves a "good match" in both spectral shape and amplitude at the majority of the 90 KiK-net sites, as opposed to less than one-third for the 1DSH modelling. In addition, the empirical correction does not require a velocity model, which GRA do, and thus holds great potential for seismic hazard assessment.
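A minimal sketch of the quantity being corrected above, the earthquake HVSR itself: the ratio of the combined horizontal to the vertical Fourier amplitude spectrum. The three-component "record" is synthetic, and the S-wave windowing and spectral smoothing (e.g. Konno-Ohmachi) used in practice are omitted.

```python
import cmath
import math

# Hedged sketch of computing an HVSR curve from a synthetic three-component
# record. Real processing windows the S-wave and smooths the spectra; both
# steps are omitted here.

def amplitude_spectrum(signal):
    """Magnitudes of a naive DFT (O(n^2); fine for this short sketch)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def hvsr(east, north, vertical):
    """HVSR using the geometric mean of the two horizontal spectra."""
    e, no, v = (amplitude_spectrum(s) for s in (east, north, vertical))
    return [math.sqrt(ei * ni) / vi
            for ei, ni, vi in list(zip(e, no, v))[1:]]  # skip the DC bin

# Synthetic site: the horizontals carry a resonance at DFT bin 4 that the
# vertical component lacks, so the HVSR peaks at the site frequency.
n = 64
impulse = [1.0 if t == 0 else 0.0 for t in range(n)]
horiz = [impulse[t] + 2.0 * math.cos(2 * math.pi * 4 * t / n)
         for t in range(n)]
ratio = hvsr(horiz, horiz, impulse)
peak_bin = max(range(len(ratio)), key=ratio.__getitem__) + 1  # DFT bin index
```

The peak identifies f0; the paper's contribution is the subsequent amplitude correction by f0v-clustered vertical spectra, which this sketch does not attempt.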
Along a subduction zone, great megathrust earthquakes recur either after long seismic gaps lasting several decades to centuries, or over much shorter periods of hours to a few years when cascading successions of earthquakes rupture nearby segments of the fault. We analyze a decade of continuous Global Positioning System observations along the South American continent to estimate changes in deformation rates between the 2010 Maule (M8.8) and 2015 Illapel (M8.3) Chilean earthquakes. We find that surface velocities increased after the 2010 earthquake, in response to continental-scale viscoelastic mantle relaxation and to a regionally increased degree of interplate locking. We propose that increased locking occurs transiently during a super-interseismic phase in segments adjacent to a megathrust rupture, responding to bending of both plates caused by coseismic slip and subsequent afterslip. Enhanced strain rates during a super-interseismic phase may therefore bring a megathrust segment closer to failure and may have triggered the 2015 event.
The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, including a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented by timely evacuation of low-lying areas. We hindcast the flood using the physically based BREACH dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those from various empirical formulas and with a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as raw topography underestimates flow depth and speed and overestimates flood-wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model, but the 1-dimensional model yields a more conservative result and can be run in much less time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.
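As a much cruder stand-in for the BREACH-generated upstream hydrograph described above, one can build a volume-conserving triangular hydrograph from an assumed impounded volume, peak discharge, and time to peak; the base time then follows from the triangle's area. All numbers below are invented for illustration, not values from the Kali Gandaki event.

```python
# Hedged sketch: a volume-conserving triangular outflow hydrograph, a crude
# simplification of a physically based breach model. Impounded volume, peak
# discharge, and time to peak are assumed values, not observations.

def triangular_hydrograph(volume, q_peak, t_peak, n_steps=100):
    """Discharge rising linearly to q_peak at t_peak, then receding linearly
    to zero, with total released volume equal to `volume` (requires
    t_peak < 2 * volume / q_peak)."""
    t_base = 2.0 * volume / q_peak  # base time from the triangle's area
    dt = t_base / n_steps
    times = [i * dt for i in range(n_steps + 1)]
    flows = [q_peak * (t / t_peak) if t <= t_peak
             else q_peak * (t_base - t) / (t_base - t_peak)
             for t in times]
    return times, flows

volume = 4.0e6   # m^3 impounded behind the landslide dam (assumed)
q_peak = 2000.0  # m^3/s peak breach discharge (assumed)
times, flows = triangular_hydrograph(volume, q_peak, t_peak=600.0)

# Released volume recovered by trapezoidal integration (approx. `volume`).
released = sum(0.5 * (flows[i] + flows[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))
```

Such a hydrograph can serve as the upstream boundary condition for 1-D or 2-D flood routing when no physically based breach simulation is available.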
The impressive number of stream gauges in Chile, combined with a suite of past and recent large earthquakes, makes Chile a unique natural laboratory for studying streams that recorded responses to multiple seismic events. We document changes in discharge in eight streams in Chile following two or more large earthquakes. In all cases, discharge increases. Changes in discharge occur for peak ground velocities greater than about 7-11 cm/s. Above that threshold, the magnitude of both the increase in discharge and the total excess water does not increase with increasing peak ground velocity. While these observations are consistent with previous work in California, they conflict with lab experiments showing that the magnitude of permeability changes increases with increasing amplitude of ground motion. Instead, our study suggests that streamflow responses are binary.
Plain Language Summary: Earthquakes deform and shake the surface and the ground below. These changes may affect groundwater flow by increasing the permeability along newly formed cracks and/or clearing clogged pores. As a result, groundwater flow may substantially increase after earthquakes and remain elevated for several months. Here we document streamflow anomalies following multiple high-magnitude earthquakes in multiple streams in one of the most earthquake-prone regions worldwide, Chile. We take advantage of Chile's dense monitoring network, which has recorded streamflow since the 1940s. We show that once a critical level of ground motion is exceeded, streamflow responses to earthquakes can be expected.
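The binary-response reading of these results can be expressed as a simple threshold rule: below the reported shaking band no streamflow change is expected, above it a response is expected regardless of how far the threshold is exceeded. The 7-11 cm/s band is taken from the abstract; the station values are invented.

```python
# Hedged sketch of the binary streamflow-response rule suggested by the
# study. The threshold band is the abstract's 7-11 cm/s; station PGV
# values are invented for illustration.

PGV_THRESHOLD_CM_S = (7.0, 11.0)  # reported band for triggering a response

def expect_streamflow_response(pgv_cm_s):
    """Return True / False / None (indeterminate inside the band)."""
    low, high = PGV_THRESHOLD_CM_S
    if pgv_cm_s < low:
        return False
    if pgv_cm_s >= high:
        return True
    return None  # within the 7-11 cm/s band the outcome is ambiguous

stations = {"far_field": 2.5, "band_edge": 9.0, "near_field": 40.0}
expected = {name: expect_streamflow_response(v)
            for name, v in stations.items()}
```

Note the deliberately flat behavior above the band: per the abstract, neither the discharge increase nor the total excess water scales with peak ground velocity once the threshold is exceeded.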