The Ms ∼ 7.7 Sarez-Pamir earthquake of 1911 February 18 is the largest instrumentally recorded earthquake in the Pamir region. It triggered one of the largest landslides of the past century, which built a giant natural dam and formed Lake Sarez. As for many strong earthquakes of that era, information about the source parameters of the Sarez-Pamir earthquake is limited due to sparse observations. Here, we present an analysis of analogue seismic records of the Sarez-Pamir earthquake. We collected, scanned and digitized 26 seismic records from 13 stations worldwide to relocate the epicentre and determine the event's depth (∼26 km) and magnitude (mB 7.3 and Ms 7.7). The unusually good quality of the digitized waveforms allowed their modelling, revealing an NE-striking sinistral strike-slip focal mechanism consistent with regional tectonics. The shallow depth and magnitude (Mw 7.3) of the earthquake were confirmed. Additionally, we investigated the possible contribution of the landslide to the waveforms and present an alternative source model in which the landslide and earthquake occurred in close sequence.
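Magnitude estimates like the Ms 7.7 quoted above are conventionally derived from amplitude readings on the digitized records. As an illustrative sketch only (not the authors' exact procedure), the standard Prague/IASPEI surface-wave magnitude formula applied to a single amplitude reading might look like this; the example values are hypothetical:

```python
import numpy as np

def surface_wave_magnitude(amplitude_um, period_s, distance_deg):
    """Prague/IASPEI surface-wave magnitude formula:
    Ms = log10(A/T) + 1.66*log10(delta) + 3.3,
    with A the ground amplitude in micrometres, T the period in
    seconds and delta the epicentral distance in degrees
    (nominally valid for distances of about 20-160 degrees)."""
    return np.log10(amplitude_um / period_s) + 1.66 * np.log10(distance_deg) + 3.3

# hypothetical reading: 200 um amplitude at 20 s period, 100 degrees distance
ms = surface_wave_magnitude(amplitude_um=200.0, period_s=20.0, distance_deg=100.0)
```

In practice, station magnitudes from many records are averaged to obtain a network Ms, which is how multi-station estimates such as the one above are usually formed.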
The design of an array configuration is an important task in array seismology during experiment planning. Often the array response function (ARF), which depends on the relative positions of the array stations and the frequency content of the incoming signals, is used as the array design criterion. In practice, additional constraints and parameters have to be taken into account, for example land ownership, site-specific noise levels or characteristics of the seismic sources under investigation. In this study, a flexible array design framework is introduced that implements a customizable scenario-modelling and optimization scheme based on synthetic seismograms. Using synthetic seismograms to evaluate array performance makes it possible to consider such additional constraints. We suggest using synthetic array beamforming as the array design criterion instead of the ARF. The objective function of the optimization scheme is defined according to the monitoring goals and may consist of a number of subfunctions. The array design framework is exemplified by designing a seven-station small-scale array to monitor earthquake swarm activity in Northwest Bohemia/Vogtland in central Europe. Two subfunctions are introduced to verify the accuracy of the horizontal slowness estimation: one to suppress aliasing effects due to possible secondary lobes of the synthetic array beamforming calculated in horizontal slowness space, and the other to reduce event mislocation caused by miscalculation of the horizontal slowness vector. Subsequently, a weighting technique is applied to combine the subfunctions into a single scalar objective function for use in the optimization process.
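The ARF that this abstract contrasts with synthetic beamforming depends only on station geometry and frequency. A minimal sketch of the classical ARF, evaluated over a grid of trial horizontal slownesses (function name and example geometry are ours, not from the paper):

```python
import numpy as np

def array_response(coords_km, slowness_grid, freq_hz):
    """Classical array response function
    ARF(s) = |sum_k exp(-2*pi*i*f * s . r_k)|^2 / N^2
    for N station coordinates (N, 2) in km and M trial
    horizontal slowness vectors (M, 2) in s/km."""
    phase = 2.0 * np.pi * freq_hz * (slowness_grid @ coords_km.T)  # (M, N)
    beams = np.exp(-1j * phase).sum(axis=1)
    return np.abs(beams) ** 2 / coords_km.shape[0] ** 2

# four stations on a 1 km square; the ARF equals 1 at zero slowness,
# while secondary lobes (aliasing) appear at other slownesses
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
grid = np.array([[0.0, 0.0], [0.5, 0.0]])
resp = array_response(coords, grid, freq_hz=1.0)
```

The secondary lobes of exactly this kind of slowness-space response are what the first subfunction in the study is designed to suppress; the synthetic-beamforming criterion replaces the plane-wave assumption here with full synthetic seismograms.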
Earthquake source arrays
(2020)
A collection of earthquake sources recorded at a single station can, under specific conditions, be treated as a source array (SA): the recordings are interpreted as if the sources originated at the station location and were recorded at the source locations. Array processing methods, such as array beamforming, then become applicable to the recorded signals. A possible application is to use source-array multiple-event techniques to locate and characterize near-source scatterers and structural interfaces. In this work the aim is to facilitate the use of earthquake source arrays by presenting an automatic search algorithm to configure the source array elements. We developed a procedure that searches for an optimal source-array element distribution given an earthquake catalogue with accurate origin times and hypocentre locations. The objective function of the optimization process can be defined flexibly for each application to ensure that the prerequisites (criteria) for forming a source array are met. We formulated four quantitative criteria as subfunctions and used the weighted-sum technique to combine them into a single scalar function. The criteria are: (1) to control the accuracy of the slowness vector estimation using the time-domain beamforming method, (2) to measure the waveform coherency of the array elements, (3) to select events with lower location error and (4) to select traces with high energy in specific phases, that is, sp- or ps-phases. The proposed procedure is verified using synthetic data as well as real examples from the Vogtland region in Northwest Bohemia. We discuss a possible application of the optimized source arrays, identifying the location of scatterers in the velocity model, by presenting a synthetic test and an example using real waveforms.
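The weighted-sum technique used here to merge the four subfunctions into one scalar objective can be sketched generically. The min-max normalization below is our assumption for putting criteria on a common scale; the paper's exact scaling may differ:

```python
import numpy as np

def combine_criteria(criteria, weights):
    """criteria: (n_candidates, n_criteria) array where lower is
    better for every criterion.  Each column is min-max normalized
    to [0, 1] so no criterion dominates by scale, then the columns
    are merged by the weighted-sum technique into one scalar
    objective value per candidate configuration."""
    c = np.asarray(criteria, dtype=float)
    lo, hi = c.min(axis=0), c.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    w = np.asarray(weights, dtype=float)
    return ((c - lo) / span) @ (w / w.sum())

def best_candidate(criteria, weights):
    """Index of the candidate configuration minimizing the objective."""
    return int(np.argmin(combine_criteria(criteria, weights)))

# three candidate element selections scored on two criteria,
# with the first criterion weighted more heavily
scores = combine_criteria([[0.0, 10.0], [1.0, 0.0], [0.5, 5.0]],
                          weights=[0.8, 0.2])
```

The weights encode the relative importance of, for example, slowness accuracy versus waveform coherency, which is what makes the objective adaptable to each application.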
Earthquakes often rupture across more than one fault segment. If such rupture segmentation occurs on a significant scale, a simple point-source or single-fault model may not represent the rupture process well. As a consequence, earthquake characteristics inferred under one-source assumptions may become systematically wrong. This can affect follow-up analyses, for example regional stress-field inversions and seismic hazard assessments. While rupture segmentation is evident for most Mw > 7 earthquakes, smaller ones with 5.5 < Mw < 7 can also be segmented. We investigate the sensitivity of globally available data sets to rupture segmentation and their resolution for reliably estimating mechanisms in the presence of segmentation. We focus on the sensitivity of InSAR (Interferometric Synthetic Aperture Radar) data in the static near-field and of seismic waveforms in the far-field of the rupture, and carry out non-linear and Bayesian optimizations of single-source and two-source kinematic models (double-couple point sources and finite rectangular sources) using InSAR and teleseismic waveforms separately. Our case studies comprise four Mw 6-7 earthquakes: the 2009 L'Aquila and 2016 Amatrice (Italy) and the 2005 and 2008 Zhongba (Tibet) earthquakes. We contrast the data misfits of models of different source complexity using the Akaike information criterion (AIC). We find that the AIC method is well suited for data-driven inferences on significant rupture segmentation for the given data sets. This is based on our observation that an AIC-stated significant improvement of data fit for two-segment models over one-segment models correlates with significantly different mechanisms of the two source segments, and of their average, compared to the single-segment mechanism. We attribute these modelled differences to a sufficient sensitivity of the data to resolve rupture segmentation.
Our results show that near-field data are generally more sensitive to rupture segmentation of shallow earthquakes than far-field data, but that teleseismic data can also resolve rupture segmentation in the studied magnitude range. We further conclude that a significant difference in the modelled source mechanisms for different segmentations shows that an appropriate choice of model segmentation matters for a robust estimation of source mechanisms: it reduces systematic biases and trade-offs and thereby improves knowledge of the rupture. Our study presents a strategy and method to detect significant rupture segmentation so that an appropriate model complexity can be used in the source-mechanism inference. A similar, systematic investigation of earthquakes in the range Mw 5.5-7 could provide important hazard-relevant statistics on rupture segmentation. In such cases single-source models introduce a systematic bias; consideration of rupture segmentation therefore matters for a robust estimation of source mechanisms.
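The AIC-based model comparison described in this abstract penalizes the extra parameters of a two-segment model against its misfit reduction. A generic sketch (the Gaussian-residual form of the AIC; helper names, threshold and numbers are ours, not the paper's):

```python
import numpy as np

def aic_gaussian(rss, n_data, n_params):
    """AIC for a least-squares fit with i.i.d. Gaussian residuals,
    constant terms dropped: AIC = n * ln(RSS / n) + 2 * k."""
    return n_data * np.log(rss / n_data) + 2 * n_params

def prefers_two_segments(aic_one, aic_two, min_delta=2.0):
    """Flag the two-segment model only when its AIC improvement
    exceeds a chosen threshold (Delta-AIC > 2 is a common rule
    of thumb, assumed here)."""
    return (aic_one - aic_two) > min_delta

# hypothetical fits to 100 data points: a one-source model with
# 10 parameters (RSS 50) vs. a two-source model with 20 parameters
# (RSS 20); the misfit gain outweighs the doubled parameter count
a1 = aic_gaussian(rss=50.0, n_data=100, n_params=10)
a2 = aic_gaussian(rss=20.0, n_data=100, n_params=20)
```

If instead the two-source model only marginally reduced the residuals, its parameter penalty would dominate and the simpler model would be retained, which is the data-driven decision rule the study relies on.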