Year of publication
- 2014 (228)
Document Type
- Article (196)
- Doctoral Thesis (19)
- Preprint (8)
- Review (4)
- Postprint (1)
Keywords
- Earthquake source observations (7)
- Holocene (6)
- Tibetan Plateau (4)
- Erosion (3)
- Himalaya (3)
- Seismicity and tectonics (3)
- Aleatory variability (2)
- Andes (2)
- Baseline shift (2)
- Body waves (2)
Institute
- Institut für Geowissenschaften (228)
Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterised by a low signal-to-noise ratio, such methods are required to be noise robust and sufficiently accurate. Most standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves and on the minimization of the residuals between theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S-wave onset. In this thesis I developed a picking-free automated method that uses the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. I used the STA/LTA of several characteristic functions in order to increase the sensitivity to the P and S waves. For the P phases we use the STA/LTA traces of the vertical energy function, while for the S phases we use the STA/LTA traces of the horizontal energy trace and then a more optimized characteristic function obtained using the principal component analysis technique. The orientation of the horizontal components can be retrieved by a robust, linear approach based on waveform comparison between stations within a network, using seismic sources outside the network (chapter 2). To locate the seismic event, we scan the space of possible hypocentral locations and origin times, and stack the STA/LTA traces along the theoretical arrival time surface for both P and S phases. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event.
Location uncertainties are then estimated by perturbing the STA/LTA parameters (i.e., the lengths of both the long and short time windows) and relocating each event several times. In order to test the location method, I first applied it to a set of 200 synthetic events, and then to two different real datasets. The first is related to mining-induced microseismicity in a coal mine in northern Germany (chapter 3). In this case we successfully located 391 microseismic events with magnitudes between 0.5 and 2.0 Ml. To further validate the location method, I compared the retrieved locations with those obtained by a manual picking procedure. The second dataset consists of a pilot application performed in the Campania-Lucania region (southern Italy) using a 33-station seismic network (Irpinia Seismic Network) with an aperture of about 150 km (chapter 4). We located 196 crustal earthquakes (depth < 20 km) in the magnitude range 1.1 < Ml < 2.7. A subset of these locations was compared with accurate locations retrieved by a manual location procedure based on a double-difference technique. In both cases the results indicate good agreement with the manual locations. Moreover, the waveform-stacking location method proves noise robust and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
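The stacking scheme described in this abstract can be sketched compactly. The following toy is an illustration only (1-D geometry, homogeneous velocity, P phase only; function names and parameter values are our assumptions, not the author's code):

```python
import numpy as np

def sta_lta(trace, n_short, n_long):
    """Classic STA/LTA characteristic function of an energy trace."""
    energy = trace ** 2
    sta = np.convolve(energy, np.ones(n_short) / n_short, mode="same")
    lta = np.convolve(energy, np.ones(n_long) / n_long, mode="same")
    return sta / (lta + 1e-12)

def locate(cf_traces, station_x, grid_x, vel, dt):
    """Grid search: stack STA/LTA traces along theoretical P arrival-time
    curves for every candidate location and origin time; the maximum of the
    stack gives the source coordinates (picking free, no phase picks used)."""
    best = (-np.inf, None, None)
    n = cf_traces.shape[1]
    for x in grid_x:                              # candidate source positions
        tt = np.abs(station_x - x) / vel          # theoretical travel times (s)
        shifts = np.round(tt / dt).astype(int)
        for t0 in range(n - shifts.max()):        # candidate origin samples
            s = sum(tr[t0 + k] for tr, k in zip(cf_traces, shifts))
            if s > best[0]:
                best = (s, x, t0)
    return best  # (stack value, location, origin-time sample)
```

With synthetic spike "arrivals" the stack maximum falls at the true position and origin time, while mismatched locations align only a subset of stations.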
Identification and characterization of growing large-scale en-echelon fractures in a salt mine
(2014)
The spatiotemporal seismicity of acoustic emission (AE) events recorded in the Morsleben salt mine is investigated. Almost a year after the backfilling of the cavities in 2003, microevents are distributed in distinctive stripe shapes above cavities at different depth levels. The physical forces driving the creation of these stripes are still unknown. This study aims to find the active stripes and track fracture development over time by combining two different temporal and spatial clustering techniques into a single methodological approach. Anomalous seismicity parameter values, such as sharp b-value changes for two active stripes, are good indicators of possible stress accumulation at the stripe tips. We identify the formation of two new seismicity stripes and show that the AE activity in active clusters migrates mostly unidirectionally, eastward and upward. This indicates that the growth of the underlying macrofractures is controlled by the gradient of extensional stress. Studying the size-distribution characteristics in terms of the frequency-magnitude distribution and b-value in the active phase and in the phase with a constant seismicity rate shows that deviations from the Gutenberg-Richter power law can be explained by the inclusion of different activity phases: (1) the inactive period before the formation of macrofractures, which is characterized by a deficit of larger events (higher b-values), and (2) the period of fracture growth, characterized by the occurrence of larger events (smaller b-values).
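The b-value changes discussed above are conventionally estimated with the Aki-Utsu maximum-likelihood estimator; a minimal sketch (the study's actual estimator is not specified here, so treat this as one standard choice):

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc.
    dm is the magnitude binning width (Utsu's correction); use dm=0
    for continuous magnitudes."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
```

Lower b-values correspond to relatively more large events (the fracture-growth phase above), higher b-values to a deficit of large events (the inactive phase).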
A spatially localized seismic sequence originated a few tens of kilometres offshore of the Mediterranean coast of Spain, close to the Ebro river delta, starting on 2013 September 5 and lasting at least until 2013 October. The sequence culminated in a maximal moment magnitude Mw 4.3 earthquake on 2013 October 1. The most relevant seismogenic feature in the area is the Fosa de Amposta fault system, which includes different strands mapped at different distances to the coast, with a general NE-SW orientation, roughly parallel to the coastline. However, no significant known historical seismicity has involved this fault system in the past. The epicentral region is also located near the offshore platform of the Castor project, where gas is conducted through a pipeline from the mainland and was recently injected into a depleted oil reservoir at about 2 km depth. We analyse the temporal evolution of the seismic sequence and use full waveform techniques to derive absolute and relative locations, estimate depths and focal mechanisms for the largest events in the sequence (with magnitude mbLg larger than 3), and compare them to a previous event (2012 April 8, mbLg 3.3) that took place in the same region prior to the gas injection. Moment tensor inversion results show that the overall seismicity in this sequence is characterized by oblique mechanisms with a normal fault component, with a 30° low-dip-angle plane oriented NNE-SSW and a subvertical plane oriented NW-SE. The combined analysis of hypocentral locations and focal mechanisms could indicate that the seismic sequence corresponds to rupture processes along shallow low-dip surfaces, which could have been triggered by the gas injection in the reservoir, and excludes the activation of the Amposta fault, as its known orientation is inconsistent with the focal mechanism results.
An alternative scenario includes the repeated triggering of a system of steep NW-SE-oriented faults, which were identified by previous marine seismic investigations.
Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterized by a low signal-to-noise ratio, automated location methods are required to be noise robust and sufficiently accurate. Most of the standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves and on the minimization of the residuals between theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S-wave onset. In this paper, we propose a picking-free earthquake location method based on the use of the short-term-average/long-term-average (STA/LTA) traces at different stations as observed data. For the P phases, we use the STA/LTA traces of the vertical energy function, whereas for the S phases, we use the STA/LTA traces of a second characteristic function, which is obtained using the principal component analysis technique. In order to locate the seismic event, we scan the space of possible hypocentral locations and origin times, and stack the STA/LTA traces along the theoretical arrival time surface for both P and S phases. Iterating this procedure on a 3-D grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event. A pilot application was performed in the Campania-Lucania region (southern Italy) using a seismic network (Irpinia Seismic Network) with an aperture of about 150 km. We located 196 crustal earthquakes (depth < 20 km) in the magnitude range 1.1 < ML < 2.7. A subset of these locations was compared with accurate manual locations refined by using a double-difference technique. Our results indicate a good agreement with the manual locations.
Moreover, our method is noise robust and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
Permafrost-affected soils are among the most obvious ecosystems in which current microbial controls on organic matter decomposition are changing as a result of global warming. Warmer conditions in polygonal tundra will lead to a deepening of the seasonal active layer, provoking changes in microbial processes and possibly resulting in exacerbated carbon degradation under increasingly anoxic conditions. To identify current microbial assemblages in carbon-rich, water-saturated permafrost environments, four polygonal tundra sites were investigated on Herschel Island and the Yukon Coast, Western Canadian Arctic. Ion Torrent sequencing of bacterial and archaeal 16S rRNA amplicons revealed the presence of all major microbial soil groups and indicated a local, vertical heterogeneity of the polygonal tundra soil community with increasing depth. Microbial diversity was found to be highest in the surface layers, decreasing towards the permafrost table. Quantitative PCR analysis of functional genes involved in carbon and nitrogen cycling revealed a high functional potential in the surface layers, decreasing with increasing active-layer depth. We observed that the soil properties driving microbial diversity and functional potential varied among the study sites. These results highlight the small-scale heterogeneity of geomorphologically comparable sites, greatly restricting generalizations about the fate of permafrost-affected environments in a warming Arctic.
Flood hazard projections under climate change are typically derived by applying model chains consisting of the following elements: "emission scenario - global climate model - downscaling, possibly including bias correction - hydrological model - flood frequency analysis". To date, this approach yields very uncertain results, owing to the difficulty global and regional climate models have in representing precipitation. The implementation of such model chains requires major effort, and their complexity is high.
We propose for the Mekong River an alternative approach which is based on a shortened model chain: "emission scenario - global climate model - non-stationary flood frequency model". The underlying idea is to use a link between the Western Pacific monsoon and local flood characteristics: the variance of the monsoon drives a non-stationary flood frequency model, yielding a direct estimate of flood probabilities. This approach bypasses the uncertain precipitation, since the monsoon variance is derived from large-scale wind fields which are better represented by climate models. The simplicity of the monsoon-flood link allows deriving large ensembles of flood projections under climate change. We conclude that this is a worthwhile, complementary approach to the typical model chains in catchments where a substantial link between climate and floods is found.
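The monsoon-flood link described above can be written down compactly. Below is an illustrative sketch of a non-stationary Gumbel flood-frequency model whose location parameter is driven linearly by the monsoon-variance covariate; the Gumbel form (rather than a general GEV), the linear link and all parameter values are our assumptions, not the paper's fitted model:

```python
import numpy as np

def gumbel_exceedance(q, loc, scale):
    """P(annual maximum discharge > q) under a Gumbel distribution."""
    return 1.0 - np.exp(-np.exp(-(q - loc) / scale))

def nonstationary_flood_prob(q, monsoon_var, a, b, scale):
    """Non-stationary flood probability: the Gumbel location parameter is
    driven by the monsoon-variance covariate, loc = a + b * monsoon_var."""
    return gumbel_exceedance(q, a + b * monsoon_var, scale)
```

A stronger monsoon (larger covariate) directly raises the exceedance probability of a given discharge threshold, which is the shortcut the shortened model chain exploits.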
We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from 1 up to 10 variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.
Flood estimation and flood management have traditionally been the domain of hydrologists, water resources engineers and statisticians, and disciplinary approaches abound. Dominant views have been shaped; one example is the catchment perspective: floods are formed and influenced by the interaction of local, catchment-specific characteristics, such as meteorology, topography and geology. These traditional views have been beneficial, but they have a narrow framing. In this paper we contrast traditional views with broader perspectives that are emerging from an improved understanding of the climatic context of floods. We come to the following conclusions: (1) extending the traditional system boundaries (local catchment, recent decades, hydrological/hydraulic processes) opens up exciting possibilities for better understanding and improved tools for flood risk assessment and management. (2) Statistical approaches in flood estimation need to be complemented by the search for the causal mechanisms and dominant processes in the atmosphere, catchment and river system that leave their fingerprints on flood characteristics. (3) Natural climate variability leads to time-varying flood characteristics, and this variation may be partially quantifiable and predictable, with the perspective of dynamic, climate-informed flood risk management. (4) Efforts are needed to fully account for factors that contribute to changes in all three risk components (hazard, exposure, vulnerability) and to better understand the interactions between society and floods. (5) Given the global scale and societal importance, we call for the organization of an international multidisciplinary collaboration and data-sharing initiative to further understand the links between climate and flooding and to advance flood research.
In this paper we quantify the sediment dynamics in the formerly glaciated Zielbach catchment in the Italian Alps from the end of the Last Glacial Maximum (LGM) until today. As a basis for our quantification, we use the stratigraphic record offered by a 3.5 km² fan that we explore with a seismic survey, stratigraphic analyses of drillhole material, and ¹⁴C ages measured on organic matter encountered in these drillings. In addition, we calculate past denudation rate variability in the fan deposits using concentrations of cosmogenic ¹⁰Be. We merge this information into a scenario of how the sediment flux has changed through time and how this variability can be related to climatic variations, framed within well-known paraglacial models. The results document a highly complex natural system. From the LGM to the very early Holocene, ice-melt discharge and climate variability promoted a high sediment flux (sedimentation rate up to 40 mm/yr). This flux then dramatically decreased toward interglacial values (0.8 mm/yr at 5-4 calibrated kyr B.P.). However, in contrast to the trend of classic paraglacial models, the flux recorded at Zielbach shows secondary peaks at 6.5 ka and 2.5 ka, with values of 13 mm/yr and 1.5 mm/yr, respectively. Paleo-denudation rates also decrease from ~33 mm/yr at the beginning of the Holocene to 0.42 mm/yr at 5 ka, with peaks of ~6 mm/yr and 1.1 mm/yr at 6.5 ka and 2.5 ka. High-amplitude climate change is the most likely cause of the secondary peaks, but anthropogenic activities may have contributed as well. The good correlation between paleo-sedimentation and paleo-denudation rates suggests that the majority of the deglaciated material destocked from the Zielbach catchment is stored in the alluvial fan.
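Converting a cosmogenic ¹⁰Be concentration into a denudation rate usually follows the standard steady-state approximation; a sketch with typical literature parameter values, which are assumptions here rather than the study's calibration:

```python
def denudation_rate(n_be10, prod_rate, att_length=160.0, density=2.7,
                    decay=5.0e-7):
    """Steady-state denudation rate (cm/yr) from a 10Be concentration:

        eps = (Lambda / rho) * (P / N - lambda)

    n_be10:     measured concentration (atoms per g quartz)
    prod_rate:  local surface production rate (atoms per g per yr)
    att_length: absorption mean free path Lambda (g/cm^2), typical ~160
    density:    rock density rho (g/cm^3)
    decay:      10Be decay constant lambda (1/yr)
    """
    return (att_length / density) * (prod_rate / n_be10 - decay)
```

Higher nuclide concentrations imply longer near-surface residence and hence slower erosion, which is how the down-core ¹⁰Be samples translate into a paleo-denudation history.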
This article presents comparisons among the five ground-motion models described in other articles within this special issue, in terms of data selection criteria, characteristics of the models and predicted peak ground and response spectral accelerations. Comparisons are also made with predictions from the Next Generation Attenuation (NGA) models, to which the models presented here have similarities (e.g. a common master database has been used) but also differences (e.g. some models in this issue are nonparametric). As a result of the differing data selection criteria and derivation techniques, the predicted median ground motions show considerable differences (up to a factor of two for certain scenarios), particularly for magnitudes and distances close to or beyond the range of the available observations. The predicted influence of style-of-faulting shows much variation among models, whereas site amplification factors are more similar, with peak amplification at around 1 s. These differences are greater than those among predictions from the NGA models. The models for aleatory variability (sigma), however, are similar and suggest that ground-motion variability from this region is slightly higher than that predicted by the NGA models, which are based primarily on data from California and Taiwan.
Marine seismology usually relies on temporary deployments of stand-alone ocean-bottom seismic stations (OBS), which are initialized and synchronized on board ship before deployment and re-synchronized and stopped on board after recovery several months later. In between, the recorder clocks may drift at unknown, possibly varying rates. If the clock drifts are large or nonlinear and cannot be corrected for, seismological applications are limited to methods that do not require precise common timing. Array-seismological methods, for example, which need very accurate relative timing between individual stations, would not be applicable for such deployments.
We use an OBS test array of 12 stations and 75 km aperture, deployed for 10 months in the deep sea (4.5-5.5 km) of the mid-eastern Atlantic. The experiment was designed to analyse the potential of broad-band array seismology at the seafloor. After recovery, we identified some stations showing unusually large clock drifts and/or static time offsets, indicated by a large difference between the internal clock and the GPS signal (skew).
We test the approach of ambient noise cross-correlation to synchronize the clocks of a deep-water OBS array with km-scale interstation distances. We show that small drift rates and static time offsets can be resolved on vertical components with a standard technique. Larger clock drifts (several seconds per day) can only be accurately recovered if the time windows of one input trace are shifted according to the expected drift between a station pair before cross-correlation. We validate that the drifts extracted from the seismometer data are linear to first order. The same holds for most of the hydrophones. Moreover, we were able to determine the clock drift at a station where no skew could be measured. Furthermore, we find that unstable apparent drift rates at some hydrophones, uncorrelated with the seismometer drift recorded at the same digitizer, indicate a malfunction of the hydrophone.
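The drift measurement described above amounts to tracking the lag of the noise cross-correlation peak for a station pair over successive days; an illustrative sketch on synthetic traces (function names and the synthetic setup are our assumptions):

```python
import numpy as np

def peak_lag(a, b):
    """Lag (in samples) of the cross-correlation maximum between two traces."""
    xc = np.correlate(a, b, mode="full")
    return int(np.argmax(xc)) - (len(b) - 1)

def clock_drift_rate(daily_pairs):
    """Fit a line to the daily correlation-peak lags; the slope is the clock
    drift in samples per day (the sign depends on the correlation convention).
    A linear fit corresponds to the first-order-linear drifts reported above."""
    lags = [peak_lag(a, b) for a, b in daily_pairs]
    days = np.arange(len(lags))
    return np.polyfit(days, lags, 1)[0], lags
```

For drifts of several seconds per day, the abstract's pre-shifting of one trace by the expected drift keeps the peak inside the searched lag window; this sketch omits that step.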
Crustal earthquake swarms are an expression of intensive cracking and rock damage over periods of days, weeks or months in a small source region in the crust. They are caused by longer-lasting stress changes in the source region. Often, the localized stressing of the crust is associated with fluid or gas migration, possibly in combination with pre-existing zones of weakness. However, verifying and quantifying localized fluid movement at depth remains difficult, since the affected area is small and geophysical prospecting methods often cannot reach the required resolution.
We apply a simple and robust method to estimate the velocity ratio between compressional (P) and shear (S) waves (the vP/vS ratio) in the source region of an earthquake swarm. The vP/vS ratio may be unusually small if the swarm is related to gas in porous or fractured rock. The method uses the arrival-time differences between P and S waves observed at surface seismic stations, and the associated double differences between pairs of earthquakes. An advantage is that earthquake locations are not required, and the method appears less dependent on unknown velocity variations in the crust outside the source region. It is thus well suited for monitoring purposes.
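One common way to cast this double-difference idea (a Lin-and-Shearer-style formulation; the authors' exact estimator may differ) is that differential S times between event pairs scale with the corresponding differential P times by the factor vP/vS, so the ratio is the slope of a demeaned least-squares fit:

```python
import numpy as np

def vp_vs_from_differential_times(dtp, dts):
    """vP/vS as the least-squares slope of differential S arrival times
    versus differential P arrival times for event pairs observed at common
    stations; demeaning removes the unknown origin-time terms."""
    dtp = np.asarray(dtp, dtype=float)
    dts = np.asarray(dts, dtype=float)
    dtp = dtp - dtp.mean()
    dts = dts - dts.mean()
    return float(np.dot(dtp, dts) / np.dot(dtp, dtp))
```

Because only differential times within the swarm volume enter, the estimate is largely insensitive to the velocity structure outside the source region, which is the monitoring advantage noted above.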
The applications comprise three natural, mid-crustal (8-10 km) earthquake swarms that occurred between 1997 and 2008 in the NW-Bohemia swarm region. We resolve a strong temporal decrease of vP/vS before and during the main activity of the swarms, and a recovery of vP/vS to background levels at the end of the swarms. The anomalies are interpreted in terms of the Biot-Gassmann equations, assuming the presence of oversaturated fluids degassing during the beginning of the swarm activity.
Hydrosedimentological studies conducted in the semiarid Upper Jaguaribe Basin, Brazil, enabled the identification of the key processes controlling sediment connectivity at different spatial scales (10⁰-10⁴ km²).
Water and sediment fluxes were assessed from discharge, sediment concentrations and reservoir siltation measurements. Additionally, mathematical modelling (WASA-SED model) was used to quantify water and sediment transfer within the watershed.
Rainfall erosivity in the study area was moderate (4600 MJ mm ha⁻¹ h⁻¹ year⁻¹), whereas runoff depths (16-60 mm year⁻¹), and therefore the sediment transport capacity, were low. Consequently, ~60% of the eroded sediment was deposited along the landscape, regardless of the spatial scale. The existing high-density reservoir network (contributing area of 6 km² per reservoir) also limits sediment propagation, retaining up to 47% of the sediment at the large basin scale. The sediment delivery ratio (SDR) decreased with the spatial scale; on average, 41% of the eroded sediment was yielded from the hillslopes, while for the whole 24,600 km² basin, the SDR was reduced to 1% downstream of a large reservoir (1940 hm³ capacity).
Hydrological behaviour in the Upper Jaguaribe Basin represents a constraint on sediment propagation; the low runoff depth is the main feature breaking sediment connectivity, limiting sediment transfer from the hillslopes to the drainage system. Surface reservoirs are also important barriers, but their relative importance for sediment retention increases with scale, since larger contributing areas are more suitable for the construction of dams due to their higher hydrological potential.
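The scale dependence reported here can be expressed with two bookkeeping quantities, the sediment delivery ratio and the trap efficiency of a reservoir; a toy sketch whose numbers merely echo the abstract for illustration:

```python
def sediment_delivery_ratio(sediment_yield, gross_erosion):
    """SDR: fraction of gross erosion exported past the outlet of the
    contributing area (same units for both arguments)."""
    return sediment_yield / gross_erosion

def yield_below_reservoir(upstream_yield, trap_efficiency):
    """Sediment yield remaining downstream of a reservoir that traps a
    fraction trap_efficiency of the incoming load."""
    return upstream_yield * (1.0 - trap_efficiency)
```

Chaining hillslope deposition and successive reservoir trapping down the drainage network is what drives the SDR from 41% at the hillslope scale to 1% at the basin outlet.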
In a study from 2008, Larivière and colleagues showed, for the natural sciences and engineering, that the median age of cited references is increasing over time. This result was considered counterintuitive: with the advent of electronic search engines, online journal issues and open-access publications, one might have expected cited literature to become younger. That study motivated us to take a closer look at the changes in the age distribution of references cited in water resources journals since 1965. Not only could we confirm the findings of Larivière and colleagues, we were also able to show that the aging occurs mainly in the oldest 10-25% of an average reference list. This is consistent with our analysis of top-cited papers in the field of water resources. Rankings based on total citations since 1965 consistently show the dominance of old literature, including textbooks and research papers in equal shares. For most top-cited old-timers, citations are still growing exponentially. There is strong evidence that most citations are attracted by publications that introduced methods which meanwhile belong to the standard toolset of researchers and practitioners in the field of water resources. Although we think that this trend should not be overinterpreted as a sign of stagnancy, there might be cause for concern regarding how authors select their references. We question the increasing citation of textbook knowledge, as it carries the risk that reference lists become overcrowded and that the readability of papers deteriorates.
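The reference-age statistics behind this kind of analysis are simple to compute; a minimal sketch (function names are ours, and the sample years are invented):

```python
import numpy as np

def reference_ages(pub_year, cited_years):
    """Ages of a paper's cited references at the time of publication."""
    return np.array([pub_year - y for y in cited_years], dtype=float)

def age_percentiles(ages, qs=(50, 75, 90)):
    """Selected percentiles of the reference-age distribution; aging of the
    oldest 10-25% of a list shows up in the upper percentiles, not the median."""
    return {q: float(np.percentile(ages, q)) for q in qs}
```

Tracking these percentiles per publication year reproduces the kind of trend analysis described above.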
We suggest a new clustering approach to classify focal mechanisms from large moment tensor catalogues, with the purpose of automatically identifying families of earthquakes with similar source geometry, recognizing the orientation of the most active faults, and detecting temporal variations of the rupture processes. The approach differs from waveform-similarity methods in that clusters are detected even if the events are separated by large spatial distances. It is particularly helpful for analysing large moment tensor catalogues, as in microseismicity applications, where manual analysis and classification are not feasible. A flexible algorithm is proposed here: it can handle different metrics, norms and focal-mechanism representations. In particular, the method can handle full moment tensor or constrained source-model catalogues, for which different metrics are suggested. The method can account for variable uncertainties of different moment tensor components. We verify the method with synthetic catalogues. An application to real data from mining-induced seismicity illustrates possible uses of the method and demonstrates its cluster-detection and event-classification performance with different moment tensor catalogues. The results prove that the main earthquake source types occur on spatially separated faults, and that temporal changes in the number and characterization of focal-mechanism clusters are detected. We suggest that moment tensor clustering can help to assess time-dependent hazard in mines.
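As an illustration of the idea (not the authors' algorithm), normalized moment tensors can be clustered hierarchically under a simple Frobenius metric; the metric, the average-linkage choice and the distance threshold below are all assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def mt_to_unit_vector(mt):
    """Flatten a 3x3 moment tensor and normalize it, so the metric compares
    source geometry rather than event magnitude."""
    m = np.asarray(mt, dtype=float).ravel()
    return m / np.linalg.norm(m)

def cluster_mechanisms(tensors, max_dist=0.5):
    """Average-linkage hierarchical clustering of normalized moment tensors
    under the Euclidean (Frobenius) distance; returns integer cluster labels."""
    x = np.array([mt_to_unit_vector(m) for m in tensors])
    z = linkage(x, method="average")  # pairwise Euclidean distances by default
    return fcluster(z, t=max_dist, criterion="distance")
```

Because the distance is computed in tensor space, events from the same mechanism family cluster together regardless of how far apart they occurred, which is the key contrast with waveform-similarity clustering noted above.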
Knowledge of the origin of suspended sediment is important for improving our understanding of sediment dynamics and thereby supporting sustainable watershed management. A direct approach to tracing the origin of sediments is the fingerprinting technique. It is based on the assumption that potential sediment sources can be discriminated and that the contribution of these sources to the sediment can be determined on the basis of distinctive characteristics (fingerprints). Recent studies indicate that the visible-near-infrared (VNIR) and shortwave-infrared (SWIR) reflectance characteristics of soil may be a rapid, inexpensive alternative to traditional fingerprint properties (e.g. geochemistry or mineral magnetism).
To further explore the applicability of VNIR-SWIR spectral data for sediment tracing purposes, source samples were collected in the Isabena watershed, a 445 km² dryland catchment in the central Spanish Pyrenees. Grab samples of the upper soil layer were collected from the main potential sediment source types along with in situ reflectance spectra. Samples were dried and sieved, and artificial mixtures of known proportions were produced for algorithm validation. Then, spectral readings of potential source and artificial mixture samples were taken in the laboratory. Colour coefficients and physically based parameters were calculated from in situ and laboratory-measured spectra. All parameters passing a number of prerequisite tests were subsequently applied in discriminant function analysis for source discrimination and mixing model analyses for source contribution assessment.
The three source types (i.e. badlands, forest/grassland and an aggregation of other sources, including agricultural land, shrubland, unpaved roads and open slopes) could be reliably identified based on spectral parameters. Laboratory-measured spectral fingerprints permitted the quantification of source contributions to the artificial mixtures, although the introduction of source heterogeneity into the mixing model decreased accuracies for some source types. Aggregation of source types that could not be discriminated did not improve the mixing model results. Despite providing discrimination accuracies similar to those of the laboratory source parameters, the in situ derived source information was found to be insufficient for contribution modelling.
The laboratory mixture experiment provides valuable insights into the capabilities and limitations of spectral fingerprint properties. From this study, we conclude that combinations of spectral properties can be used for mixing model analyses of a restricted number of source groups, whereas more straightforward in situ measured source parameters do not seem suitable. However, modelling results based on laboratory parameters also need to be interpreted with care and should not rely on the estimates of mean values only but should consider uncertainty intervals as well.
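A common mixing-model formulation (one of several; the paper does not prescribe this one) solves a non-negative least-squares problem with the sum-to-one constraint enforced by a heavily weighted extra row:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(source_signatures, mixture, weight=1000.0):
    """Estimate source proportions for a mixture sample.
    source_signatures: (n_sources, n_properties) mean fingerprint per source.
    Returns non-negative proportions that approximately sum to one, enforced
    by appending a heavily weighted all-ones constraint row."""
    a = np.vstack([source_signatures.T,
                   weight * np.ones(source_signatures.shape[0])])
    b = np.append(mixture, weight)
    proportions, _ = nnls(a, b)
    return proportions
```

Running such a model on the laboratory mixtures of known proportions is exactly the kind of validation experiment the study describes; repeating it with resampled source signatures gives the uncertainty intervals recommended above.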
Many Mediterranean drylands are characterized by strong erosion in headwater catchments, where connectivity processes play an important role in the redistribution of water and sediments. Sediment connectivity describes the ease with which sediment can move through a catchment. The spatial and temporal characterization of connectivity patterns in a catchment enables the estimation of sediment contribution and transfer paths. Apart from topography, vegetation cover is one of the main factors driving sediment connectivity. This is particularly true for the patchy vegetation cover typical of many dryland environments. Several connectivity measures have been developed in the last few years. At the same time, advances in remote sensing have enabled an improved catchment-wide estimation of ground cover at the subpixel level using hyperspectral imagery.
The objective of this study was to assess the sediment connectivity for two adjacent subcatchments (~70 km²) of the Isabena River in the Spanish Pyrenees in contrasting seasons using a quantitative connectivity index based on fractional vegetation cover and topography data. The fractional cover of green vegetation, non-photosynthetic vegetation, bare soil and rock were derived by applying a multiple endmember spectral mixture analysis approach to the hyperspectral image data. Sediment connectivity was mapped using the index of connectivity, in which the effect of land cover on runoff and sediment fluxes is expressed by a spatially distributed weighting factor. In this study, the cover and management factor (C factor) of the Revised Universal Soil Loss Equation (RUSLE) was used as a weighting factor. Bi-temporal C factor maps were derived by linking the spatially explicit fractional ground cover and vegetation height obtained from the airborne data to the variables of the RUSLE subfactors.
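The Borselli-type index of connectivity used here has the compact form IC = log10(Dup/Ddn); a per-pixel sketch in which the C factor enters as the weighting W (all numeric values below are illustrative assumptions):

```python
import numpy as np

def index_of_connectivity(w_up, s_up, area, path_d, path_w, path_s):
    """Borselli-style index of connectivity, IC = log10(Dup / Ddn).
    Dup = mean weight * mean slope * sqrt(contributing area) (upslope part);
    Ddn = sum of d_i / (W_i * S_i) along the downslope flow path.
    Here the weighting W is the RUSLE C factor, as in the study."""
    d_up = w_up * s_up * np.sqrt(area)
    d_dn = np.sum(np.asarray(path_d) / (np.asarray(path_w) * np.asarray(path_s)))
    return np.log10(d_up / d_dn)
```

A higher C factor (sparser protective cover, as in August) raises both the upslope and downslope weights and hence the connectivity, matching the seasonal contrast reported below.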
The resulting connectivity maps show that areas behave very differently with regard to connectivity, depending on the land cover and on the spatial distribution of vegetation abundances and topographic barriers. Most parts of the catchment show higher connectivity values in August as compared to April. The two subcatchments show a slightly different connectivity behaviour that reflects the different land cover proportions and their spatial configuration.
The connectivity estimation can support a better understanding of processes controlling the redistribution of water and sediments from the hillslopes to the channel network at a scale appropriate for land management. It allows hot spot areas of erosion to be identified and the effects of erosion control measures, as well as different land management scenarios, to be studied.
Response spectra are of fundamental importance in earthquake engineering and represent a standard measure in seismic design for the assessment of structural performance. However, unlike Fourier spectral amplitudes, the relationship of response spectral amplitudes to seismological source, path, and site characteristics is not immediately obvious and might even be considered counterintuitive for high oscillator frequencies. The understanding of this relationship is nevertheless important for seismic-hazard analysis. The purpose of the present study is to comprehensively characterize the variation of response spectral amplitudes due to perturbations of the causative seismological parameters. This is done by calculating the absolute parameter sensitivities (sensitivity coefficients), defined as the partial derivatives of the model output with respect to its input parameters. To derive sensitivities, we apply algorithmic differentiation (AD). This powerful approach is extensively used for sensitivity analysis of complex models in meteorology or aerodynamics. To the best of our knowledge, AD has not yet been explored in the seismic-hazard context. Within the present study, AD was successfully implemented for a proven and extensively applied simulation program for response spectra (Stochastic Method SIMulation [SMSIM]) using the TAPENADE AD tool. We assess the effects and importance of input parameter perturbations on the shape of response spectra for different regional stochastic models in a quantitative way. Additionally, we perform a sensitivity analysis regarding adjustment issues of ground-motion prediction equations.
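The sensitivity coefficients described here, partial derivatives of a model output with respect to its inputs, can be demonstrated with forward-mode AD via dual numbers; the sketch below differentiates a Brune-type omega-squared spectral amplitude with respect to corner frequency (our toy example, not SMSIM or TAPENADE code):

```python
class Dual:
    """Minimal forward-mode AD value: carries f(x) and f'(x) together and
    propagates both through arithmetic via the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = float(val), float(dot)
    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __truediv__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / (o.val * o.val))
    def __rtruediv__(self, other):
        return Dual(other) / self

def brune_amplitude(f, fc, m0):
    """Omega-squared source spectrum shape, amp ~ m0 * f^2 / (1 + (f/fc)^2).
    Passing fc as Dual(fc, 1.0) returns the amplitude and its sensitivity
    coefficient d(amp)/d(fc) in one evaluation."""
    ratio2 = (f / fc) * (f / fc)
    return (m0 * f * f) / (Dual(1.0) + ratio2)
```

Seeding the derivative of one input with 1.0 and evaluating the model once is exactly the forward mode that AD tools automate for full-scale codes such as SMSIM.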