Publications 2004, Institut für Geowissenschaften: 66 entries (56 articles, 8 doctoral theses, 2 monographs/edited volumes), all part of the bibliography.
Keywords include: Arava Fault, Dead Sea Transform, climate, strike-slip fault, ASTER satellite images, aeromagnetics.
Quantitative analyses of magmatic rocks using infrared reflectance spectroscopy (original title: Quantitative Analysen magmatischer Gesteine mittels reflexionsspektroskopischer Infrarot-Messungen)
(2004)
The results of this study clearly identify four key parameters controlling probabilistic seismic hazard assessment (PSHA) estimates in France in the framework of the Cornell-McGuire method. Results in terms of peak ground acceleration demonstrate the equally high impact, at all return periods, of the choice of truncation of the predicted ground-motion distribution (at +2σ) and of the choice between two different magnitude-intensity correlations. The choice of minimum magnitude (3.5/4.5) can have an important impact on hazard estimates at small return periods (<1000 years), whereas the maximum magnitude (6.5/7.0) is not a key parameter even at large return periods (10,000 years). This hierarchy of impacts is maintained at lower frequencies down to 5 Hz. Below 5 Hz, the choice of the maximum magnitude has a much greater impact, whereas the impact due to the choice of the minimum magnitude disappears. Moreover, the variability due to catalog uncertainties is also quantified; these uncertainties, which underlie all hazard results, can engender as high a variability as the controlling parameters. Parameter impacts, calculated at the centers of each source zone, show a linear trend with the seismicity models of the zone, demonstrating the lack of contributions coming from neighboring zones. Indeed, the region of influence that contributes to the PSHA estimate at a given site decreases with increasing return period. The resulting overall variability in hazard estimates due to input uncertainties is quantified through a logic tree; the obtained coefficients of variation vary between 10% and 20%. Until better physical models are obtained, the uncertainty on hazard estimates may be reduced by working on an appropriate magnitude-intensity correlation.
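As a small illustrative sketch (not the study's actual computation), a coefficient of variation over logic-tree branch results can be obtained as the weighted standard deviation divided by the weighted mean of the branch hazard estimates; the branch values and weights below are invented.

```python
import math

def logic_tree_cov(values, weights):
    """Coefficient of variation of hazard estimates over logic-tree branches.

    values: hazard estimates (e.g. PGA in g), one per branch
    weights: corresponding branch weights (must sum to 1)
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    mean = sum(w * v for w, v in zip(weights, values))
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return math.sqrt(var) / mean

# Hypothetical PGA estimates (in g) from four equally weighted branches:
cov = logic_tree_cov([0.10, 0.12, 0.13, 0.15], [0.25] * 4)
# cov falls in the 10-20% range the abstract reports for the French study
```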
Paleomagnetic dating of climatic events in Late Quaternary sediments of Lake Baikal (Siberia)
(2004)
Lake Baikal provides an excellent climatic archive for Central Eurasia as global climatic variations are continuously depicted in its sediments. We performed continuous rock magnetic and paleomagnetic analyses on hemipelagic sequences retrieved from 4 underwater highs reaching back 300 ka. The rock magnetic study, combined with TEM, XRD, XRF and geochemical analyses, evidenced that magnetite of detrital origin dominates the magnetic signal in glacial sediments, whereas interglacial sediments are affected by early diagenesis. HIRM roughly quantifies the hematite and goethite contributions and remains the best proxy for estimating the detrital input in Lake Baikal. Relative paleointensity records of the Earth's magnetic field show a reproducible pattern, which allows for correlation with well-dated reference curves and thus provides an alternative age model for Lake Baikal sediments. Using the paleomagnetic age model we observed that cooling in the Lake Baikal region and cooling of the sea surface water in the North Atlantic, as recorded in planktonic foraminifera δ18O, are coeval. On the other hand, benthic δ18O curves record mainly the global ice volume change, which occurs later than the sea surface temperature change. This proves that a dating bias results from an age model based on the correlation of Lake Baikal sedimentary records with benthic δ18O curves. The compilation of paleomagnetic curves provides a new relative paleointensity curve, “Baikal 200”. With a laser-assisted grain size analysis of the detrital input, three facies types, reflecting different sedimentary dynamics, can be distinguished. (1) Glacial periods are characterised by a high clay content mostly due to wind activity and by the occurrence of a coarse fraction (sand) transported over the ice by local winds. This fraction gives evidence for aridity in the hinterland. (2) At glacial/interglacial transitions, the quantity of silt increases as the moisture increases, reflecting increased sedimentary dynamics. Wind transport and snow trapping are the dominant processes bringing silt to a hemipelagic site. (3) During the climatic optimum of the Eemian, silt size and quantity are minimal due to blanketing of the detrital sources by the vegetation cover.
VLT on-axis optical spectroscopy of the z = 0.144 radio-loud quasar HE 1434-1600 is presented. The spatially resolved spectra of the host galaxy are deconvolved and separated from those of the central quasar in order to study the dynamics of the stars and gas as well as the physical conditions of the ISM. We find that the host of HE 1434-1600 is an elliptical galaxy that resides in a group of at least 5 member galaxies, and that it most likely experienced a recent collision with its nearest companion. Compared with other quasar host galaxies, HE 1434-1600 has a highly ionized ISM. The ionization state corresponds to that of typical Seyferts, but the ionized regions are not distributed homogeneously around the QSO and are located preferentially several kiloparsecs away from it. While the stellar absorption lines do not show any significant velocity field, the gas emission lines do. The observed gas velocity field is hard to reconcile with dynamical models involving rotating disks, modified Hubble laws or power laws, which all require extreme central masses (M > 10^9 M⊙) yet provide only a poor fit to the data. Power-law models, which fit the data best, give a total mass of M(<10 kpc) = 9.2 × 10^10 M⊙. We conclude that the recent interaction between HE 1434-1600 and its closest companion has strongly affected the gas velocity and ionization state, from the center of the galaxy to its most external parts.
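As a back-of-envelope check (not the paper's actual dynamical modelling), an enclosed mass of the order quoted above follows from the simple circular-velocity relation M(<r) = v²r/G; the 200 km/s speed used below is an assumed round number, not a value from the paper.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m

def enclosed_mass_msun(v_kms, r_kpc):
    """Dynamical mass enclosed within radius r for circular speed v:
    M(<r) = v^2 r / G, converted to solar masses."""
    v = v_kms * 1e3                      # km/s -> m/s
    return v ** 2 * r_kpc * KPC / G / M_SUN

# An assumed circular speed of ~200 km/s at 10 kpc gives ~9e10 solar
# masses, the order of the total mass quoted in the abstract.
m = enclosed_mass_msun(200.0, 10.0)
```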
The use of ground-motion prediction equations to estimate ground shaking has become a very popular approach for seismic-hazard assessment, especially in the framework of a logic-tree approach. Owing to the large number of published ground-motion models, however, the selection and ranking of appropriate models for a particular target area often pose serious practical problems. Here we show how observed ground-motion records can help to guide this process in a systematic and comprehensible way. A key element in this context is a new, likelihood-based goodness-of-fit measure that has the property not only to quantify the model fit but also to measure to some degree how well the underlying statistical model assumptions are met. By design, this measure naturally scales between 0 and 1, with a value of 0.5 for a situation in which the model perfectly matches the sample distribution in terms of both mean and standard deviation. We have used it in combination with other goodness-of-fit measures to derive a simple classification scheme to quantify how well a candidate ground-motion prediction equation models a particular set of observed response spectra. This scheme is demonstrated to perform well in recognizing a number of popular ground-motion models from their rock-site recording subsets. This indicates its potential for aiding the assignment of logic-tree weights in a consistent and reproducible way. We have applied our scheme to the border region of France, Germany, and Switzerland, where the Mw 4.8 St. Dié earthquake of 22 February 2003 in eastern France recently provided a small set of observed response spectra. These records are best modeled by the ground-motion prediction equation of Berge-Thierry et al. (2003), which is based on the analysis of predominantly European data. The fact that the Swiss model of Bay et al. (2003) is not able to model the observed records in an acceptable way may indicate general problems arising from the use of weak-motion data for strong-motion prediction.
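One common formulation consistent with the properties described in the abstract is to assign each normalized residual z the probability that a standard normal variable exceeds |z|; the exact definition used in the paper may differ, so the sketch below is an assumption-labelled illustration only.

```python
import math
import statistics

def lh(residual):
    """Likelihood (LH) of a normalized residual z = (ln obs - ln pred) / sigma:
    the probability that |Z| >= |z| for a standard normal Z.
    lh(0) = 1; large |z| gives values near 0."""
    return math.erfc(abs(residual) / math.sqrt(2))

def median_lh(residuals):
    """Median LH over a record set; close to 0.5 when the model matches
    the sample in both mean and standard deviation (residuals ~ N(0, 1))."""
    return statistics.median(lh(z) for z in residuals)

# z = 0 gives LH = 1; |z| at the standard-normal quartile (~0.6745)
# gives LH = 0.5, so perfectly standard-normal residuals yield a
# median LH of 0.5, matching the scaling described in the abstract.
q = 0.6744897501960817  # 75th percentile of the standard normal
m = median_lh([-q, 0.0, q])
```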
In recent years, H/V measurements have been increasingly used to map the thickness of sediment fill in sedimentary basins in the context of seismic hazard assessment. This parameter is believed to be an important proxy for site effects in sedimentary basins (e.g. in the Los Angeles basin). Here we present the results of a test of this approach across an active normal fault in a structurally well-known situation. Measurements on a 50 km long profile with 1 km station spacing clearly show a change in the frequency of the fundamental peak of H/V ratios with increasing thickness of the sediment layer in the eastern part of the Lower Rhine Embayment. Subsequently, a 10 km long section across the Erft-Sprung system, a normal fault with ca. 750 m vertical offset, was measured with a station distance of 100 m. Frequencies of the first and second peaks and the first trough in the H/V spectra are used in a simple resonance model to estimate the depth of the bedrock. While the frequency of the first peak shows a large scatter for sediment depths greater than ca. 500 m, the frequency of the first trough follows the changing thickness of the sediments across the fault. The lateral resolution is in the range of the station distance of 100 m. A power law for the depth dependence of the S-wave velocity derived from downhole measurements in an earlier study [Budny, 1984] and power laws inverted from dispersion analysis of micro-array measurements [Scherbaum et al., 2002] agree with the results from the H/V ratios of this study.
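The simplest resonance model behind such depth estimates is the quarter-wavelength relation f0 = Vs/(4h) for a uniform sediment layer over bedrock; the study itself used depth-dependent power-law Vs profiles, so the constant-velocity sketch below, with invented numbers, is only a minimal illustration.

```python
def bedrock_depth(f0_hz, vs_mps):
    """Sediment thickness from the fundamental H/V resonance frequency,
    assuming a single uniform layer over rigid bedrock: f0 = Vs / (4 h)."""
    return vs_mps / (4.0 * f0_hz)

# Hypothetical numbers: average shear-wave velocity Vs = 600 m/s and a
# fundamental peak at f0 = 0.2 Hz imply h = 750 m of sediment, the order
# of the vertical offset across the Erft fault system.
depth = bedrock_depth(0.2, 600.0)
```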
A methodological approach to seismic hazard evaluation is proposed that allows studying the influence of different modelling assumptions, relative to the spatial and temporal distribution of earthquakes, on the maximum values of expected intensities. In particular, we show that the estimated hazard at a fixed point is very sensitive to the assumed spatial distribution of epicentres and their estimators. As we will see, the usual approach, based on uniformly distributing the epicentres inside each seismogenic zone, is likely to be biased towards lower expected intensity values. This will be made more precise later. Recall that the term "bias" means that the expectation of the estimated quantity (taken as a random variable on the space of statistics) is different from the expectation of the quantity itself. Instead, our approach, based on an estimator that takes into account the observed clustering of events, is essentially unbiased, as shown by a Monte-Carlo simulation, and is configured on a non-isotropic macroseismic attenuation model which is independently estimated for each zone.
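The definition of "bias" quoted above can be made concrete with a toy Monte-Carlo experiment unrelated to the paper's hazard model: the uncorrected sample variance (dividing by n rather than n-1) has an expectation systematically below the quantity it estimates.

```python
import random
import statistics

random.seed(42)
true_var = 4.0          # variance of N(0, 2)
n, trials = 5, 20000

def uncorrected_var(xs):
    """Sample variance dividing by n: a classic biased estimator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Average the estimator over many samples to approximate its expectation.
mean_est = statistics.fmean(
    uncorrected_var([random.gauss(0.0, 2.0) for _ in range(n)])
    for _ in range(trials)
)
# E[estimator] = (n-1)/n * true_var = 3.2 < 4.0: the estimator is biased low,
# just as the uniform-epicentre assumption biases expected intensities low.
```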
One of the major challenges in engineering seismology is the reliable prediction of site-specific ground motion for particular earthquakes, observed at specific distances. For larger events, a special problem arises at short distances with the source-to-site distance measure, because distance metrics based on a point-source model are no longer appropriate. As a consequence, different attenuation relations differ in the distance metric that they use. In addition to being a source of confusion, this makes it difficult to quantitatively compare or combine different ground-motion models; for example, in the context of probabilistic seismic hazard assessment, in cases where ground-motion models with different distance metrics occupy neighboring branches of a logic tree. In such a situation, very crude assumptions about source sizes and orientations often have to be used to be able to derive an estimate of the particular metric required. Even if this solves the problem of providing a number to put into the attenuation relation, a serious problem remains. When converting distance measures, the corresponding uncertainties map onto the estimated ground motions according to the laws of error propagation. To make matters worse, conversion of distance metrics can cause the uncertainties of the adapted ground-motion model to become magnitude and distance dependent, even if they are not in the original relation. To be able to treat this problem quantitatively, the variability increase caused by the distance metric conversion has to be quantified. For this purpose, we have used well-established scaling laws to determine explicit distance-conversion relations using regression analysis on simulated data. We demonstrate that, for all practical purposes, most popular distance metrics can be related to the Joyner-Boore distance using models based on gamma distributions to express the shape of some "residual function."
The functional forms are magnitude and distance dependent and are expressed as polynomials. We compare the performance of these relations with manually derived individual distance estimates for the Landers, Imperial Valley, and Chi-Chi earthquakes.
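The error-propagation step mentioned above can be sketched in first-order form: the scatter of the distance conversion maps onto the ground-motion sigma through the model's local distance sensitivity. The formula is standard first-order propagation; all numbers below are invented for illustration, not values from the paper.

```python
import math

def propagated_sigma(sigma_lny, dlny_dr, sigma_r):
    """Total aleatory sigma (natural-log units) after a distance-metric
    conversion: the conversion scatter sigma_r (km) maps onto ground motion
    through the local distance sensitivity |dlnY/dR| (per km) of the model,
    added in quadrature to the model's own sigma."""
    return math.sqrt(sigma_lny ** 2 + (dlny_dr * sigma_r) ** 2)

# Hypothetical values: model sigma 0.6, |dlnY/dR| = 0.05 per km near the
# source, conversion scatter of 5 km; since dlnY/dR varies with magnitude
# and distance, the inflated sigma does too, as the abstract notes.
total = propagated_sigma(0.6, 0.05, 5.0)
```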