Records from ocean bottom seismometers (OBSs) are highly contaminated by noise, which is much stronger than at most land stations, especially on the horizontal components. As a consequence, the high energy of oceanic noise at frequencies below 1 Hz considerably complicates the analysis of teleseismic earthquake signals recorded by OBSs.
Previous studies suggested different approaches to remove low-frequency noise from OBS recordings but focused mainly on the vertical component. The horizontal-component records, which are crucial for many methods of passive seismological analysis of body and surface waves, could not be much improved in the teleseismic frequency band. Here we introduce a noise reduction method, derived from the harmonic–percussive separation algorithms used in Zali et al. (2021), that separates long-lasting narrowband signals from broadband transients in the OBS signal. This leads to significant noise reduction of OBS records on both the vertical and horizontal components and increases the earthquake signal-to-noise ratio (SNR) without distorting the broadband earthquake waveforms, as demonstrated through tests with synthetic data. Both SNR and cross-correlation coefficients show significant improvements for different realistic noise realizations. The application of denoised signals in surface wave analysis and receiver functions is discussed through tests with synthetic and real data.
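As a rough illustration of the underlying idea (not the authors' exact implementation): median-filtering harmonic–percussive separation treats spectrogram energy that is smooth along the time axis as long-lasting narrowband "noise" and energy that is smooth along the frequency axis as broadband transients. A minimal sketch with illustrative window and kernel sizes:

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import stft, istft

def transient_part(x, fs, nperseg=256, kernel=17):
    """Median-filtering harmonic-percussive separation (Fitzgerald-style):
    keep the broadband-transient component of x, suppress long-lasting
    narrowband components. nperseg/kernel are illustrative choices."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    S = np.abs(Z)
    H = median_filter(S, size=(1, kernel))  # smooth along time: narrowband part
    P = median_filter(S, size=(kernel, 1))  # smooth along frequency: transients
    mask = P / (H + P + 1e-12)              # soft mask favouring transients
    _, x_t = istft(Z * mask, fs=fs, nperseg=nperseg)
    return x_t

# Toy example: a broadband spike buried in a narrowband hum.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
hum = 0.5 * np.sin(2 * np.pi * 0.2 * t)   # long-lasting narrowband "ocean noise"
sig = np.zeros_like(t)
sig[3000:3010] = 1.0                       # broadband transient ("earthquake")
den = transient_part(hum + sig, fs)
```

Soft masking (rather than hard on/off decisions in the time–frequency plane) is what allows a scheme of this kind to suppress noise without distorting broadband waveforms.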
Studies of seismic tomography have been highly successful at imaging the deep structure of subduction zones. In a study complementary to these tomographic studies, we use array seismology and reflected waves to image a stagnant slab in the mantle transition zone. Using P and S (SH) waves, we find a steeply dipping reflector centred at ca. 400 km depth and ca. 550 km west of the present Mariana subduction zone (at 20° N, 140° E). The discovery of this anomaly in tomography and independently in array seismology (this paper) helps in understanding the evolution of the Mariana margin. The reflector/stagnant slab may be the remains of the hypothetical North New Guinea Plate, which is theorized to have subducted ca. 50 Ma ago.
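The array technique invoked here can be illustrated by its simplest building block, delay-and-sum stacking: traces are time-shifted according to a trial slowness and summed, so a reflected phase with that apparent slowness stacks coherently while noise averages down. A toy sketch with made-up numbers, not the study's actual processing:

```python
import numpy as np

def delay_and_sum(traces, dt, offsets, slowness):
    """Shift each trace by slowness*offset (in samples) and stack.
    Arrivals with this apparent slowness add coherently."""
    beam = np.zeros(traces.shape[1])
    for tr, x in zip(traces, offsets):
        beam += np.roll(tr, -int(round(slowness * x / dt)))  # roll wraps (toy only)
    return beam / len(offsets)

# Toy array: 10 stations, a spike moving out at 0.05 s/km, plus noise.
rng = np.random.default_rng(0)
dt, n_t = 0.1, 500
offsets = np.arange(10) * 10.0            # station offsets in km
p0 = 0.05                                 # apparent slowness in s/km
traces = rng.normal(0.0, 0.2, (10, n_t))
for i, x in enumerate(offsets):
    traces[i, int(200 + p0 * x / dt)] += 1.0

beam = delay_and_sum(traces, dt, offsets, p0)    # aligned: strong stack at sample 200
miss = delay_and_sum(traces, dt, offsets, 0.0)   # wrong slowness: arrival smeared
```

Scanning a range of trial slownesses and plotting stacked amplitude versus slowness and time yields a vespagram, from which weak reflected phases such as the one reported here can be identified.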
MALDI-TOF-MS-based identification of monoclonal murine Anti-SARS-CoV-2 antibodies within one hour
(2022)
During the SARS-CoV-2 pandemic, many virus-binding monoclonal antibodies have been developed for clinical and diagnostic purposes. This underlines the importance of antibodies as universal bioanalytical reagents. However, little attention is given to the reproducibility crisis that scientific studies are still facing to date: in a recent study, not even half of all research antibodies mentioned in publications could be identified. This should spark more efforts in the search for practical solutions for the traceability of antibodies. For this purpose, we used 35 monoclonal antibodies against SARS-CoV-2 to demonstrate how sequence-independent antibody identification can be achieved by simple means applied to the protein. First, we examined the intact and light-chain masses of the antibodies relative to the reference material NIST-mAb 8671. Half of the antibodies could already be identified based solely on these two parameters. In addition, we developed two complementary peptide mass fingerprinting methods with MALDI-TOF-MS that can be performed in 60 min and have a combined sequence coverage of over 80%. One method is based on partial acidic hydrolysis of the protein by 5 mM sulfuric acid at 99 °C; furthermore, we established a fast tryptic digest without an alkylation step. We were able to show that clones can be distinguished simply by a brief visual comparison of the mass spectra. In this work, two clones originating from the same immunization gave the same fingerprints; hybridoma sequencing later confirmed the sequence identity of these sister clones. In order to automate the spectral comparison for larger libraries of antibodies, we developed the open-source online software ABID 2.0, which determines the number of matching peptides in the fingerprint spectra.
We propose that publications and other documents critically relying on monoclonal antibodies with unknown amino acid sequences should include at least one antibody fingerprint. By fingerprinting an antibody in question, its identity can be confirmed by comparison with a library spectrum at any time and context.
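To make the fingerprinting idea concrete: a peptide mass fingerprint compares peak masses from a spectrum against peptide masses computed from an in-silico digest of a candidate sequence. The sketch below illustrates the matching principle only, not ABID 2.0's actual implementation; the sequence, tolerance, and (partial) residue mass table are illustrative.

```python
# Monoisotopic residue masses in Da (subset for illustration); one water
# molecule is added per peptide.
AA = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
      "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
      "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
      "R": 156.10111}
WATER = 18.01056

def tryptic_peptides(seq):
    """In-silico trypsin digest: cleave C-terminal of K/R
    (ignoring the no-cleavage-before-P rule for brevity)."""
    peps, start = [], 0
    for i, aa in enumerate(seq):
        if aa in "KR":
            peps.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peps.append(seq[start:])
    return peps

def peptide_mass(pep):
    return sum(AA[a] for a in pep) + WATER

def count_matches(theoretical, observed, tol=0.5):
    """Number of observed peaks within tol Da of any theoretical peptide."""
    return sum(any(abs(m - t) < tol for t in theoretical) for m in observed)

peps = tryptic_peptides("GASPKVTLNDRQEK")   # hypothetical toy sequence
theo = [peptide_mass(p) for p in peps]
```

Two spectra of the same clone share most matching peptides, which is what makes both the brief visual comparison and the automated spectral comparison discriminative.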
Earthquakes often rupture across more than one fault segment. If such rupture segmentation occurs on a significant scale, a simple point-source or one-fault model may not represent the rupture process well. As a consequence, earthquake characteristics inferred under one-source assumptions may become systematically wrong, which can affect follow-up analyses such as regional stress field inversions and seismic hazard assessments. While rupture segmentation is evident for most Mw > 7 earthquakes, smaller ones with 5.5 < Mw < 7 can also be segmented. We investigate the sensitivity of globally available data sets to rupture segmentation and their resolution to reliably estimate the mechanisms in the presence of segmentation. We focus on the sensitivity of InSAR (interferometric synthetic aperture radar) data in the static near field and seismic waveforms in the far field of the rupture, and carry out non-linear and Bayesian optimizations of single-source and two-source kinematic models (double-couple point sources and finite, rectangular sources) using InSAR and teleseismic waveforms separately. Our case studies comprise four Mw 6–7 earthquakes: the 2009 L'Aquila and 2016 Amatrice (Italy) and the 2005 and 2008 Zhongba (Tibet) earthquakes. We contrast the data misfits of different source complexities using the Akaike information criterion (AIC). We find that the AIC method is well suited for data-driven inferences on significant rupture segmentation for the given data sets. This is based on our observation that an AIC-stated significant improvement of data fit for two-segment models over one-segment models correlates with significantly different mechanisms of the two source segments, and of their average, compared to the single-segment mechanism. We attribute these modelled differences to a sufficient sensitivity of the data to resolve rupture segmentation.
Our results show that near-field data are generally more sensitive to rupture segmentation of shallow earthquakes than far-field data, but that teleseismic data can also resolve rupture segmentation in the studied magnitude range. We further conclude that a significant difference in the modelled source mechanisms for different segmentations shows that an appropriate choice of model segmentation matters for a robust estimation of source mechanisms: it reduces systematic biases and trade-offs and thereby improves knowledge of the rupture. Our study presents a strategy and method to detect significant rupture segmentation so that an appropriate model complexity can be used in the source mechanism inference. A similar, systematic investigation of earthquakes in the range Mw 5.5–7 could provide important hazard-relevant statistics on rupture segmentation; in such cases single-source models introduce a systematic bias, so considering rupture segmentation matters for a robust estimation of source mechanisms.
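The AIC comparison described above weighs improved fit against extra parameters. For least-squares fits with i.i.d. Gaussian residuals, AIC reduces (up to a constant) to n·ln(RSS/n) + 2k. A minimal toy illustration of the model-selection logic, not the study's actual inversion:

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for least-squares fits with i.i.d. Gaussian residuals,
    with constant terms dropped: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Toy data: a "two-segment" signal, fit by a 1-parameter vs a 2-parameter model.
rng = np.random.default_rng(1)
n = 200
true = np.where(np.arange(n) < n // 2, 1.0, 2.0)
data = true + rng.normal(0.0, 0.1, n)

rss1 = np.sum((data - data.mean()) ** 2)            # one-segment: single mean
m1, m2 = data[:n // 2].mean(), data[n // 2:].mean()  # two-segment: two means
rss2 = np.sum((data[:n // 2] - m1) ** 2) + np.sum((data[n // 2:] - m2) ** 2)

aic1 = aic_gaussian(rss1, n, k=1)
aic2 = aic_gaussian(rss2, n, k=2)   # lower AIC despite the extra parameter
```

A decisively lower AIC for the more complex model (a difference of roughly 10 or more is conventionally regarded as strong support) mirrors the criterion for declaring segmentation significant.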
This paper focuses on the tenuous dust clouds of Jupiter's Galilean moons Europa, Ganymede and Callisto. In a companion paper (Sremcevic et al., Planet. Space Sci. 51 (2003) 455–471), an analytical model of impact-generated ejecta dust clouds surrounding planetary satellites was developed. The main aim of the model is to predict the asymmetries in the dust clouds which may arise from the orbital motion of the parent body through a field of impactors. The Galileo dust detector data from flybys at Europa, Ganymede and Callisto are compatible with the model, assuming the projectiles to be interplanetary micrometeoroids. The analysis of the data suggests that two interplanetary impactor populations are the most likely source of the measured dust clouds: impactors with isotropically distributed velocities and micrometeoroids in retrograde orbits. Other impactor populations, namely those originating in the Jovian system, interplanetary projectiles with low orbital eccentricities and inclinations, and interstellar stream particles, can be ruled out by statistical analysis of the data. The data analysis also suggests that the mean ejecta velocity angle to the normal of the satellite surface is around 30°, in agreement with laboratory studies of hypervelocity impacts.
In July 2004 the Cassini–Huygens mission reached the Saturnian system and started its orbital tour. A total of 75 orbits will be carried out during the primary mission until August 2008. In these four years Cassini crosses the ring plane 150 times and spends approx. 400 h within Titan's orbit. The Cosmic Dust Analyser (CDA) onboard Cassini characterises the dust environment with its extended E ring and embedded moons. Here, we focus on the CDA results of the first year and present the Dust Analyser (DA) data within Titan's orbit. This paper investigates High Rate Detector data and dust composition measurements. We focus on the analysis of impact rates, which were strongly variable, primarily due to changes of the spacecraft pointing. An overview is given of the ring plane crossings and the DA counter measurements. The DA dust impact rates are compared with the DA boresight configuration around all ring plane crossings between June 2004 and July 2005. Dust impacts were registered at altitudes as high as 100 000 km above the ring plane, at distances from Saturn between 4 and 10 Saturn radii. In those regions the number density of dust particles larger than 0.5 µm can reach values of 0.001 m⁻³.
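A number density such as the quoted 0.001 m⁻³ follows from a counter rate once the instrument's sensitive area and the dust–spacecraft relative speed are assumed, via R = n·v·A. A sketch with purely hypothetical values (not actual CDA calibration figures):

```python
def number_density(rate_hz, speed_m_s, area_m2):
    """Dust number density n = R / (v * A), assuming grains above the
    detection threshold stream through sensitive area A at relative speed v."""
    return rate_hz / (speed_m_s * area_m2)

# Hypothetical: 1 impact/s at 10 km/s through a 0.1 m^2 aperture.
n = number_density(rate_hz=1.0, speed_m_s=1.0e4, area_m2=0.1)
```

Because the effective area depends on the angle between the boresight and the dust flux, variable spacecraft pointing translates directly into the strongly variable impact rates discussed above.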
The Cassini–Huygens Cosmic Dust Analyzer (CDA) is intended to provide direct observations of dust grains with masses between 10⁻¹⁹ and 10⁻⁹ kg in interplanetary space and in the jovian and saturnian systems, to investigate their physical, chemical and dynamical properties as functions of the distances to the Sun, to Jupiter, and to Saturn and its satellites and rings, and to study their interaction with the saturnian rings, satellites and magnetosphere. The chemical composition of interplanetary meteoroids will be compared with asteroidal and cometary dust, as well as with Saturn dust and ejecta from rings and satellites. Ring and satellite phenomena which might be effects of meteoroid impacts will be compared with the interplanetary dust environment. Electrical charges of particulate matter in the magnetosphere and their consequences will be studied, e.g. the effects of the ambient plasma and the magnetic field on the trajectories of dust particles, as well as the fragmentation of particles due to electrostatic disruption. The investigation will be performed with an instrument that measures the mass, composition, electric charge, speed and flight direction of individual dust particles. It is a highly reliable and versatile instrument with a mass sensitivity 10⁶ times higher than that of the Pioneer 10 and 11 dust detectors, which measured dust in the saturnian system. The Cosmic Dust Analyzer has significant inheritance from former space instrumentation developed for the VEGA, Giotto, Galileo and Ulysses missions. It will reliably measure impact rates from as low as 1 impact per month up to 10⁴ impacts per second. The instrument weighs 17 kg and consumes 12 W; the integrated time-of-flight mass spectrometer has a mass resolution of up to 50. The nominal data transmission rate is 524 bits/s and varies between 50 and 4192 bps.
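The quoted mass resolution of up to 50 can be related to timing precision through the basic time-of-flight relation: ions accelerated through a fixed potential satisfy m ∝ t², so Δm/m = 2Δt/t and hence m/Δm = t/(2Δt). A sketch with hypothetical timing numbers (the calibration constant and times below are illustrative, not CDA specifications):

```python
def tof_mass(t_s, k):
    """Linear TOF relation m = k * t^2; k lumps together the accelerating
    voltage, ion charge and drift length (hypothetical calibration constant)."""
    return k * t_s ** 2

def mass_resolution(t_s, dt_s):
    """Since m is proportional to t^2, delta_m/m = 2*delta_t/t,
    so m/delta_m = t / (2*delta_t)."""
    return t_s / (2.0 * dt_s)

# Hypothetical: a 10 us flight time resolved to 0.1 us gives m/dm = 50.
res = mass_resolution(10e-6, 0.1e-6)
```

Doubling the flight path (and thus t) at fixed timing jitter would double m/Δm, which is why compact in-situ spectrometers like this one trade resolution for mass and robustness.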