The knowledge of the largest expected earthquake magnitude in a region is one of the key issues in probabilistic seismic hazard calculations and the estimation of worst-case scenarios. Earthquake catalogues are the most informative source for the inference of earthquake magnitudes. We analysed the earthquake catalogue for Central Asia with respect to the largest expected magnitudes m_T in a predefined time horizon T_f using a recently developed statistical methodology, extended by the explicit probabilistic consideration of magnitude errors. For this aim, we assumed broad error distributions for historical events, whereas the magnitudes of recently recorded instrumental earthquakes had smaller errors. The results indicate high probabilities for the occurrence of large events (M ≥ 8), even in short time intervals of a few decades. The expected magnitudes relative to the assumed maximum possible magnitude are generally higher for intermediate-depth earthquakes (51-300 km) than for shallow events (0-50 km). For long future time horizons, for example a few hundred years, earthquakes with M ≥ 8.5 have to be taken into account, although, apart from the 1889 Chilik earthquake, it is probable that no such event occurred during the observation period of the catalogue.
We show how the maximum magnitude within a predefined future time horizon may be estimated from an earthquake catalog within the context of Gutenberg-Richter statistics. The aim is to carry out a rigorous uncertainty assessment and calculate precise confidence intervals based on an imposed level of confidence α. In detail, we present a model for the estimation of the maximum magnitude to occur in a time interval T_f in the future, given a complete earthquake catalog for a time period T in the past and, if available, paleoseismic events. For this goal, we solely assume that earthquakes follow a stationary Poisson process in time with unknown productivity Λ and obey the Gutenberg-Richter law in the magnitude domain with unknown b-value. The random variables Λ and b are estimated by means of Bayes' theorem with noninformative prior distributions. Results based on synthetic catalogs and on retrospective calculations for historic catalogs from the highly active area of Japan and the low-seismicity but high-risk Lower Rhine Embayment (LRE) in Germany indicate that the estimated magnitudes are close to the true values. Finally, we discuss whether the techniques can be extended to meet the safety requirements for critical facilities such as nuclear power plants. For this aim, the maximum magnitude for all times has to be considered. In agreement with earlier work, we find that this parameter is not a useful quantity from the viewpoint of statistical inference.
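To make the estimation scheme concrete, here is a minimal Monte Carlo sketch (my own simplification, not the authors' code): it draws Λ and the Gutenberg-Richter decay parameter from Gamma posteriors obtained with noninformative priors and then samples the maximum magnitude in T_f from the posterior predictive distribution. The catalog `mags`, the completeness magnitude `m_c` and the time spans are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_max_magnitude(mags, T_past, T_f, m_c=4.0, n_draws=10_000):
    """Posterior-predictive draws of the maximum magnitude in T_f years.

    mags   : magnitudes >= completeness level m_c, observed during T_past years
    T_past : length of the complete catalog in years
    T_f    : future time horizon in years
    """
    n = len(mags)
    # Gamma posteriors from noninformative (Jeffreys-type) priors:
    lam = rng.gamma(n + 0.5, 1.0 / T_past, size=n_draws)          # productivity
    beta = rng.gamma(n, 1.0 / np.sum(mags - m_c), size=n_draws)   # b * ln(10)
    # Number of future events, then invert the CDF of the maximum of N
    # exponential (Gutenberg-Richter) magnitudes:
    N = rng.poisson(lam * T_f)
    u = rng.uniform(size=n_draws)
    m_max = m_c - np.log(1.0 - u ** (1.0 / np.maximum(N, 1))) / beta
    return np.where(N == 0, m_c, m_max)   # no event above m_c: report m_c

# hypothetical catalog; a real input would be a complete, declustered catalog
draws = posterior_max_magnitude(rng.uniform(4.0, 6.0, 200), T_past=100, T_f=50)
print(np.percentile(draws, [5, 50, 95]))   # e.g. a 90% interval for M_T
```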
The injection of fluids is a well-known cause of triggered earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking, and others has raised the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude M_T of the maximum expected earthquake in a predefined future time window T_f. A case study of the fluid injection site at Paradox Valley, Colorado, demonstrates that the magnitude m = 4.3 of the largest observed earthquake on 27 May 2000 lies well within the expectation from past seismicity, without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within predefined confidence bounds.
The Groningen gas field serves as a natural laboratory for production-induced earthquakes, because no earthquakes were observed before the beginning of gas production. Increasing gas production rates resulted in growing earthquake activity and eventually in the occurrence of the 2012 M_w 3.6 Huizinge earthquake. At least since this event, a detailed seismic hazard and risk assessment including estimation of the maximum earthquake magnitude is considered to be necessary to decide on the future gas production. In this short note, we first apply state-of-the-art methods of mathematical statistics to derive confidence intervals for the maximum possible earthquake magnitude m_max. Second, we calculate the maximum expected magnitude M_T in the time between 2016 and 2024 for three assumed gas-production scenarios. Using broadly accepted physical assumptions and a 90% confidence level, we suggest a value of m_max = 4.4, whereas M_T varies between 3.9 and 4.3, depending on the production scenario.
In the present study, we summarize and evaluate the endeavors of recent years to estimate the maximum possible earthquake magnitude m_max from observed data. In particular, we use basic and physically motivated assumptions to identify best cases and worst cases in terms of the lowest and highest degree of uncertainty of m_max. In a general framework, we demonstrate that earthquake data and earthquake proxy data recorded in a fault zone provide almost no information about m_max unless reliable and homogeneous data from a long time interval, including several earthquakes with magnitude close to m_max, are available. Even if detailed earthquake information from some centuries, including historic and paleoearthquakes, is given, only very few events, namely the largest ones, will contribute at all to the estimation of m_max, and this results in unacceptably high uncertainties. As a consequence, estimators of m_max in a fault zone that are based solely on earthquake-related information from this region have to be dismissed.
Based on an analysis of continuous monitoring of farm animal behavior in the region of the 2016 M6.6 Norcia earthquake in Italy, Wikelski et al. (2020; Seismol Res Lett, 89, 1238) conclude that anomalous animal activity anticipates subsequent seismic activity and that this finding might help to design a "short-term earthquake forecasting method." We show that this result is based on an incomplete analysis and misleading interpretations. Applying state-of-the-art methods of statistics, we demonstrate that the proposed anticipatory patterns cannot be distinguished from random patterns, and consequently the observed anomalies in animal activity do not have any forecasting power.
We investigate spatio-temporal properties of earthquake patterns in the San Jacinto fault zone (SJFZ), California, between Cajon Pass and the Superstition Hill Fault, using a long record of simulated seismicity constrained by available seismological and geological data. The model provides an effective realization of a large segmented strike-slip fault zone in a 3D elastic half-space, with a heterogeneous distribution of static friction chosen to represent several clear step-overs at the surface. The simulated synthetic catalog reproduces well the basic statistical features of the instrumental seismicity recorded in the SJFZ area since 1981. The model also produces events larger than those included in the short instrumental record, consistent with paleo-earthquakes documented at sites along the SJFZ for the last 1,400 years. The general agreement between the synthetic and observed data allows us to address, with the long simulated seismicity, questions related to large earthquakes and expected seismic hazard. The interaction between m ≥ 7 events on different sections of the SJFZ is found to be close to random. The hazard associated with m ≥ 7 events on the SJFZ increases significantly if the long record of simulated seismicity is taken into account. The model simulations indicate that the recent increased number of observed intermediate SJFZ earthquakes is a robust statistical feature heralding the occurrence of m ≥ 7 earthquakes. The hypocenters of the m ≥ 5 events in the simulation results move progressively towards the hypocenter of the upcoming m ≥ 7 earthquake.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data makes it possible to quantify this bias in detail. The implicit assumption that the true M_max is close to the maximum observed magnitude M_obs restricts the class of potential models a priori to those with M_max = M_obs + ΔM with an increment ΔM ≈ 0.5 to 1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "M_max equals M_obs plus an increment." The incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
Paleoearthquakes and historic earthquakes are the most important source of information for the estimation of long-term earthquake recurrence intervals in fault zones, because the corresponding sequences cover more than one seismic cycle. However, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. In the present study, I assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a clock change model. Mathematically, this leads to a Brownian passage time distribution for recurrence intervals. I take advantage of an earlier finding that under certain assumptions the aperiodicity of this distribution can be related to the Gutenberg-Richter b value, which can be estimated easily from instrumental seismicity in the region under consideration. In this way, both parameters of the Brownian passage time distribution can be attributed to accessible seismological quantities. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval, especially for short paleoearthquake sequences and high dating errors. Using a Bayesian framework for parameter estimation results in a statistical model for earthquake recurrence intervals that assimilates paleoearthquake sequences and instrumental data in a simple way. I present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times based on a stationary Poisson process.
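A minimal sketch of the central computation, under my own simplifying assumptions (flat prior on the mean recurrence interval μ, aperiodicity α fixed in advance from the b-value, hypothetical paleo-intervals):

```python
import numpy as np

def bpt_logpdf(t, mu, alpha):
    """Log density of the Brownian passage time distribution."""
    return (0.5 * np.log(mu / (2.0 * np.pi * alpha**2 * t**3))
            - (t - mu) ** 2 / (2.0 * mu * alpha**2 * t))

def posterior_mu(intervals, alpha, mu_grid):
    """Posterior of the mean recurrence interval mu (flat prior, alpha fixed)."""
    loglik = np.array([bpt_logpdf(intervals, mu, alpha).sum() for mu in mu_grid])
    post = np.exp(loglik - loglik.max())
    return post / (post.sum() * (mu_grid[1] - mu_grid[0]))   # normalise on the grid

intervals = np.array([180.0, 250.0, 140.0, 310.0])  # hypothetical paleo record (years)
mu_grid = np.linspace(50.0, 600.0, 1101)
post = posterior_mu(intervals, alpha=0.7, mu_grid=mu_grid)  # alpha tied to the b-value
print(mu_grid[np.argmax(post)])   # posterior mode of the mean recurrence interval
```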
Convergence of the frequency-magnitude distribution of global earthquakes - maybe in 200 years
(2013)
I study the ability to estimate the tail of the frequency-magnitude distribution of global earthquakes. While power-law scaling for small earthquakes is accepted by support of data, the tail remains speculative. In a recent study, Bell et al. (2013) claim that the frequency-magnitude distribution of global earthquakes converges to a tapered Pareto distribution. I show that this finding results from data-fitting errors, namely from the biased maximum likelihood estimation of the corner magnitude θ in strongly undersampled models. In particular, the estimation of θ depends solely on the few largest events in the catalog. Taking this into account, I compare various state-of-the-art models for the global frequency-magnitude distribution. After discarding undersampled models, the remaining ones, including the unbounded Gutenberg-Richter distribution, all perform equally well and are therefore indistinguishable. Convergence to a specific distribution, if it ever takes place, requires at least about 200 years of homogeneous recording of global seismicity.
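The bias argument can be reproduced in a few lines. The sketch below (my illustration, not the paper's code) fits the corner parameter θ of a tapered Pareto distribution by maximum likelihood to synthetic catalogs drawn from a pure, unbounded Pareto law; the fitted θ is essentially dictated by the few largest sampled events:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def neg_loglik(log_theta, m, m_t, beta):
    """Negative tapered-Pareto log likelihood, profiled over the corner theta.
    density: f(m) = (beta/m + 1/theta) * (m_t/m)**beta * exp((m_t - m)/theta)
    """
    theta = np.exp(log_theta)
    return -np.sum(np.log(beta / m + 1.0 / theta)
                   + beta * np.log(m_t / m) + (m_t - m) / theta)

m_t, beta = 1.0, 0.66            # threshold moment and Pareto index
for n in (500, 5_000, 50_000):
    m = m_t * rng.uniform(size=n) ** (-1.0 / beta)   # pure (untapered) Pareto sample
    fit = minimize_scalar(neg_loglik, bounds=(0.0, 25.0), args=(m, m_t, beta),
                          method="bounded")
    # the fitted corner grows with the catalog's largest event
    print(n, f"theta_hat={np.exp(fit.x):.3g}", f"m_max={m.max():.3g}")
```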
The paper studies catalytic super-Brownian motion on the real line, where the branching rate is controlled by a catalyst. D. A. Dawson, K. Fleischmann and S. Roelly showed, for a broad class of catalysts, that, as for constant branching, the processes are absolutely continuous measures. This paper considers a class of catalysts, called moderate, which must satisfy a uniform boundedness condition and a condition controlling the degree of singularity---essentially that the mass of catalyst in small balls should (uniformly) be of order r^a, where a>0. The main result of this paper shows that for this class of catalysts there is a continuous density field for the process. Moreover the density is the unique solution (in law) of an appropriate SPDE.
The author considers the heat equation in dimension one with singular drift and inhomogeneous space-time white noise. In particular, the quadratic variation measure of the white noise is not required to be absolutely continuous w.r.t. the Lebesgue measure, neither in space nor in time. Under some assumptions the author gives statements on strong and weak existence as well as strong and weak uniqueness of continuous solutions.
The morphological features in the deviations of the total electron content (TEC) of the ionosphere from the background undisturbed state as possible precursors of the earthquake of January 12, 2010 (21:53 UT (16:53 LT), 18.46° N, 72.5° W, 7.0 M) in Haiti are analyzed. To identify these features, global and regional differential TEC maps based on global 2-h TEC maps provided by NASA in the IONEX format were plotted. For the considered earthquake, long-lived disturbances, presumably of seismic origin, were localized in the near-epicenter area and were accompanied by similar effects in the magnetoconjugate region. Both decreases and increases in the local TEC over the period from 22 UT of January 10 to 08 UT of January 12, 2010 were observed. The horizontal dimensions of the anomalies were ~40° in longitude and ~20° in latitude, with the magnitude of TEC disturbances reaching ~40% relative to the background near the epicenter and more than 50% in the magnetoconjugate area. No significant geomagnetic disturbances within January 1-12, 2010 were observed, i.e., the detected TEC anomalies were manifestations of interplay between processes in the lithosphere-atmosphere-ionosphere system.
A new efficient algorithm is presented for the joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from the special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms. The practical use of our algorithm is shown for blind source separation problems.
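For orientation, here is a naive gradient-based variant of the idea, with a multiplicative update W ← (I + sV)W that keeps the diagonalizer invertible. This is only a hedged sketch of the Frobenius-norm cost, not the paper's quasi-Newton scheme with the sparse block-diagonal Hessian:

```python
import numpy as np

def off_cost(W, mats):
    """Frobenius cost: sum of squared off-diagonal entries of all W A W^T."""
    return sum(np.sum(M**2) - np.sum(np.diag(M)**2)
               for M in (W @ A @ W.T for A in mats))

def joint_diagonalize(mats, n_iter=100):
    n = mats[0].shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        G = np.zeros((n, n))
        for A in mats:
            M = W @ A @ W.T
            G += 4.0 * (M - np.diag(np.diag(M))) @ M   # gradient w.r.t. V at V=0
        np.fill_diagonal(G, 0.0)
        V = -G / max(np.linalg.norm(G, 2), 1e-12)       # ||V||_2 <= 1
        s, base = 0.5, off_cost(W, mats)
        # backtracking keeps the cost non-increasing and I + sV invertible
        while s > 1e-8 and off_cost((np.eye(n) + s * V) @ W, mats) >= base:
            s *= 0.5
        W = (np.eye(n) + s * V) @ W                     # multiplicative update
    return W

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
mats = [B @ np.diag(rng.uniform(1.0, 3.0, 4)) @ B.T for _ in range(8)]
W = joint_diagonalize(mats)
print(off_cost(np.eye(4), mats), off_cost(W, mats))   # the cost should drop
```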
We discuss the role of gravitational excitons/radions in different cosmological scenarios. Gravitational excitons are massive moduli fields which describe conformal excitations of the internal spaces and which, due to their Planck-scale suppressed coupling to matter fields, are WIMPs. It is demonstrated that, depending on the concrete scenario, observational cosmological data set strong restrictions on the allowed masses and initial oscillation amplitudes of these particles.
Cell-free protein synthesis as a novel tool for directed glycoengineering of active erythropoietin
(2018)
As one of the most complex post-translational modifications, glycosylation is widely involved in cell adhesion, cell proliferation and immune response. Nevertheless, glycoproteins with an identical polypeptide backbone mostly differ in their glycosylation patterns. Due to this heterogeneity, the mapping of different glycosylation patterns to their associated function is nearly impossible. In recent years, glycoengineering tools including cell line engineering, chemoenzymatic remodeling and site-specific glycosylation have attracted increasing interest. The therapeutic hormone erythropoietin (EPO) in particular has been investigated by various groups to establish a production process resulting in a defined glycosylation pattern. However, commercially available recombinant human EPO shows batch-to-batch variations in its glycoforms. Therefore, we present an alternative method for the synthesis of active glycosylated EPO with an engineered O-glycosylation site by combining eukaryotic cell-free protein synthesis and site-directed incorporation of non-canonical amino acids with subsequent chemoselective modifications.
We explore fluctuations of the horizontal component of the Earth's magnetic field to identify scaling behaviour of the temporal variability in geomagnetic data recorded by the Intermagnet observatories during solar cycle 23 (years 1996 to 2005). In this work, we use the remarkable ability of scaling wavelet exponents to highlight the singularities associated with discontinuities present in the magnetograms obtained at two magnetic observatories for six intense magnetic storms, including the sudden storm commencements of 14 July 2000, 29-31 October 2003 and 20-21 November 2003. In the active intervals that occurred during geomagnetic storms, we observe a rapid and unidirectional change in the spectral scaling exponent at the time of storm onset. The corresponding fractal features suggest that the dynamics of the whole time series is similar to that of a fractional Brownian motion. Our findings indicate that a relatively sudden change, related to the emergence of persistency in the fractal power exponent fluctuations, precedes an intense magnetic storm. These first results could be useful in the framework of extreme-event prediction studies.
We study the global singularity structure of solutions to 3-D semilinear wave equations with discontinuous initial data. More precisely, using Strichartz' inequality we show that the solutions stay conormal after nonlinear interaction if the Cauchy data are conormal along a circle.
Ground motion with strong velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering. However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood. In this study, we investigate moderate but damaging earthquakes (M_w 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014) while considering the potential static-offset effects. We confirm that the within-event variability of the pulses is large. The identified pulses in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes. We test the effect of fault dips, and the impact of asperity locations and sizes. The simulations reveal that the asperity properties have a strong impact on the pulse periods and amplitudes at nearby stations. Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitude, for the occurrence and properties of pulses produced by the forward-directivity effect. We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances.
Relationship between large-scale ionospheric field-aligned currents and electron/ion precipitations
(2020)
In this study, we have derived field-aligned currents (FACs) from magnetometers onboard the Defense Meteorological Satellite Program (DMSP) satellites. The magnetic latitude versus local time distribution of FACs from DMSP shows dependences on the intensity and orientation of the interplanetary magnetic field (IMF) By and Bz components that are comparable with previous findings, which confirms the reliability of the DMSP FAC data set. With simultaneous measurements of precipitating particles from DMSP, we further investigate the relation between large-scale FACs and precipitating particles. Our results show that precipitating electron and ion fluxes both increase in magnitude and extend to lower latitude for enhanced southward IMF Bz, which is similar to the behavior of FACs. Under weak northward and southward Bz conditions, the locations of the R2 current maxima, at both dusk and dawn sides and in both hemispheres, are found to be close to the maxima of the particle energy fluxes, while for the same IMF conditions, R1 currents are displaced farther from the respective particle flux peaks. The largest displacement (about 3.5°) is found between the downward R1 current and the ion flux peak at the dawn side. Our results suggest that there exist systematic differences in the locations of electron/ion precipitation and large-scale upward/downward FACs. As outlined by the statistical mean of these two parameters, the FAC peaks enclose the particle energy flux peaks in an auroral band at both dusk and dawn sides. Our comparisons also show that particle precipitation at dawn and dusk and in both hemispheres maximizes near the mean R2 current peaks. The particle precipitation flux maxima closer to the R1 current peaks are lower in magnitude. This is opposite to the known feature that R1 currents are on average stronger than R2 currents.
Diffusion maps is a manifold learning algorithm widely used for dimensionality reduction. Using a sample from a distribution, it approximates the eigenvalues and eigenfunctions of associated Laplace-Beltrami operators. Theoretical bounds on the approximation error are, however, generally much weaker than the rates that are seen in practice. This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus. For the data sampling (variance) component of the error we make spatially localized compact embedding estimates on certain Hardy spaces; we study the deterministic (bias) component as a perturbation of the Laplace-Beltrami operator's associated PDE and apply relevant spectral stability results. Using these approaches, we match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretization. We also introduce an alternative normalization for diffusion maps based on Sinkhorn weights. This normalization approximates a Langevin diffusion on the sample and yields a symmetric operator approximation. We prove that it has better convergence compared with the standard normalization on flat domains, and we present a highly efficient rigorous algorithm to compute the Sinkhorn weights.
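As a reference point, the standard diffusion-maps construction (with the usual α = 1 density normalization, not the Sinkhorn variant proposed in the paper) can be sketched as follows; on data sampled from a circle, the leading eigenvalues should approximate the Laplace-Beltrami spectrum up to a kernel-dependent constant:

```python
import numpy as np

def diffusion_maps(X, eps, n_comp=4):
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)
    q = K.sum(axis=1)
    K1 = K / np.outer(q, q)                  # alpha = 1: remove density effects
    d = K1.sum(axis=1)
    A = K1 / np.sqrt(np.outer(d, d))         # symmetric conjugate of the Markov kernel
    vals, vecs = np.linalg.eigh(A)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    phi = vecs / np.sqrt(d)[:, None]         # eigenfunctions of the Markov operator
    lam = (1.0 - vals) / eps                 # Laplacian eigenvalues, up to a constant
    return lam[:n_comp], phi[:, :n_comp]

# data on the unit circle (a 1-torus): the spectrum 0, 1, 1, 4, ... should be
# visible after rescaling by the first nonzero eigenvalue
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
lam, phi = diffusion_maps(np.c_[np.cos(t), np.sin(t)], eps=0.01)
print(lam / max(lam[1], 1e-15))   # expect roughly [0, 1, 1, 4]
```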
Local asymptotic types
(2004)
Concurrent observation technologies have made high-precision real-time data available in large quantities. Data assimilation (DA) is concerned with how to combine these data with physical models to produce accurate predictions. For spatial-temporal models, the ensemble Kalman filter with proper localisation techniques is considered to be a state-of-the-art DA methodology. This article proposes and investigates a localised ensemble Kalman-Bucy filter for nonlinear models with short-range interactions. We derive dimension-independent and component-wise error bounds and show that the long-time path-wise error has only logarithmic dependence on the time range. The theoretical results are verified through some simple numerical tests.
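For intuition, here is a hedged sketch of the discrete-time analog: one ensemble Kalman filter analysis step with covariance localisation through a distance-based taper. The continuous-time ensemble Kalman-Bucy filter studied in the article differs in detail; the taper, grid and observation setup below are toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ens, y, H, R, taper):
    """One localised EnKF analysis step.
    ens (n_ens, d): forecast ensemble; taper (d, d): localisation mask."""
    Xp = ens - ens.mean(axis=0)
    P = taper * (Xp.T @ Xp) / (len(ens) - 1)          # tapered sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(ens))
    return ens + (y_pert - ens @ H.T) @ K.T

d, n_ens = 40, 20
dist = np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
dist = np.minimum(dist, d - dist)                     # periodic distances
taper = np.clip(1.0 - dist / 10.0, 0.0, None)         # crude localisation taper
H = np.eye(d)[::2]                                    # observe every other component
R = 0.25 * np.eye(d // 2)
truth = rng.normal(size=d)
ens = truth + rng.normal(size=(n_ens, d))             # crude forecast ensemble
y = H @ truth + rng.multivariate_normal(np.zeros(d // 2), R)
post = enkf_update(ens, y, H, R, taper)
print(np.mean((ens.mean(0) - truth) ** 2), np.mean((post.mean(0) - truth) ** 2))
```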
Estimability in Cox models
(2016)
Our estimation procedure is the maximum partial likelihood estimate (MPLE), which is the appropriate estimate in the Cox model with a general censoring distribution, covariates, and an unknown baseline hazard rate. We find conditions for estimability and asymptotic estimability. The asymptotic variance matrix of the MPLE is represented and its properties are discussed.
Broad-spectrum antibiotic combination therapy is frequently applied due to increasing resistance development of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER) against Staphylococcus aureus (S. aureus) as the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life-cycle of S. aureus was developed upon time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed that revealed that the combination of VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER.
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
(2010)
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
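The "one extra dimension" remark corresponds to the Liouville equation integrated along characteristics: d(log ρ)/dt = −∇·f along trajectories of the ODE. A minimal sketch for a hypothetical one-dimensional example (logistic growth, Gaussian initial density):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x):        # example vector field: logistic growth dx/dt = x(1 - x)
    return x * (1.0 - x)

def div_f(x):    # its divergence (in 1-D, simply f'(x))
    return 1.0 - 2.0 * x

def augmented(t, z):
    x, log_rho = z
    return [f(x), -div_f(x)]   # Liouville equation along the characteristic

# Gaussian initial density centred at 0.2 with standard deviation 0.1
for x0 in np.linspace(0.05, 0.6, 6):
    log_rho0 = -0.5 * ((x0 - 0.2) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2.0 * np.pi))
    sol = solve_ivp(augmented, (0.0, 3.0), [x0, log_rho0], rtol=1e-8)
    x_T, log_rho_T = sol.y[:, -1]
    print(f"x(3) = {x_T:.3f}   rho(x(3), 3) = {np.exp(log_rho_T):.4f}")
```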
We propose a novel strategy for global sensitivity analysis of ordinary differential equations. It is based on an error-controlled solution of the partial differential equation (PDE) that describes the evolution of the probability density function associated with the input uncertainty/variability. The density yields a more accurate estimate of the output uncertainty/variability, where not only some observables (such as mean and variance) but also structural properties (e.g., skewness, heavy tails, bi-modality) can be resolved up to a selected accuracy. For the adaptive solution of the PDE Cauchy problem we use the Rothe method with multiplicative error correction, which was originally developed for the solution of parabolic PDEs. We show that, unlike in parabolic problems, conservation properties necessitate a coupling of temporal and spatial accuracy to avoid accumulation of spatial approximation errors over time. We provide convergence conditions for the numerical scheme and suggest an implementation using approximate approximations for spatial discretization to efficiently resolve the coupling of temporal and spatial accuracy. The performance of the method is studied by means of low-dimensional case studies. The favorable properties of the spatial discretization technique suggest that this may be the starting point for an error-controlled sensitivity analysis in higher dimensions.
The drug concentrations targeted in meropenem and piperacillin/tazobactam therapy also depend on the susceptibility of the pathogen. Yet, the pathogen is often unknown, and antibiotic therapy is guided by empirical targets. To reliably achieve the targeted concentrations, dosing needs to be adjusted for renal function. We aimed to evaluate a meropenem and piperacillin/tazobactam monitoring program in intensive care unit (ICU) patients by assessing (i) the adequacy of locally selected empirical targets, (ii) whether dosing is adequately adjusted for renal function and individual target, and (iii) whether dosing is adjusted upon target attainment (TA) failure. In a prospective, observational clinical trial of drug concentrations, relevant patient characteristics and microbiological data (pathogen, minimum inhibitory concentration (MIC)) for patients receiving meropenem or piperacillin/tazobactam treatment were collected. If the MIC value was available, a target range of 1-5 × MIC was selected for minimum drug concentrations of both drugs. If the MIC value was not available, 8-40 mg/L and 16-80 mg/L were selected as empirical target ranges for meropenem and piperacillin, respectively. A total of 356 meropenem and 216 piperacillin samples were collected from 108 and 96 ICU patients, respectively. The vast majority of observed MIC values were lower than the empirical target (meropenem: 90.0%, piperacillin: 93.9%), suggesting that the empirical target values should be reduced. TA was found to be low (meropenem: 35.7%, piperacillin: 50.5%), with the lowest TA for severely impaired renal function (meropenem: 13.9%, piperacillin: 29.2%), and observed drug concentrations did not significantly differ between patients with different targets, indicating that dosing was not adequately adjusted for renal function or target. Dosing adjustments were rare for both drugs (meropenem: 6.13%, piperacillin: 4.78%) and, for meropenem, occurred irrespective of TA, revealing that concentration monitoring alone was insufficient to guide dosing adjustment. Empirical targets should regularly be assessed and adjusted based on local susceptibility data. To improve TA, scientific knowledge should be translated into easy-to-use dosing strategies guiding antibiotic dosing.
Vorlesungs-Pflege
(2018)
Similar to ageing processes in software, lectures also degenerate if they are not sufficiently maintained. The reasons for this are examined, along with possible indicators and countermeasures; the perspective throughout is that of a computer scientist. Using three lectures as examples, we explain how the degeneration of courses can be counteracted. In the absence of sufficiently large empirical data, the paper does not deliver irrefutable truths. Rather, one goal is to offer colleagues who observe similar phenomena a first anchor for an internal discourse. A long-term goal is the compilation of a catalogue of measures for maintaining computer science lectures.
Stress drop is a key factor in earthquake mechanics and engineering seismology. However, stress drop calculations based on fault slip can be significantly biased, particularly due to subjectively determined smoothing conditions in the traditional least-squares slip inversion. In this study, we introduce a mechanically constrained Bayesian approach to simultaneously invert for fault slip and stress drop based on geodetic measurements. A Gaussian distribution for stress drop is implemented in the inversion as a prior. We performed several synthetic tests to evaluate the stability and reliability of the inversion approach, considering different fault discretizations, fault geometries, utilized datasets, and variability of the slip direction. We finally apply the approach to the 2010 M8.8 Maule earthquake and invert for the coseismic slip and stress drop simultaneously. Two fault geometries from the literature are tested. Our results indicate that the derived slip models based on both fault geometries are similar, showing major slip north of the hypocenter and relatively weak slip in the south, as indicated in the slip models of other studies. The derived mean stress drop is 5-6 MPa, which is close to the stress drop of ~7 MPa that was independently determined according to force balance in this region by Luttrell et al. (J Geophys Res, 2011). These findings indicate that stress drop values can be consistently extracted from geodetic data.
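Schematically, placing a Gaussian prior on the stress drop rather than on the slip itself leads to a penalized least-squares problem. The sketch below uses placeholder operators (a random Green's function matrix `G`, a toy slip-to-stress map `K`), so it only illustrates the structure of such an inversion, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patch, n_obs = 30, 60
G = rng.normal(size=(n_obs, n_patch))        # placeholder Green's functions
K = np.eye(n_patch) - 0.5 * np.diag(np.ones(n_patch - 1), 1)  # toy slip->stress map

s_true = np.exp(-((np.arange(n_patch) - 12.0) / 5.0) ** 2)    # synthetic slip
d = G @ s_true + 0.05 * rng.normal(size=n_obs)                # synthetic data

sigma_d, sigma_tau = 0.05, 0.5               # data noise, stress-drop prior width
mu_tau = np.full(n_patch, 0.3)               # prior mean of the stress drop K @ s
# posterior mean = argmin ||G s - d||^2/sigma_d^2 + ||K s - mu_tau||^2/sigma_tau^2
A = G.T @ G / sigma_d**2 + K.T @ K / sigma_tau**2
b = G.T @ d / sigma_d**2 + K.T @ mu_tau / sigma_tau**2
s_map = np.linalg.solve(A, b)
print(np.round(s_map[:8], 2))
print(np.round(s_true[:8], 2))
```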
Both aftershocks and geodetically measured postseismic displacements are important markers of the stress relaxation process following large earthquakes. Postseismic displacements can be related to creep-like relaxation in the vicinity of the coseismic rupture by means of inversion methods. However, the results of slip inversions are typically non-unique and subject to large uncertainties. Therefore, we explore the possibility of improving inversions by mechanical constraints. In particular, we take into account the physical understanding that postseismic deformation is stress-driven and occurs in the coseismically stressed zone. We perform joint inversions for coseismic and postseismic slip in a Bayesian framework for the case of the 2004 M6.0 Parkfield earthquake. We perform a number of inversions with different constraints and calculate their statistical significance. According to information criteria, the best result is obtained with a physically reasonable model constrained by the stress condition (namely, that postseismic creep is driven by coseismic stress) and the condition that coseismic slip and large aftershocks are disjoint. This model explains 97% of the coseismic displacements and 91% of the postseismic displacements during days 1-5 following the Parkfield event. It indicates that the major postseismic deformation can generally be explained by a stress relaxation process for the Parkfield case. This result also indicates that the data constraining the coseismic slip model could be enriched postseismically. For the 2004 Parkfield event, we additionally observe an asymmetric relaxation process on the two sides of the fault, which can be explained by a material contrast across the fault of ~1.15 in seismic velocity.
Background/Aims: Angiogenesis plays a key role during embryonic development. The vascular endothelin (ET) system is involved in the regulation of angiogenesis. Lipopolysaccharides (LPS) can induce angiogenesis. The effects of ET blockers on baseline and LPS-stimulated angiogenesis during embryonic development have so far remained unknown. Methods: The blood vessel density (BVD) of chorioallantoic membranes (CAMs), which were treated with saline (control), LPS, and/or the ETA blocker BQ123 and the ETB blocker BQ788, was quantified and analyzed using the IPP 6.0 image analysis program. Moreover, the expressions of ET-1, ET-2, ET-3, ET receptor A (ETRA), ET receptor B (ETRB) and VEGFR2 mRNA during embryogenesis were analyzed by semi-quantitative RT-PCR. Results: All components of the ET system are detectable during chicken embryogenesis. LPS increased angiogenesis substantially. This process was completely blocked by treatment with a combination of the ETA receptor blocker BQ123 and the ETB receptor blocker BQ788. This effect was accompanied by a decrease in ETRA, ETRB, and VEGFR2 gene expression. However, baseline angiogenesis was not affected by combined ETA/ETB receptor blockade. Conclusion: During chicken embryogenesis, LPS-stimulated angiogenesis, but not baseline angiogenesis, is sensitive to combined ETA/ETB receptor blockade.
In a recent paper, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences with trace class curvature was introduced. We show that this is a well-defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
We consider quasicomplexes of pseudodifferential operators on a smooth compact manifold without boundary. To each quasicomplex we associate a complex of symbols. The quasicomplex is elliptic if this symbol complex is exact away from the zero section. We prove that elliptic quasicomplexes are Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes and prove a generalisation of the Atiyah-Singer index theorem.
The human immunodeficiency virus (HIV) can be suppressed by highly active antiretroviral therapy (HAART) in the majority of infected patients. Nevertheless, treatment interruptions inevitably result in viral rebounds from persistent, latently infected cells, necessitating lifelong treatment. Virological failure due to resistance development is a frequent event and the major threat to treatment success. Currently, it is recommended to change treatment after the confirmation of virological failure. However, at the moment virological failure is detected, drug-resistant mutants already replicate in great numbers. They infect numerous cells, many of which will turn into latently infected cells. This pool of cells represents an archive of resistance, which has the potential to limit future treatment options. The objective of this study was to design a treatment strategy for treatment-naive patients that decreases the likelihood of early treatment failure and preserves future treatment options. We propose to apply a single, pro-active treatment switch, following a period of treatment with an induction regimen. The main goal of the induction regimen is to decrease the abundance of randomly generated mutants that confer resistance to the maintenance regimen, thereby increasing subsequent treatment success. Treatment is switched before the overgrowth and archiving of mutant strains that carry resistance against the induction regimen and would limit its future re-use. In silico modelling shows that an optimal trade-off is achieved by switching treatment about 80 days after the initiation of antiviral therapy. Evaluation of the proposed treatment strategy demonstrated significant improvements in terms of resistance archiving and virological response, as compared to conventional HAART. While continuous pro-active treatment alternation improved the clinical outcome in a randomized trial, our results indicate that a similar improvement might also be reached after a single pro-active treatment switch. The clinical validity of this finding, however, remains to be shown by a corresponding trial.
ShapeRotator
(2018)
The quantification of complex morphological patterns typically involves comprehensive shape and size analyses, usually obtained by gathering morphological data from all the structures that capture the phenotypic diversity of an organism or object. Articulated structures are a critical component of overall phenotypic diversity, but data gathered from these structures are difficult to incorporate into modern analyses because of the complexities associated with jointly quantifying 3D shape in multiple structures. While there are existing methods for analyzing shape variation in articulated structures in two-dimensional (2D) space, these methods do not work in 3D, a rapidly growing area of capability and research. Here, we describe a simple geometric rigid rotation approach that removes the effect of random translation and rotation, enabling the morphological analysis of 3D articulated structures. Our method is based on Cartesian coordinates in 3D space, so it can be applied to any morphometric problem that also uses 3D coordinates (e.g., spherical harmonics). We demonstrate the method by applying it to a landmark-based dataset for analyzing shape variation using geometric morphometrics. We have developed an R tool (ShapeRotator) so that the method can be easily implemented in the commonly used R package geomorph and MorphoJ software. This method will be a valuable tool for 3D morphological analyses in articulated structures by allowing an exhaustive examination of shape and size diversity.
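The rigid-rotation step can be illustrated with the classical Kabsch algorithm; the sketch below is a Python analog (ShapeRotator itself is an R tool) that removes random translation and rotation by aligning a landmark configuration to a reference:

```python
import numpy as np

def rigid_align(A, B):
    """Rotate/translate landmark set B (n x 3) onto the reference A (n x 3)."""
    Ac, Bc = A - A.mean(0), B - B.mean(0)   # remove translation
    U, _, Vt = np.linalg.svd(Bc.T @ Ac)     # optimal rotation (Kabsch)
    if np.linalg.det(U @ Vt) < 0.0:         # exclude reflections
        U[:, -1] *= -1.0
    return Bc @ (U @ Vt) + A.mean(0)

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))                # reference landmark configuration
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([2.0, -1.0, 0.5])   # rotated + translated copy
print(np.allclose(rigid_align(A, B), A, atol=1e-10))   # True: effects removed
```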