Year of publication
- 2020 (90)
Document Type
- Article (75)
- Postprint (5)
- Doctoral Thesis (4)
- Conference Proceeding (2)
- Master's Thesis (2)
- Monograph/Edited Volume (1)
- Part of a Book (1)
Keywords
- random point processes (19)
- statistical mechanics (19)
- stochastic analysis (19)
- data assimilation (3)
- 26D15 (2)
- 31C20 (2)
- 35B09 (2)
- 35R02 (2)
- 39A12 (primary) (2)
- 58E35 (secondary) (2)
Institute
- Institut für Mathematik (90)
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
(2020)
In this paper, we consider a nonlinear ill-posed inverse problem with noisy data in the statistical learning setting. A Tikhonov regularization scheme in Hilbert scales is used to reconstruct the estimator from the random noisy data. In this setting, we derive rates of convergence for the regularized solution under certain assumptions on the nonlinear forward operator and suitable prior assumptions. We discuss estimates of the reconstruction error using the approach of reproducing kernel Hilbert spaces.
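The paper treats nonlinear operators with oversmoothing penalties in Hilbert scales; as a much simpler illustration of the basic ingredient, the following sketch shows plain Tikhonov regularization in a reproducing kernel Hilbert space (kernel ridge regression) with a Gaussian kernel. All data and parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np

def kernel_ridge(X, y, lam, gamma=1.0):
    """Tikhonov-regularized estimator in an RKHS (kernel ridge regression):
    solve (K + n*lam*I) alpha = y, where K is the Gaussian-kernel Gram
    matrix; the estimator is f(x) = sum_i alpha_i k(x, x_i)."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                        # Gram matrix k(x_i, x_j)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return K, alpha

# Noisy samples of a smooth target function
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(60)
K, alpha = kernel_ridge(X, y, lam=1e-3, gamma=5.0)
fit = K @ alpha                                    # reconstruction at the data
```

The regularization parameter `lam` trades data fidelity against smoothness of the reconstruction, which is the role the Tikhonov penalty plays in the paper's more general setting.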
Thermophysical modelling and parameter estimation of small solar system bodies via data assimilation
(2020)
Deriving thermophysical properties such as thermal inertia from thermal infrared observations provides useful insights into the structure of the surface material on planetary bodies. The estimation of these properties is usually done by fitting temperature variations calculated by thermophysical models to infrared observations. For multiple free model parameters, traditional methods such as least-squares fitting or Markov chain Monte Carlo methods become computationally too expensive. Consequently, the simultaneous estimation of several thermophysical parameters, together with their corresponding uncertainties and correlations, is often not computationally feasible and the analysis is usually reduced to fitting one or two parameters. Data assimilation (DA) methods have been shown to be robust while sufficiently accurate and computationally affordable even for a large number of parameters. This paper introduces a standard sequential DA method, the ensemble square root filter, for thermophysical modelling of asteroid surfaces. This method is used to re-analyse infrared observations of the MARA instrument, which measured the diurnal temperature variation of a single boulder on the surface of near-Earth asteroid (162173) Ryugu. The thermal inertia is estimated to be 295 ± 18 J m⁻² K⁻¹ s⁻¹/², while all five free parameters of the initial analysis are varied and estimated simultaneously. Based on this thermal inertia estimate, the thermal conductivity of the boulder is estimated to be between 0.07 and 0.12 W m⁻¹ K⁻¹ and the porosity between 0.30 and 0.52. For the first time in thermophysical parameter derivation, correlations and uncertainties of all free model parameters are incorporated in an estimation procedure that is more than 5000 times more efficient than a comparable parameter sweep.
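The analysis step of an ensemble square root filter can be sketched as follows for a single scalar observation (the Whitaker–Hamill deterministic form). This is a generic textbook update, not the authors' implementation; the two-parameter ensemble and all numbers are hypothetical.

```python
import numpy as np

def ensrf_update(E, y_obs, H, R):
    """One analysis step of an ensemble square root filter for one scalar
    observation: the ensemble mean is updated with the Kalman gain K and
    the perturbations with a reduced gain alpha*K, so no perturbed
    observations are needed (deterministic square-root form)."""
    n_ens = E.shape[0]
    xm = E.mean(axis=0)
    A = E - xm                          # ensemble perturbations
    Hx = A @ H                          # perturbations in observation space
    PHt = A.T @ Hx / (n_ens - 1)        # cross covariance P H^T
    HPHt = Hx @ Hx / (n_ens - 1)        # observation-space variance H P H^T
    K = PHt / (HPHt + R)                # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(R / (HPHt + R)))
    xm_a = xm + K * (y_obs - xm @ H)    # mean update
    A_a = A - np.outer(Hx, alpha * K)   # square-root perturbation update
    return xm_a + A_a

# Hypothetical two-parameter ensemble: (thermal inertia, porosity)
rng = np.random.default_rng(1)
E = rng.standard_normal((100, 2)) * [50.0, 0.1] + [300.0, 0.4]
Ea = ensrf_update(E, y_obs=295.0, H=np.array([1.0, 0.0]), R=10.0 ** 2)
```

Because the full ensemble is updated, posterior uncertainties and cross-parameter correlations come out of the analysis for free, which is the key advantage the abstract highlights over one-parameter fitting.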
Purpose: This review provides an overview of the current challenges in oral targeted antineoplastic drug (OAD) dosing and outlines the unexploited value of therapeutic drug monitoring (TDM). Factors influencing the pharmacokinetic exposure in OAD therapy are depicted together with an overview of different TDM approaches. Finally, current evidence for TDM for all approved OADs is reviewed.

Methods: A comprehensive literature search (covering literature published until April 2020), including primary and secondary scientific literature on pharmacokinetics and dose individualisation strategies for OADs, together with US FDA Clinical Pharmacology and Biopharmaceutics Reviews and the Committee for Medicinal Products for Human Use European Public Assessment Reports, was conducted.

Results: OADs are highly potent drugs, which have substantially changed treatment options for cancer patients. Nevertheless, high pharmacokinetic variability and low treatment adherence are risk factors for treatment failure. TDM is a powerful tool to individualise drug dosing, ensure drug concentrations within the therapeutic window and increase treatment success rates. After reviewing the literature for 71 approved OADs, we show that exposure-response and/or exposure-toxicity relationships have been established for the majority. Moreover, TDM has been proven to be feasible for individualised dosing of abiraterone, everolimus, imatinib, pazopanib, sunitinib and tamoxifen in prospective studies. There is a lack of experience in how to best implement TDM as part of clinical routine in OAD cancer therapy.

Conclusion: Sub-therapeutic concentrations and severe adverse events are current challenges in OAD treatment, which can both be addressed by the application of TDM-guided dosing, ensuring concentrations within the therapeutic window.
We present a new model of the geomagnetic field spanning the last 20 years, called Kalmag. Derived from the assimilation of CHAMP and Swarm vector field measurements, it separates the different contributions to the observable field through parameterized prior covariance matrices. To make the inverse problem numerically feasible, it has been sequentialized in time through the combination of a Kalman filter and a smoothing algorithm. The model provides reliable estimates of past, present and future mean fields and associated uncertainties. The version presented here is an update of our IGRF candidates; the amount of assimilated data has been doubled and the considered time window has been extended from [2000.5, 2019.74] to [2000.5, 2020.33].
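The sequentialization mentioned above rests on the standard linear Kalman filter recursion. The sketch below shows one generic forecast-plus-analysis step; it is illustrative only and has nothing to do with the Kalmag parameterization or its prior covariances, and the toy state and numbers are invented.

```python
import numpy as np

def kalman_step(m, P, F, Q, y, H, R):
    """One forecast + analysis step of a linear Kalman filter:
    m, P     prior mean and covariance,
    F, Q     forward model and model-error covariance,
    y, H, R  observations, observation operator, noise covariance."""
    m_f = F @ m                        # forecast mean
    P_f = F @ P @ F.T + Q              # forecast covariance
    S = H @ P_f @ H.T + R              # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
    m_a = m_f + K @ (y - H @ m_f)      # analysis mean
    P_a = (np.eye(len(m)) - K @ H) @ P_f
    return m_a, P_a

# Toy example: two damped field coefficients, only the first is observed
m, P = np.zeros(2), np.eye(2)
F, Q = 0.9 * np.eye(2), 0.1 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.25]])
m_a, P_a = kalman_step(m, P, F, Q, np.array([1.0]), H, R)
```

The analysis covariance `P_a` is what supplies the uncertainty estimates that models of this kind report alongside the mean field.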
When trying to cast the free fermion in the framework of functorial field theory, its chiral anomaly manifests in the fact that it assigns the determinant of the Dirac operator to a top-dimensional closed spin manifold, which is not a number as expected, but an element of a complex line. In functorial field theory language, this means that the theory is twisted, which gives rise to an anomaly theory. In this paper, we give a detailed construction of this anomaly theory, as a functor that sends manifolds to infinite-dimensional Clifford algebras and bordisms to bimodules.
The canonical trace and the Wodzicki residue on classical pseudo-differential operators on a closed manifold are characterised by their locality and shown to be preserved under lifting to the universal covering as a result of their local feature. As a consequence, we lift a class of spectral zeta-invariants using lifted defect formulae which express discrepancies of zeta-regularised traces in terms of Wodzicki residues. We derive Atiyah's L²-index theorem as an instance of the ℤ₂-graded generalisation of the canonical lift of spectral zeta-invariants, and we show that certain lifted spectral zeta-invariants for geometric operators are integrals of Pontryagin and Chern forms.
This work provides a necessary and sufficient condition for a symbolic dynamical system to admit a sequence of periodic approximations in the Hausdorff topology. The key result proved and applied here uses graphs that are called De Bruijn graphs, Rauzy graphs, or Anderson-Putnam complexes, depending on the community. Combining this with a previous result, the present work rigorously justifies the accuracy and reliability of algorithmic methods used to compute numerically the spectra of a large class of self-adjoint operators. These so-called Hamiltonians describe the effective dynamics of a quantum particle in aperiodic media. No restrictions on the structure of these operators other than general regularity assumptions are imposed. In particular, nearest-neighbor correlation is not necessary. Examples for the Fibonacci and the Golay-Rudin-Shapiro sequences are explicitly provided, illustrating this discussion. While the first sequence has been thoroughly studied by physicists and mathematicians alike, a shroud of mystery still surrounds the latter when it comes to spectral properties. In light of this, the present paper gives a new result that might help to uncover a solution.
This paper further improves the Lie group method with Magnus expansion proposed in a previous paper by the authors, to solve some types of direct singular Sturm-Liouville problems. Next, a concrete implementation of the inverse Sturm-Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm to solve inverse Sturm-Liouville problems of higher order (n = 2, 4) are verified successfully. It is observed that the method is successful even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied. In conclusion, this work provides a method that can be adapted successfully for solving a direct (regular/singular) or inverse Sturm-Liouville problem (SLP) of an arbitrary order with arbitrary boundary conditions.
Let H be a Schrödinger operator defined on a noncompact Riemannian manifold Ω, and let W ∈ L^∞(Ω; ℝ). Suppose that the operator H + W is critical in Ω, and let φ be the corresponding Agmon ground state. We prove that if u is a generalized eigenfunction of H satisfying |u| ≤ Cφ in Ω for some constant C > 0, then the corresponding eigenvalue is in the spectrum of H. The conclusion also holds true if for some K ⋐ Ω the operator H admits a positive solution in Ω ∖ K, and |u| ≤ Cψ in Ω ∖ K for some constant C > 0, where ψ is a positive solution of minimal growth in a neighborhood of infinity in Ω. Under natural assumptions, this result holds also in the context of infinite graphs and Dirichlet forms.
We describe a new, original approach to the modelling of the Earth's magnetic field. The overall objective of this study is to reliably render fast variations of the core field and its secular variation. This method combines a sequential modelling approach, a Kalman filter, and a correlation-based modelling step. Sources that most significantly contribute to the field measured at the surface of the Earth are modelled. Their separation is based on strong prior information on their spatial and temporal behaviours. We obtain a time series of model distributions which display behaviours similar to those of recent models based on more classic approaches, particularly at large temporal and spatial scales. Interesting new features and periodicities are visible in our models at smaller time and spatial scales. An important aspect of our method is to yield reliable error bars for all model parameters. These errors, however, are only as reliable as the descriptions of the different sources and the prior information on which they are based. Finally, we used a slightly different version of our method to produce candidate models for the thirteenth edition of the International Geomagnetic Reference Field.
Renormalisation and locality
(2020)
Relationship between large-scale ionospheric field-aligned currents and electron/ion precipitations
(2020)
In this study, we have derived field-aligned currents (FACs) from magnetometers onboard the Defense Meteorological Satellite Program (DMSP) satellites. The magnetic latitude versus local time distribution of FACs from DMSP shows dependences on the intensity and orientation of the interplanetary magnetic field (IMF) By and Bz components comparable with previous findings, which confirms the reliability of the DMSP FAC data set. With simultaneous measurements of precipitating particles from DMSP, we further investigate the relation between large-scale FACs and precipitating particles. Our results show that precipitating electron and ion fluxes both increase in magnitude and extend to lower latitudes for enhanced southward IMF Bz, similar to the behavior of FACs. Under weak northward and southward Bz conditions, the locations of the R2 current maxima, at both dusk and dawn sides and in both hemispheres, are found to be close to the maxima of the particle energy fluxes; for the same IMF conditions, R1 currents are displaced further from the respective particle flux peaks. The largest displacement (about 3.5 degrees) is found between the downward R1 current and the ion flux peak at the dawn side. Our results suggest that there exist systematic differences in the locations of electron/ion precipitation and large-scale upward/downward FACs. As outlined by the statistical means of these two parameters, the FAC peaks enclose the particle energy flux peaks in an auroral band at both dusk and dawn sides. Our comparisons also show that particle precipitation at dawn and dusk, in both hemispheres, maximizes near the mean R2 current peaks. The particle precipitation flux maxima closer to the R1 current peaks are lower in magnitude. This is opposite to the known feature that R1 currents are on average stronger than R2 currents.
Inferring causal relations from observational time series data is a key problem across science and engineering whenever experimental interventions are infeasible or unethical. Increasing data availability over the past few decades has spurred the development of a plethora of causal discovery methods, each addressing particular challenges of this difficult task. In this paper, we focus on an important challenge that is at the core of time series causal discovery: regime-dependent causal relations. Often dynamical systems feature transitions depending on some, often persistent, unobserved background regime, and different regimes may exhibit different causal relations. Here, we assume a persistent and discrete regime variable leading to a finite number of regimes within which we may assume stationary causal relations. To detect regime-dependent causal relations, we combine the conditional independence-based PCMCI method [based on a condition-selection step (PC) followed by the momentary conditional independence (MCI) test] with a regime learning optimization approach. PCMCI allows for causal discovery from high-dimensional and highly correlated time series. Our method, Regime-PCMCI, is evaluated on a number of numerical experiments demonstrating that it can distinguish regimes with different causal directions, time lags, and sign of causal links, as well as changes in the variables' autocorrelation. Furthermore, Regime-PCMCI is employed to observations of El Nino Southern Oscillation and Indian rainfall, demonstrating skill also in real-world datasets.
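PCMCI's conditional independence tests are commonly instantiated with partial correlation. The sketch below shows only that single ingredient on a common-driver example, as a stand-in for the full MCI test; it is not the Regime-PCMCI algorithm, and the data are synthetic.

```python
import numpy as np

def parcorr(x, y, Z):
    """Partial correlation of x and y given the columns of Z: regress
    both on [1, Z] by ordinary least squares and correlate the
    residuals (a simple stand-in for the MCI conditional independence
    test used within PCMCI)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))

# Common-driver example: X and Y are linked only through Z
rng = np.random.default_rng(2)
z = rng.standard_normal(1000)
x = z + 0.1 * rng.standard_normal(1000)
y = z + 0.1 * rng.standard_normal(1000)
r_marginal = parcorr(x, y, np.empty((1000, 0)))   # large: spurious link
r_conditional = parcorr(x, y, z[:, None])         # near zero given Z
```

Conditioning on the common driver removes the spurious dependence, which is exactly the mechanism that lets conditional independence-based methods distinguish direct causal links from correlation.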
Aim: Quantitative and kinetic insights into the drug exposure-disease response relationship might enhance our knowledge of loss of response and support more effective monitoring of inflammatory activity by biomarkers in patients with inflammatory bowel disease (IBD) treated with infliximab (IFX). This study aimed to derive recommendations for dose adjustment and treatment optimisation based on a mechanistic characterisation of the relationship between IFX serum concentration and C-reactive protein (CRP) concentration.

Methods: Data from an investigator-initiated trial included 121 patients with IBD during IFX maintenance treatment. Serum concentrations of IFX, antidrug antibodies (ADA), CRP, and disease-related covariates were determined at the mid-term and end of a dosing interval. Data were analysed using a pharmacometric nonlinear mixed-effects modelling approach. An IFX exposure-CRP model was generated and applied to evaluate dosing regimens to achieve CRP remission.

Results: The generated quantitative model showed that IFX has the potential to inhibit up to 72% (9% relative standard error [RSE]) of CRP synthesis in a patient. The IFX concentration leading to 90% of the maximum CRP synthesis inhibition was 18.4 µg/mL (43% RSE). Presence of ADA was the most influential factor on IFX exposure. With the standard dosing strategy, ≥ 55% of ADA-positive patients experienced CRP nonremission. Shortening the dosing interval and co-therapy with immunomodulators were found to be the most beneficial strategies to maintain CRP remission.

Conclusions: With the generated model we could for the first time establish a robust relationship between IFX exposure and CRP synthesis inhibition, which could be utilised for treatment optimisation in IBD patients.
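The two reported quantities (72% maximum inhibition, 18.4 µg/mL for 90% of maximum inhibition) can be combined in a standard Imax concentration-effect curve. The functional form below is the conventional pharmacometric Imax model, assumed here for illustration rather than taken from the paper's actual model code.

```python
def crp_inhibition(c, i_max=0.72, c90=18.4):
    """Fraction of CRP synthesis inhibited at IFX serum concentration c
    (µg/mL), using a standard Imax model. i_max and c90 are the values
    reported in the abstract; the Imax functional form itself is an
    assumption. For this model C90 = 9 * IC50, hence IC50 = c90 / 9."""
    ic50 = c90 / 9.0
    return i_max * c / (ic50 + c)

# Inhibition at the reported C90 recovers 90% of the maximum effect
at_c90 = crp_inhibition(18.4)
```

Such a curve makes the diminishing return of concentration increases explicit: beyond roughly `c90`, further exposure adds little extra CRP suppression, which is why ADA-driven exposure loss, rather than underdosing per se, dominates nonremission.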
The XI international conference Stochastic and Analytic Methods in Mathematical Physics was held in Yerevan, 2–7 September 2019, and was dedicated to the memory of the great mathematician Robert Adol'fovich Minlos, who passed away in January 2018.
The present volume collects a large majority of the contributions presented at the conference on the following domains of contemporary interest: classical and quantum statistical physics, mathematical methods in quantum mechanics, stochastic analysis, applications of point processes in statistical mechanics. The authors are specialists from Armenia, Czech Republic, Denmark, France, Germany, Italy, Japan, Lithuania, Russia, UK and Uzbekistan.
A particular aim of this volume is to offer young scientists basic material in order to inspire their future research in the wide fields presented here.
Flood loss modeling is a central component of flood risk analysis. Conventionally, this involves univariable and deterministic stage-damage functions. Recent advancements in the field promote the use of multivariable and probabilistic loss models, which consider variables beyond inundation depth and account for prediction uncertainty. Although companies contribute significantly to total loss figures, novel modeling approaches for companies are lacking. Scarce data and the heterogeneity among companies impede the development of company flood loss models. We present three multivariable flood loss models for companies from the manufacturing, commercial, financial, and service sector that intrinsically quantify prediction uncertainty. Based on object-level loss data (n = 1,306), we comparatively evaluate the predictive capacity of Bayesian networks, Bayesian regression, and random forest in relation to deterministic and probabilistic stage-damage functions, serving as benchmarks. The company loss data stem from four postevent surveys in Germany between 2002 and 2013 and include information on flood intensity, company characteristics, emergency response, private precaution, and resulting loss to building, equipment, and goods and stock. We find that the multivariable probabilistic models successfully identify and reproduce essential relationships of flood damage processes in the data. The assessment of model skill focuses on the precision of the probabilistic predictions and reveals that the candidate models outperform the stage-damage functions, while differences among the proposed models are negligible. Although the combination of multivariable and probabilistic loss estimation improves predictive accuracy over the entire data set, wide predictive distributions stress the necessity for the quantification of uncertainty.
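Assessing the precision of probabilistic loss predictions requires a proper scoring rule. A common choice is the continuous ranked probability score (CRPS), sketched below for an empirical (ensemble) predictive distribution; this is a standard metric offered for illustration, not necessarily the skill measure used in the paper, and the loss values are invented.

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Continuous ranked probability score of an empirical predictive
    distribution against a single observation (lower is better):
    CRPS = E|X - y| - 0.5 * E|X - X'|."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - obs))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

# A sharp, well-centred predictive distribution beats a diffuse one
tight = np.linspace(9.5, 10.5, 200)   # hypothetical loss predictions (k EUR)
wide = np.linspace(0.0, 20.0, 200)
obs = 10.0
```

CRPS rewards both calibration and sharpness simultaneously, which is why it suits the paper's point that wide predictive distributions, although honest about uncertainty, carry a cost in score.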
The IGRF offers an important incentive for testing algorithms predicting the Earth's magnetic field changes, known as secular variation (SV), in a 5-year range. Here, we present a SV candidate model for the 13th IGRF that stems from a sequential ensemble data assimilation approach (EnKF). The ensemble consists of a number of parallel-running 3D-dynamo simulations. The assimilated data are geomagnetic field snapshots covering the years 1840 to 2000 from the COV-OBS.x1 model and 2001 to 2020 from the Kalmag model. A spectral covariance localization method, considering the couplings between spherical harmonics of the same equatorial symmetry and same azimuthal wave number, allows decreasing the ensemble size to about 100 while maintaining the stability of the assimilation. The quality of 5-year predictions is tested for the past two decades. These tests show that the assimilation scheme is able to reconstruct the overall SV evolution. They also suggest that a better 5-year forecast is obtained by keeping the SV constant than by dynamically evolving it. However, the quality of the dynamical forecast steadily improves over the full assimilation window (180 years). We therefore propose the instantaneous SV estimate for 2020 from our assimilation as a candidate model for the IGRF-13. The ensemble approach provides uncertainty estimates, which closely match the residual differences with respect to the IGRF-13. Longer term predictions for the evolution of the main magnetic field features over a 50-year range are also presented. We observe the further decrease of the axial dipole at a mean rate of 8 nT/year as well as a deepening and broadening of the South Atlantic Anomaly. The magnetic dip poles are seen to approach an eccentric dipole configuration.
Pinned Gibbs processes
(2020)
Partial clones
(2020)
A set C of operations defined on a nonempty set A is said to be a clone if C is closed under composition of operations and contains all projection mappings. The concept of a clone is one of the main concepts of algebra and has important applications in computer science. A clone can also be regarded as a many-sorted algebra, where the sorts are the n-ary operations defined on the set A for all natural numbers n ≥ 1 and the operations are the so-called superposition operations S^n_m for natural numbers m, n ≥ 1, together with the projection operations as nullary operations. Clones generalize monoids of transformations defined on the set A and satisfy three clone axioms. The most important axiom is the superassociative law, a generalization of the associative law. If the superposition operations are partial, i.e. not everywhere defined, instead of the many-sorted clone algebra one obtains partial many-sorted algebras, the partial clones. Linear terms, linear tree languages and linear formulas form partial clones. In this paper, we give a survey on partial clones and their properties.
Synthetic Aperture Radar (SAR) amplitude measurements from spaceborne sensors are sensitive to surface roughness conditions near their radar wavelength. These backscatter signals are often exploited to assess the roughness of plowed agricultural fields and water surfaces, and less so to complex, heterogeneous geological surfaces. The bedload of mixed sand- and gravel-bed rivers can be considered a mixture of smooth (compacted sand) and rough (gravel) surfaces. Here, we assess backscatter gradients over a large high-mountain alluvial river in the eastern Central Andes with aerially exposed sand and gravel bedload using X-band TerraSAR-X/TanDEM-X, C-band Sentinel-1, and L-band ALOS-2 PALSAR-2 radar scenes. In a first step, we present theory and hypotheses regarding radar response to an alluvial channel bed. We test our hypotheses by comparing backscatter responses over vegetation-free endmember surfaces from inside and outside of the active channel-bed area. We then develop methods to extract smoothed backscatter gradients downstream along the channel using kernel density estimates. In a final step, the local variability of sand-dominated patches is analyzed using Fourier frequency analysis, by fitting stretched-exponential and power-law regression models to the 2-D power spectrum of backscatter amplitude. We find a large range in backscatter depending on the heterogeneity of contiguous smooth- and rough-patches of bedload material. The SAR amplitude signal responds primarily to the fraction of smooth-sand bedload, but is further modified by gravel elements. The sensitivity to gravel is more apparent in longer wavelength L-band radar, whereas C- and X-band is sensitive only to sand variability. Because the spatial extent of smooth sand patches in our study area is typically < 50 m, only higher resolution sensors (e.g., TerraSAR-X/TanDEM-X) are useful for power spectrum analysis.
Our results show the potential for mapping sand-gravel transitions and local geomorphic complexity in alluvial rivers with aerially exposed bedload using SAR amplitude.
The Willmore functional is a function that maps an immersed Riemannian manifold to its total mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth, away from finitely many points. These results and the following are based on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It should be mentioned that our results stand in contrast to previous expansions of the Hawking energy. However, those expansions are obtained on spheres along the light cone at a given point. So far it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
Let M be a compact manifold of dimension n. In this paper, we introduce the mass functions a ≥ 0 ↦ X₊(M)(a) (resp. a ≥ 0 ↦ X₋(M)(a)), defined as the supremum (resp. infimum) of the masses of all metrics on M whose Yamabe constant is larger than a and which are flat on a ball of radius 1 centered at a point p ∈ M. Here, the mass of a metric flat around p is the constant term in the expansion of the Green function of the conformal Laplacian at p. We show that these functions are well defined and have many properties which allow us to obtain applications to the Yamabe invariant (i.e. the supremum of Yamabe constants over the set of all metrics on M).
We construct marked Gibbs point processes in ℝᵈ under quite general assumptions. Firstly, we allow for interaction functionals that may be unbounded and whose range is not assumed to be uniformly bounded. Indeed, our typical interaction admits an a.s. finite but random range. Secondly, the random marks, attached to the locations in ℝᵈ, belong to a general normed space G. They are not bounded, but their law should admit a super-exponential moment. The approach used here relies on the so-called entropy method and large-deviation tools in order to prove tightness of a family of finite-volume Gibbs point processes. An application to infinite-dimensional interacting diffusions is also presented.
The purpose of this paper is to build an algebraic framework suited to regularize branched structures emanating from rooted forests and which encodes the locality principle. This is achieved by means of the universal properties in the locality framework of properly decorated rooted forests. These universal properties are then applied to derive the multivariate regularization of integrals indexed by rooted forests. We study their renormalization, along the lines of Kreimer's toy model for Feynman integrals.
Large emissions
(2020)
We investigate whether kernel regularization methods can achieve minimax convergence rates under a source-condition regularity assumption on the target function. These questions have been considered in past literature, but only under specific assumptions about the decay, typically polynomial, of the spectrum of the kernel mapping covariance operator. From the perspective of distribution-free results, we investigate this issue under much weaker assumptions on the eigenvalue decay, allowing for more complex behavior that can reflect different structures of the data at different scales.
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.
The Coulomb failure stress (CFS) criterion is the most commonly used method for predicting spatial distributions of aftershocks following large earthquakes. However, large uncertainties are always associated with the calculation of Coulomb stress change. The uncertainties mainly arise due to nonunique slip inversions and unknown receiver faults; especially for the latter, results are highly dependent on the choice of the assumed receiver mechanism. Based on binary tests (aftershocks yes/no), recent studies suggest that alternative stress quantities, a distance-slip probabilistic model, as well as deep neural network (DNN) approaches all are superior to CFS with a predefined receiver mechanism. To challenge this conclusion, which might have large implications, we use 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. We also analyze the effect of the magnitude cutoff, grid size variation, and aftershock duration to verify the use of receiver operating characteristic (ROC) analysis for the ranking of stress metrics. The observations suggest that introducing a layered half-space does not improve the stress maps and ROC curves. However, results significantly improve for larger aftershocks and shorter time periods but without changing the ranking. We also go beyond binary testing and apply alternative statistics to test the ability to estimate aftershock numbers; these tests confirm that simple stress metrics perform better than the classic Coulomb failure stress calculations and are also better than the distance-slip probabilistic model.
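ROC analysis for binary aftershock maps reduces to a rank statistic: the area under the ROC curve equals the probability that a randomly chosen cell with aftershocks receives a higher stress score than a randomly chosen cell without. A minimal sketch (the stress values below are hypothetical, not from the study):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via its rank interpretation: the
    probability that a randomly drawn positive cell (with aftershocks)
    outscores a randomly drawn negative cell (ties count 1/2)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else 0.5 if sp == sn else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Coulomb stress changes (MPa) at cells with/without aftershocks
auc = roc_auc([0.8, 0.5, 0.3, 0.1], [0.4, 0.2, -0.1, -0.3])
```

Because AUC depends only on the ranking of cells, it allows different stress metrics with different physical units to be compared on the same footing, which is how the ranking of metrics in the abstract is established.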
This thesis presents, in an organized fashion, the basics required to understand Glauber dynamics as a way of sampling configurations from the Gibbs distribution of the Curie-Weiss Potts model. To this end, essential aspects of discrete-time Markov chains on a finite state space are examined, especially their convergence behavior and the related mixing times. Special emphasis is placed on a consistent and comprehensive presentation of the Curie-Weiss Potts model and its analysis. Finally, Glauber dynamics is studied in general and then applied, by way of example, to the Curie-Weiss model and the Curie-Weiss Potts model. These considerations are supplemented with two computer simulations illustrating the cutoff phenomenon and the temperature dependence of the convergence behavior.
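A minimal sketch of Glauber dynamics for the Curie-Weiss Potts model (not the thesis's own code; all parameters are illustrative). With Hamiltonian H(sigma) = -(1/(2N)) * sum_{i,j} 1{sigma_i = sigma_j}, each Glauber update resamples one uniformly chosen spin from its conditional Gibbs distribution, which depends on the colour counts only.

```python
import math
import random

def glauber_sweep(sigma, q, beta, counts, rng):
    """One sweep (N single-spin updates) of Glauber dynamics for the
    Curie-Weiss Potts model; `counts[k]` tracks how many spins have colour k."""
    N = len(sigma)
    for _ in range(N):
        i = rng.randrange(N)
        counts[sigma[i]] -= 1                 # remove spin i from the field
        # Conditional Gibbs weights: P(sigma_i = k) ~ exp(beta * n_k / N)
        weights = [math.exp(beta * counts[k] / N) for k in range(q)]
        u = rng.random() * sum(weights)
        k = 0
        while u > weights[k]:
            u -= weights[k]
            k += 1
        sigma[i] = k
        counts[k] += 1

# Illustrative run: q = 3 colours, N = 60 spins, low temperature (beta above
# the transition), so the chain should order onto one dominant colour.
rng = random.Random(1)
N, q, beta = 60, 3, 5.0
sigma = [rng.randrange(q) for _ in range(N)]
counts = [sigma.count(k) for k in range(q)]
for _ in range(200):
    glauber_sweep(sigma, q, beta, counts, rng)
dominant_fraction = max(counts) / N
```

Running the same chain at small beta (high temperature) instead keeps the colour counts near N/q, which is the temperature dependence the simulations in the thesis illustrate.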
Interacting particle solutions of Fokker-Planck equations through gradient-log-density estimation
(2020)
Fokker-Planck equations are used extensively across the sciences because they characterise the behaviour of stochastic systems at the level of probability density functions. Although widely used, they admit analytical treatment only in limited settings, and one must often resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean-field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. Our method yields more accurate and less fluctuating statistics than direct stochastic simulations with a comparable number of particles. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example in the context of optimal control.
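The idea can be sketched in one dimension. Below, the deterministic particle flow dx/dt = f(x) - D * grad log rho(x) is integrated for an Ornstein-Uhlenbeck drift f(x) = -x with D = 1 (stationary density standard normal), using a simple Gaussian-kernel score estimator as a stand-in for the paper's novel estimator; all parameters are illustrative.

```python
import numpy as np

def kde_score(x, h=0.3):
    """Kernel estimate of the gradient-log-density (score) at each particle:
    grad log rho_h(x_i) = sum_j K'(x_i - x_j) / sum_j K(x_i - x_j)
    with a Gaussian kernel K of bandwidth h."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / h) ** 2)
    dK = -(d / h ** 2) * K
    return dK.sum(axis=1) / K.sum(axis=1)

# Deterministic interacting-particle flow for the OU equation:
# each particle feels the drift plus a repulsion encoded by the score term.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 3.0, size=500)   # over-dispersed initial ensemble
dt, D = 0.02, 1.0
for _ in range(500):                 # explicit Euler up to t = 10
    x = x + dt * (-x - D * kde_score(x))
# The ensemble variance should relax towards the stationary value 1
# (slightly below, due to the kernel smoothing).
```

Unlike a direct stochastic simulation of the same SDE, the flow is noise-free, which is the source of the less fluctuating statistics mentioned above.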
The rational Krylov subspace method (RKSM) and the low-rank alternating directions implicit (LR-ADI) iteration are established numerical tools for computing low-rank solution factors of large-scale Lyapunov equations. In order to generate the basis vectors for the RKSM, or to extend the low-rank factors within the LR-ADI method, a shifted linear system of equations must be solved repeatedly. For very large systems this solve is usually carried out with iterative methods, leading to inexact solves within this inner iteration (and therefore to "inexact methods"). We show that one can terminate this inner iteration before full precision has been reached and still obtain very good accuracy in the final solution of the Lyapunov equation. In particular, for both the RKSM and the LR-ADI method we derive theory for a relaxation strategy (i.e., increasing the solve tolerance of the inner iteration as the outer iteration proceeds) within the iterative methods for solving the large linear systems. Since these theoretical choices involve unknown quantities, we also provide practical criteria for relaxing the solution tolerance of the inner linear systems. The theory is supported by several numerical examples, which show that the total amount of work for solving Lyapunov equations can be reduced significantly.
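For orientation, here is a sketch of the exact-solve LR-ADI iteration with real shifts (not the inexact variant analysed in the paper) for A X + X A^T + B B^T = 0 with stable A; each step contains the shifted solve that, in the large-scale setting, becomes the inner iteration whose tolerance is relaxed. The diagonal test matrix and the shift choice are illustrative.

```python
import numpy as np

def lr_adi(A, B, shifts):
    """Low-rank ADI for A X + X A^T + B B^T = 0 (real shifts p_j < 0);
    returns Z with X ~= Z Z^T. Each step solves (A + p_j I) W = V, which is
    the inner solve that a relaxation strategy would allow to be inexact."""
    n = A.shape[0]
    I = np.eye(n)
    V = np.sqrt(-2.0 * shifts[0]) * np.linalg.solve(A + shifts[0] * I, B)
    Z = [V]
    for p_prev, p in zip(shifts, shifts[1:]):
        W = np.linalg.solve(A + p * I, V)
        V = np.sqrt(p / p_prev) * (V - (p + p_prev) * W)
        Z.append(V)
    return np.hstack(Z)

# Illustrative example: for a diagonal stable A, taking the shifts equal to
# the eigenvalues of A makes the iteration converge in n steps.
n = 8
A = -np.diag(np.arange(1.0, n + 1))
B = np.ones((n, 1))
Z = lr_adi(A, B, shifts=list(np.diag(A)))
X = Z @ Z.T
residual = np.linalg.norm(A @ X + X @ A.T + B @ B.T)
```

In practice good shifts are not known a priori and the shifted systems are far too large for direct solves; replacing `np.linalg.solve` with a preconditioned Krylov solver and loosening its tolerance over the outer iterations is the setting the relaxation theory addresses.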
Global numerical weather prediction (NWP) models have begun to resolve the mesoscale k^(-5/3) range of the energy spectrum, which is known to impose an inherently finite range of deterministic predictability, since errors grow more rapidly on these scales than on larger scales. However, the dynamics of these errors under the influence of the synoptic-scale k^(-3) range is little studied. Within a perfect-model context, the present work examines error growth under such a hybrid spectrum in Lorenz's original model of 1969, and in a series of identical-twin perturbation experiments using an idealized two-dimensional barotropic turbulence model at a range of resolutions. At the typical resolution of today's global NWP ensembles, error growth remains largely uniform across scales. The fast error growth theoretically expected for a k^(-5/3) spectrum is largely suppressed in the first decade of the mesoscale range by the synoptic-scale k^(-3) range; it emerges, however, once models become fully able to resolve features at roughly a 20-km scale, which corresponds to a grid spacing on the order of a few kilometres.