Natural gas can be temporarily stored in a variety of underground facilities, such as depleted gas and oil fields, natural aquifers and caverns in salt rocks. Being extensively monitored during operations, these systems provide a favourable opportunity to investigate how pressure varies in time and space and possibly induces/triggers earthquakes on nearby faults. Elaborate and detailed numerical modelling techniques are often applied to study gas reservoirs. Here we show the possibilities and discuss the limitations of a flexible and easily formulated tool that can be straightforwardly applied to simulate temporal pore-pressure variations and study their relation with recorded microseismic events. We use the software POEL (POroELastic diffusion and deformation), which computes the poroelastic response to fluid injection/extraction in a horizontally layered poroelastic structure. We further develop its application to address the presence of vertical impermeable faults bounding the reservoir and of multiple injection/extraction sources. Exploiting available information on the reservoir geometry and physical parameters, and records of injection/extraction rates for a gas reservoir in southern Europe, we perform an extensive parametric study considering different model configurations. Comparing modelled spatiotemporal pore-pressure variations with in situ measurements, we show that the inclusion of vertical impermeable faults improves the reproduction of the observations and results in pore-pressure accumulation near the faults and in a variation of the temporal pore-pressure diffusion pattern. To study the relation between gas storage activity and recorded local microseismicity, we apply different seismicity models based on the estimated pore-pressure distribution. This analysis helps to understand the spatial distribution of seismicity and its temporal modulation.
The results show that the observed microseismicity could be partly linked to the storage activity, but the contribution of tectonic background seismicity cannot be excluded.
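As a toy illustration of the pore-pressure diffusion being modelled (the study itself uses the layered poroelastic code POEL, not this sketch), a minimal 1-D finite-difference diffusion solver with a central injection source might look as follows; all parameter values are hypothetical:

```python
import numpy as np

def diffuse(p, D, dx, dt, steps, q=None):
    """Explicit finite-difference steps of dp/dt = D d2p/dx2 with an
    optional injection source q at the centre node (Dirichlet edges)."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        p_new = p.copy()
        p_new[1:-1] += r * (p[2:] - 2 * p[1:-1] + p[:-2])
        if q is not None:
            p_new[len(p) // 2] += q * dt  # injection at the centre node
        p = p_new
    return p

# Hypothetical diffusivity, grid spacing and injection rate:
p = diffuse(np.zeros(101), D=1.0, dx=1.0, dt=0.4, steps=500, q=1.0)
```

A zero-flux (Neumann) condition at a grid edge would play the role of an impermeable fault, causing pressure to accumulate against it, qualitatively as described above.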
We introduce the concept of TRAP (Traces and Permutations), which can roughly be viewed as a wheeled PROP (Products and Permutations) without unit. TRAPs are equipped with a horizontal concatenation and partial trace maps.
Continuous morphisms on an infinite-dimensional topological space and smooth kernels (respectively, smoothing operators) on a closed manifold form a TRAP but not a wheeled PROP.
We build the free objects in the category of TRAPs as TRAPs of graphs and show that a TRAP can be completed to a unitary TRAP (or wheeled PROP).
We further show that it can be equipped with a vertical concatenation, which on the TRAP of linear homomorphisms of a vector space, amounts to the usual composition. The vertical concatenation in the TRAP of smooth kernels gives rise to generalised convolutions.
Graphs whose vertices are decorated by smooth kernels (respectively, smoothing operators) on a closed manifold form a TRAP. From their universal properties we build smooth amplitudes associated with the graph.
We prove that optimal lower eigenvalue estimates of Zhong-Yang type as well as a Cheng-type upper bound for the first eigenvalue hold on closed manifolds assuming only a Kato condition on the negative part of the Ricci curvature.
This generalizes all earlier results based on L^p curvature assumptions.
Moreover, we introduce the Kato condition on compact manifolds with boundary with respect to the Neumann Laplacian, leading to Harnack estimates for the Neumann heat kernel and lower bounds for all Neumann eigenvalues, providing a first insight into handling variable Ricci curvature assumptions in this case.
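For orientation, the classical Zhong-Yang estimate that these results generalize reads, in its original form for closed manifolds with nonnegative Ricci curvature and diameter d:

```latex
\lambda_1(M) \;\ge\; \frac{\pi^2}{d^2},
\qquad \operatorname{Ric} \ge 0, \quad \operatorname{diam}(M) = d ,
```

and the work above replaces the pointwise curvature assumption by a Kato condition on the negative part of the Ricci curvature.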
Each completely regular semigroup is a semilattice of completely simple semigroups. The more specific concept of a strong semilattice provides the concrete product between two arbitrary elements.
We characterize strong semilattices of rectangular groups by so-called disjunctions of identities. Disjunctions of identities generalize the classical concept of an identity and of a variety, respectively.
The rectangular groups will be on the one hand left zero semigroups and right zero semigroups and on the other hand groups of exponent p ∈ P, where P is any set of pairwise coprime natural numbers.
Reentrant tensegrity
(2021)
We present a three-periodic, chiral, tensegrity structure and demonstrate that it is auxetic. Our tensegrity structure is constructed using the chiral symmetry Π⁺ cylinder packing, transforming cylinders to elastic elements and cylinder contacts to incompressible rods. The resulting structure displays local reentrant geometry at its vertices and is shown to be auxetic when modeled as an equilibrium configuration of spatial constraints subject to a quasi-static deformation. When the structure is subsequently modeled as a lattice material with elastic elements, the auxetic behavior is again confirmed through finite element modeling. The cubic symmetry of the original structure means that the auxetic behavior is observed in both perpendicular directions and is close to isotropic in magnitude. This structure could be the simplest three-dimensional analog to the two-dimensional reentrant honeycomb. This, alongside the chirality of the structure, makes it an interesting design target for multifunctional materials.
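The auxetic property discussed above can be illustrated with the defining formula for Poisson's ratio; the strain values below are hypothetical and not taken from the paper's finite element results:

```python
def poisson_ratio(eps_axial, eps_transverse):
    """nu = -eps_transverse / eps_axial; negative nu means auxetic:
    the structure widens transversally when stretched axially."""
    return -eps_transverse / eps_axial

# A stretched sample that also expands sideways has negative nu:
nu = poisson_ratio(eps_axial=0.01, eps_transverse=0.003)
```

A reentrant honeycomb, the two-dimensional analogue mentioned above, behaves in exactly this way.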
Devising optimal interventions for constraining stochastic systems is a challenging endeavor that has to confront the interplay between randomness and dynamical nonlinearity.
Existing intervention methods that employ stochastic path sampling scale poorly with increasing system dimension and are slow to converge.
Here we propose a generally applicable and practically feasible methodology that computes the optimal interventions in a noniterative scheme.
We formulate the optimal dynamical adjustments in terms of deterministically sampled probability flows approximated by an interacting particle system.
Applied to several biologically inspired models, we demonstrate that our method provides the necessary optimal controls in settings with terminal, transient, or generalized collective state constraints and arbitrary system dynamics.
We consider the case of scattering by several obstacles in R^d for d ≥ 2.
In this setting, the absolutely continuous part of the Laplace operator Δ with Dirichlet boundary conditions and the free Laplace operator Δ0 are unitarily equivalent.
For suitable functions that decay sufficiently fast, we have that the difference g(Δ) - g(Δ0) is a trace-class operator and its trace is described by the Krein spectral shift function.
In this article, we study the contribution to the trace (and hence the Krein spectral shift function) that arises from assembling several obstacles relative to a setting where the obstacles are completely separated. In the case of two obstacles, we consider the Laplace operators Δ1 and Δ2 obtained by imposing Dirichlet boundary conditions only on one of the objects.
Our main result in this case states that then g(Δ) - g(Δ1) - g(Δ2) + g(Δ0) is a trace-class operator for a much larger class of functions (including functions of polynomial growth) and that this trace may still be computed by a modification of the Birman–Krein formula. In case g(x) = x^(1/2), the relative trace has a physical meaning as the vacuum energy of the massless scalar field and is expressible as an integral involving boundary layer operators.
Such integrals have been derived in the physics literature using nonrigorous path integral derivations and our formula provides both a rigorous justification as well as a generalization.
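For reference, the classical Birman–Krein formula that the result above modifies expresses the trace through the spectral shift function ξ:

```latex
\operatorname{Tr}\bigl(g(\Delta) - g(\Delta_0)\bigr)
= \int_0^{\infty} g'(\lambda)\, \xi(\lambda)\, \mathrm{d}\lambda .
```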
We present the extension of the Kalmag model, proposed as a candidate for IGRF-13, to the twentieth century.
The dataset underlying its derivation has been complemented by new measurements from satellites, ground-based observatories, and land, marine and airborne surveys.
Like its predecessor, this version is derived from a combination of a Kalman filter and a smoothing algorithm, providing mean models and associated uncertainties. These quantities permit a precise assessment of where the mean solutions can be considered reliable.
The temporal resolution of the core field and the secular variation was set to 0.1 year over the 122 years spanned by the model.
Nevertheless, a posteriori sampled ensembles show that this resolution is effectively achieved only for a limited number of spatial scales and during certain time periods.
Unsurprisingly, the highest accuracy in both space and time for the core field and the secular variation is achieved during the CHAMP and Swarm eras. In this version of Kalmag, a particular effort was made to resolve the small-scale lithospheric field.
Under specific statistical assumptions, the latter was modeled up to spherical harmonic degree and order 1000, and signal from both satellite and survey measurements contributed to its development.
External and induced fields were jointly estimated with the rest of the model. We show that their large scales could be accurately extracted from direct measurements whenever the latter exhibit a sufficiently high temporal coverage.
Temporally resolving these fields down to 3 hours during the CHAMP and Swarm missions gave us access to the link between induced and magnetospheric fields. In particular, the period dependence of the induced signal on the driving one could be directly observed.
The model is available through various physical and statistical quantities on a dedicated website at https://ionocovar.agnld.uni-potsdam.de/Kalmag/.
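The Kalman filter machinery underlying Kalmag can be illustrated in its simplest linear-Gaussian form (a generic textbook sketch; the actual model filters high-dimensional spherical-harmonic coefficients):

```python
import numpy as np

def kf_step(m, P, y, F, Q, H, R):
    """One Kalman predict/update cycle for state mean m, covariance P,
    dynamics x' = F x + noise(Q), observation y = H x + noise(R)."""
    m_pred = F @ m                       # predict
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Track a (nearly) constant scalar from noisy observations:
m, P = np.array([0.0]), np.array([[10.0]])
F, Q, H, R = np.eye(1), 0.01 * np.eye(1), np.eye(1), np.eye(1)
for y in [4.8, 5.2, 5.0, 4.9, 5.1]:
    m, P = kf_step(m, P, np.array([y]), F, Q, H, R)
```

The shrinking posterior covariance P is exactly the kind of uncertainty estimate that accompanies the Kalmag mean models.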
We consider the initial value problem for the Navier-Stokes equations over R^3 × [0, T], with time T > 0, in the spatially periodic setting.
We prove that it induces open injective mappings A_s: B_1^s → B_2^(s-1), where B_1^s, B_2^(s-1) are elements of scales of specially constructed function spaces of Bochner-Sobolev type, parametrized by the smoothness index s ∈ N.
Finally, we prove that the map A_s is surjective if and only if the inverse image A_s^(-1)(K) of any precompact set K from the range of A_s is bounded in the Bochner space L^s([0, T], L^r(T^3)) with the Ladyzhenskaya-Prodi-Serrin numbers s, r.
We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model.
This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more.
Moreover, the forward model is often given as a black box or is impractical to differentiate.
Therefore derivative-free algorithms are highly desirable. We propose a framework, which is built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems.
The method is based on an approximation of the filtering distribution of a novel mean-field dynamical system, into which the inverse problem is embedded as an observation operator.
Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it.
This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior.
Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model; and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach.
The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow; and learning subgrid-scale parameters in a global climate model.
Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and are compared numerically.
The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
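The derivative-free, ensemble-based idea can be sketched for a linear toy problem with a basic ensemble Kalman inversion loop (an illustrative simplification, not the paper's mean-field algorithm; the forward map and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
G = np.array([[1.0, 0.5], [0.0, 1.0]])   # hypothetical linear forward map
u_true = np.array([1.0, -1.0])
Gamma = 0.01 * np.eye(2)                 # observation noise covariance
y = G @ u_true                           # noise-free data for simplicity

J = 200                                  # ensemble size
u = rng.normal(size=(J, 2))              # prior ensemble
for _ in range(20):
    g = u @ G.T                          # forward evaluations only, no derivatives
    du, dg = u - u.mean(axis=0), g - g.mean(axis=0)
    Cug = du.T @ dg / J                  # parameter-observation cross-covariance
    Cgg = dg.T @ dg / J                  # observation covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma)
    y_pert = y + rng.multivariate_normal(np.zeros(2), Gamma, size=J)
    u = u + (y_pert - g) @ K.T           # Kalman-type ensemble update
```

The ensemble mean converges toward u_true; only forward evaluations of G are needed, which is what makes such schemes attractive for black-box models.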
In this paper, we define a variant of Roe algebras for spaces with cylindrical ends and use this to study questions regarding existence and classification of metrics of positive scalar curvature on such manifolds which are collared on the cylindrical end.
We discuss how our constructions are related to relative higher index theory as developed by Chang, Weinberger, and Yu and use this relationship to define higher rho-invariants for positive scalar curvature metrics on manifolds with boundary.
This paves the way for the classification of these metrics.
Finally, we use the machinery developed here to give a concise proof of a result of Schick and the author, which relates the relative higher index with indices defined in the presence of positive scalar curvature on the boundary.
In this paper we consider surfaces which are critical points of the Willmore functional subject to constrained area.
In the case of small area we calculate the corrections to the intrinsic geometry induced by the ambient curvature.
These estimates together with the choice of an adapted geometric center of mass lead to refined position estimates in relation to the scalar curvature of the ambient manifold.
As the loop space of a Riemannian manifold is infinite-dimensional, it is a non-trivial problem to make sense of the "top degree component" of a differential form on it.
In this paper, we show that a formula from finite dimensions generalizes to assign a sensible "top degree component" to certain composite forms, obtained by wedging with the exponential (in the exterior algebra) of the canonical presymplectic 2-form on the loop space.
This construction is a crucial ingredient for the definition of the supersymmetric path integral on the loop space.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Common solutions to problems of estimating and forecasting a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, such as linearity of the system dynamics and Gaussianity of the noise terms.
We investigate fallacies of mis-specification of the noise terms, that is, signal noise and observation noise, with regard to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state due to realizations of heavy tails not considered by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber's ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection, combining frequentist reliability constraints and online inference in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-ended sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and discussed regarding their capabilities in reliable and well-tuned uncertainty quantification. Each section provides an introduction to required terminology and tools as well as simulation experiments on the popular target tracking task and the non-linear, chaotic Lorenz-63 system to showcase practical performance of theoretical considerations.
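The CUSUM idea referred to above can be sketched in a few lines; the drift k and threshold h are hypothetical tuning choices:

```python
def cusum(x, k=0.5, h=5.0):
    """One-sided CUSUM: return the first index at which the cumulative
    statistic exceeds the threshold h, or -1 if no alarm is raised."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + xi - k)   # accumulate evidence of an upward shift
        if s > h:
            return i
    return -1

# Mean shifts from 0 to 1 at index 100; the alarm fires shortly after:
alarm = cusum([0.0] * 100 + [1.0] * 100)
```

After the shift, the statistic grows by 1 - k per step and crosses h after a short, predictable delay.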
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the p-th power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
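The prototype of the inequalities studied here is the classical discrete Hardy inequality on the natural numbers (a_n ≥ 0, p > 1):

```latex
\sum_{n=1}^{\infty} \Bigl( \frac{1}{n} \sum_{k=1}^{n} a_k \Bigr)^{p}
\;\le\; \Bigl( \frac{p}{p-1} \Bigr)^{p} \sum_{n=1}^{\infty} a_n^{p},
```

with the optimal constant (p/(p-1))^p; the dissertation's Theorem 12.1 addresses optimal weights in the much more general graph setting.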
We present general existence and uniqueness results for marked models with pair interactions, exemplified through Gibbs point processes on path space.
More precisely, we study a class of infinite-dimensional diffusions under Gibbsian interactions, in the context of marked point configurations: the starting points belong to R-d, and the marks are the paths of Langevin diffusions.
We use the entropy method to prove existence of an infinite-volume Gibbs point process and use cluster expansion tools to provide an explicit activity domain in which uniqueness holds.
The Gutenberg-Richter (GR) and Omori-Utsu (OU) laws describe the earthquakes' energy release and temporal clustering and are thus of great importance for seismic hazard assessment. Motivated by experimental results, which indicate stress-dependent parameters, we consider a combined global data set of 127 main shock-aftershock sequences and perform a systematic study of the relationship between main shock-induced stress changes and associated seismicity patterns. For this purpose, we calculate space-dependent Coulomb stress changes (ΔCFS) and alternative receiver-independent stress metrics in the surroundings of the main shocks. Our results indicate a clear positive correlation between the GR b-value and the induced stress, contrasting expectations from laboratory experiments and suggesting a crucial role of structural heterogeneity and strength variations. Furthermore, we demonstrate that the aftershock productivity increases nonlinearly with stress, while the OU parameters c and p systematically decrease for increasing stress changes. Our partly unexpected findings can have an important impact on future estimations of the aftershock hazard.
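The GR b-value entering this analysis is commonly estimated with Aki's maximum-likelihood formula; the synthetic catalogue below is a made-up illustration, not the paper's data:

```python
import numpy as np

def b_value(mags, mc):
    """Aki (1965) maximum-likelihood b-value estimate for continuous
    magnitudes at or above the completeness magnitude mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic Gutenberg-Richter catalogue with true b = 1:
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=50_000)
b = b_value(mags, mc=2.0)
```

For binned magnitudes, mc is usually replaced by mc - Δm/2 to correct for the bin width.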
This paper deals with the long-term behavior of positive operator semigroups on spaces of bounded functions and of signed measures, which have applications to parabolic equations with unbounded coefficients and to stochastic analysis. The main results are a Tauberian type theorem characterizing the convergence to equilibrium of strongly Feller semigroups and a generalization of a classical convergence theorem of Doob. None of these results requires any kind of time regularity of the semigroup.
Deriving mechanism-based pharmacodynamic models by reducing quantitative systems pharmacology models
(2023)
Quantitative systems pharmacology (QSP) models integrate comprehensive qualitative and quantitative knowledge about pharmacologically relevant processes. We previously proposed a first approach to leverage the knowledge in QSP models to derive simpler, mechanism-based pharmacodynamic (PD) models. Their complexity, however, is typically still too large to be used in the population analysis of clinical data. Here, we extend the approach beyond state reduction to also include the simplification of reaction rates, elimination of reactions, and analytic solutions. We additionally ensure that the reduced model maintains a prespecified approximation quality not only for a reference individual but also for a diverse virtual population. We illustrate the extended approach for the warfarin effect on blood coagulation. Using the model-reduction approach, we derive a novel small-scale warfarin/international normalized ratio model and demonstrate its suitability for biomarker identification. Due to the systematic nature of the approach in comparison with empirical model building, the proposed model-reduction algorithm provides an improved rationale to build PD models also from QSP models in other applications.
Cell-level systems biology model to study inflammatory bowel diseases and their treatment options
(2023)
To help understand the complex and therapeutically challenging inflammatory bowel diseases (IBDs), we developed a systems biology model of the intestinal immune system that is able to describe main aspects of IBD and different treatment modalities thereof. The model, including key cell types and processes of the mucosal immune response, compiles a large amount of isolated experimental findings from the literature into a larger context and allows for simulations of different inflammation scenarios based on the underlying data and assumptions. In the context of a large and diverse virtual IBD population, we characterized the patients based on their phenotype (in contrast to healthy individuals, they developed persistent inflammation after a trigger event) rather than on a priori assumptions on parameter differences to a healthy individual. This allowed us to reproduce the enormous diversity of predispositions known to lead to IBD. Analyzing different treatment effects, the model provides insight into characteristics of individual drug therapy. We illustrate for anti-TNF-alpha therapy how the model can be used (i) to decide on alternative treatments with best prospects in the case of nonresponse, and (ii) to identify promising combination therapies with other available treatment options.
We present a Reduced Order Model (ROM) which exploits recent developments in Physics Informed Neural Networks (PINNs) for solving inverse problems for the Navier-Stokes equations (NSE). In the proposed approach, the presence of simulated data for the fluid dynamics fields is assumed. A POD-Galerkin ROM is then constructed by applying POD on the snapshot matrices of the fluid fields and performing a Galerkin projection of the NSE (or the modified equations in case of turbulence modeling) onto the POD reduced basis. A POD-Galerkin PINN ROM is then derived by introducing deep neural networks which approximate the reduced outputs, with the input being time and/or parameters of the model. The neural networks incorporate the physical equations (the POD-Galerkin reduced equations) into their structure as part of the loss function. Using this approach, the reduced model is able to approximate unknown parameters such as physical constants or the boundary conditions. The applicability of the proposed ROM is illustrated by three cases: the steady flow around a backward-facing step, the flow around a circular cylinder, and the unsteady turbulent flow around a surface-mounted cubic obstacle.
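The POD step of the pipeline described above amounts to an SVD of the snapshot matrix; the toy snapshots below are synthetic, not an actual NSE simulation:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Return the first r POD modes (left singular vectors) of a snapshot
    matrix whose columns are solution snapshots, plus the retained energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :r], energy[r - 1]

# Toy snapshot matrix built from two spatial structures:
x = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.sin(2 * np.pi * x) * np.cos(0.1 * t)
                     + 0.5 * np.cos(2 * np.pi * x) * np.sin(0.1 * t)
                     for t in range(50)])
modes, retained = pod_basis(S, r=2)
```

In the POD-Galerkin PINN ROM, the reduced coefficients of such modes are what the neural networks predict, with the projected equations entering the loss.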
Introduction:
Hydrocortisone is the standard of care in cortisol replacement therapy for congenital adrenal hyperplasia patients. Challenges in mimicking cortisol circadian rhythm and dosing individualization can be overcome by the support of mathematical modelling. Previously, a non-linear mixed-effects (NLME) model was developed based on clinical hydrocortisone pharmacokinetic (PK) pediatric and adult data. Additionally, a physiologically-based pharmacokinetic (PBPK) model was developed for adults and a pediatric model was obtained using maturation functions for relevant processes. In this work, a middle-out approach was applied. The aim was to investigate whether PBPK-derived maturation functions could provide a better description of hydrocortisone PK inter-individual variability when implemented in the NLME framework, with the goal of providing better individual predictions towards precision dosing at the patient level.
Methods:
Hydrocortisone PK data from 24 adrenal insufficiency pediatric patients and 30 adult healthy volunteers were used for NLME model development, while the PBPK model and maturation functions of clearance and cortisol binding globulin (CBG) were developed based on previous studies published in the literature.
Results:
Clearance (CL) estimates from both approaches were similar for children older than 1 year (CL/F increasing from around 150 L/h to 500 L/h), while CBG concentrations differed across the whole age range (CBG_NLME stable around 0.5 µM vs. a steady increase from 0.35 to 0.8 µM for CBG_PBPK). PBPK-derived maturation functions were subsequently included in the NLME model. After inclusion of the maturation functions, none, some, or all parameters were re-estimated. However, the inclusion of CL and/or CBG maturation functions in the NLME model did not result in improved model performance for the CL maturation function (ΔOFV > -15.36), and the re-estimation of parameters using the CBG maturation function most often led to unstable models or individual CL prediction bias.
Discussion:
Three explanations for the observed discrepancies could be postulated, i) non-considered maturation of processes such as absorption or first-pass effect, ii) lack of patients between 1 and 12 months, iii) lack of correction of PBPK CL maturation functions derived from urinary concentration ratio data for the renal function relative to adults. These should be investigated in the future to determine how NLME and PBPK methods can work towards deriving insights into pediatric hydrocortisone PK.
The objectives of this study were the identification in (morbidly) obese and nonobese patients of (i) the most appropriate body size descriptor for fosfomycin dose adjustments and (ii) the adequacy of the currently employed dosing regimens. Plasma and target site (interstitial fluid of subcutaneous adipose tissue) concentrations after fosfomycin administration (8 g) to 30 surgery patients (15 obese/15 nonobese) were obtained from a prospective clinical trial. After characterization of plasma and microdialysis-derived target site pharmacokinetics via population analysis, short-term infusions of fosfomycin 3 to 4 times daily were simulated. The adequacy of therapy was assessed by probability of pharmacokinetic/pharmacodynamic target attainment (PTA) analysis based on the unbound drug-related targets of a %fT≥MIC (the fraction of time that unbound fosfomycin concentrations exceed the MIC during 24 h) of 70 and an fAUC(0-24h)/MIC (the area under the concentration-time curve from 0 to 24 h for the unbound fraction of fosfomycin relative to the MIC) of 40.8 to 83.3. Lean body weight, fat mass, and creatinine clearance calculated via adjusted body weight (ABW) (CLCRCG_ABW) of all patients (body mass index [BMI] = 20.1 to 52.0 kg/m²) explained a considerable proportion of between-patient pharmacokinetic variability (up to 31.0% relative reduction). The steady-state unbound target site/plasma concentration ratio was 26.3% lower in (morbidly) obese than nonobese patients. For infections with fosfomycin-susceptible pathogens (MIC ≤ 16 mg/L), intermittent "high-dosage" intravenous (i.v.) fosfomycin (8 g, three times daily) was sufficient to treat patients with a CLCRCG_ABW of ≤130 mL/min, irrespective of the pharmacokinetic/pharmacodynamic indices considered.
For infections by Pseudomonas aeruginosa with a MIC of 32 mg/L, when the index fAUC(0-24h)/MIC is applied, fosfomycin might represent a promising treatment option in obese and nonobese patients, especially in combination therapy to complement beta-lactams in settings where carbapenem-resistant P. aeruginosa is critical. In conclusion, fosfomycin showed excellent target site penetration in obese and nonobese patients. Dosing should be guided by renal function rather than obesity status.
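The %fT≥MIC index used above can be illustrated with a simple one-compartment i.v. profile; the parameter values are hypothetical and fosfomycin is treated as essentially unbound (fu ≈ 1):

```python
import numpy as np

def ft_above_mic(dose, V, CL, fu, mic, tau=8.0, n=10_000):
    """Percent of the dosing interval tau [h] with unbound concentration
    above the MIC, for a one-compartment i.v. bolus model (dose [mg],
    volume V [L], clearance CL [L/h], unbound fraction fu)."""
    t = np.linspace(0.0, tau, n)
    c_unbound = fu * (dose / V) * np.exp(-(CL / V) * t)
    return 100.0 * np.mean(c_unbound > mic)

# Hypothetical 8 g dose given every 8 hours:
pct = ft_above_mic(dose=8000, V=25.0, CL=8.0, fu=1.0, mic=16.0)
```

In a probability-of-target-attainment analysis, such a computation is repeated over many virtual patients drawn from the population pharmacokinetic model.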
The drug concentrations targeted in meropenem and piperacillin/tazobactam therapy also depend on the susceptibility of the pathogen. Yet, the pathogen is often unknown, and antibiotic therapy is guided by empirical targets. To reliably achieve the targeted concentrations, dosing needs to be adjusted for renal function. We aimed to evaluate a meropenem and piperacillin/tazobactam monitoring program in intensive care unit (ICU) patients by assessing (i) the adequacy of locally selected empirical targets, (ii) if dosing is adequately adjusted for renal function and individual target, and (iii) if dosing is adjusted in target attainment (TA) failure. In a prospective, observational clinical trial of drug concentrations, relevant patient characteristics and microbiological data (pathogen, minimum inhibitory concentration (MIC)) for patients receiving meropenem or piperacillin/tazobactam treatment were collected. If the MIC value was available, a target range of 1-5 x MIC was selected for minimum drug concentrations of both drugs. If the MIC value was not available, 8-40 mg/L and 16-80 mg/L were selected as empirical target ranges for meropenem and piperacillin, respectively. A total of 356 meropenem and 216 piperacillin samples were collected from 108 and 96 ICU patients, respectively. The vast majority of observed MIC values was lower than the empirical target (meropenem: 90.0%, piperacillin: 93.9%), suggesting empirical target value reductions. TA was found to be low (meropenem: 35.7%, piperacillin 50.5%) with the lowest TA for severely impaired renal function (meropenem: 13.9%, piperacillin: 29.2%), and observed drug concentrations did not significantly differ between patients with different targets, indicating dosing was not adequately adjusted for renal function or target. 
Dosing adjustments were rare for both drugs (meropenem: 6.13%, piperacillin: 4.78%) and for meropenem irrespective of TA, revealing that concentration monitoring alone was insufficient to guide dosing adjustment. Empirical targets should regularly be assessed and adjusted based on local susceptibility data. To improve TA, scientific knowledge should be translated into easy-to-use dosing strategies guiding antibiotic dosing.
The Levenberg–Marquardt regularization for the backward heat equation with fractional derivative
(2022)
The backward heat problem with time-fractional derivative in Caputo's sense is studied. The inverse problem is severely ill-posed in the case when the fractional order is close to unity. A Levenberg-Marquardt method with a new a posteriori stopping rule is investigated. We show that optimal order can be obtained for the proposed method under a Hölder-type source condition. Numerical examples for one and two dimensions are provided.
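The Levenberg-Marquardt idea, a damped Gauss-Newton step with an adaptively chosen damping parameter, can be sketched on a small nonlinear least-squares toy problem (a generic illustration, not the paper's method for the fractional backward heat problem):

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam=1e-2, iters=100):
    """Minimize ||residual(x)||^2 with damped Gauss-Newton steps; the
    damping lam is decreased after accepted steps, increased otherwise."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam / 3.0   # accept: trust the model more
        else:
            lam *= 3.0                     # reject: damp more strongly
    return x

# Recover (a, b) = (2, -1) in y = a * exp(b t) from exact data:
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
```

For ill-posed problems such as the backward heat equation, the key additional ingredient is the stopping rule: iterating too long amplifies noise, which is exactly what the a posteriori rule studied above controls.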
Congenital adrenal hyperplasia (CAH) is the most common form of adrenal insufficiency in childhood; it requires cortisol replacement therapy with hydrocortisone (HC, synthetic cortisol) from birth and therapy monitoring for successful treatment. In children, the less invasive dried blood spot (DBS) sampling with whole blood including red blood cells (RBCs) provides an advantageous alternative to plasma sampling.
Potential differences in binding/association processes between plasma and DBS, however, need to be considered to correctly interpret DBS measurements for therapy monitoring. Although capillary DBS samples would be used in clinical practice, venous DBS cortisol samples from children with adrenal insufficiency were analyzed here, owing to data availability, to directly compare venous DBS with plasma and thus understand potential differences between the two. A previously published HC plasma pharmacokinetic (PK) model was extended by leveraging these DBS concentrations.
In addition to the previously characterized binding of cortisol to albumin (linear process) and corticosteroid-binding globulin (CBG; saturable process), the DBS data enabled the characterization of a linear cortisol association with RBCs, thereby providing a quantitative link between DBS and plasma cortisol concentrations. The ratio between observed cortisol plasma and DBS concentrations varied widely, from 2 to 8. Deterministic simulations of the different cortisol binding/association fractions demonstrated that at higher blood cortisol concentrations, saturation of cortisol binding to CBG occurred, leading to an increase in all other cortisol binding fractions.
In conclusion, a mathematical PK model was developed which links DBS measurements to plasma exposure and thus allows for quantitative interpretation of measurements of DBS samples.
In this article we prove upper bounds for the Laplace eigenvalues λ_k below the essential spectrum for strictly negatively curved Cartan-Hadamard manifolds. Our bound is given in terms of k^2 and specific geometric data of the manifold. This applies also to the particular case of non-compact manifolds whose sectional curvature tends to −∞, where no essential spectrum is present due to a theorem of Donnelly/Li. The result stands in clear contrast to Laplacians on graphs, where such a bound fails to be true in general.
Satellite-measured tidal magnetic signals are of growing importance. These fields are mainly used to infer Earth's mantle conductivity, but also to derive changes in the oceanic heat content. We present a new Kalman filter-based method to derive tidal magnetic fields from satellite magnetometers: KALMAG. The method's advantage is that it yields a precisely estimated posterior error covariance matrix. We present the results of a simultaneous estimation of the magnetic signals of 8 major tides from 17 years of Swarm and CHAMP data. For the first time, robustly derived posterior error distributions are reported along with the tidal magnetic fields. The results are compared to other estimates that are either based on numerical forward models or on satellite inversions of the same data. For all comparisons, maximal differences and the corresponding globally averaged RMSE are reported. We found that the inter-product differences are comparable with the KALMAG-based errors only in a global mean sense. Here, all approaches give values of the same order, e.g., 0.09 nT-0.14 nT for M2. Locally, the KALMAG posterior errors are up to one order of magnitude smaller than the inter-product differences, e.g., 0.12 nT vs. 0.96 nT for M2.
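The key property exploited above, that a Kalman filter carries an explicit posterior error covariance alongside the state estimate, can be illustrated with a generic linear predict/update cycle. This is a textbook sketch, not the KALMAG implementation:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: prior state mean and covariance; z: new observation;
    F, Q: state transition and process noise; H, R: observation model.
    The returned P is exactly the posterior error covariance."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Repeated application shrinks P as observations accumulate, which is what makes the reported posterior error distributions meaningful.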
Diffusion maps is a manifold learning algorithm widely used for dimensionality reduction. Using a sample from a distribution, it approximates the eigenvalues and eigenfunctions of associated Laplace-Beltrami operators. Theoretical bounds on the approximation error are, however, generally much weaker than the rates that are seen in practice. This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus. For the data sampling (variance) component of the error we make spatially localized compact embedding estimates on certain Hardy spaces; we study the deterministic (bias) component as a perturbation of the Laplace-Beltrami operator's associated PDE and apply relevant spectral stability results. Using these approaches, we match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretization. We also introduce an alternative normalization for diffusion maps based on Sinkhorn weights. This normalization approximates a Langevin diffusion on the sample and yields a symmetric operator approximation. We prove that it has better convergence compared with the standard normalization on flat domains, and we present a highly efficient rigorous algorithm to compute the Sinkhorn weights.
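The standard diffusion-map construction analyzed above can be sketched in a few lines. This shows the classical normalization that the paper's Sinkhorn-weighted variant is compared against, not the Sinkhorn algorithm itself:

```python
import numpy as np

def diffusion_maps(X, eps, n_evecs=4):
    """Textbook diffusion-map construction (alpha = 1 density normalization).
    Returns the leading eigenvalues and eigenvectors of the Markov operator,
    which approximate spectral data of the Laplace-Beltrami operator."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-D2 / eps)                                 # Gaussian kernel
    q = K.sum(1)
    K1 = K / np.outer(q, q)            # remove the sampling-density bias
    d = K1.sum(1)
    A = K1 / np.sqrt(np.outer(d, d))   # symmetric conjugate of D^-1 K1
    evals, evecs = np.linalg.eigh(A)
    idx = np.argsort(evals)[::-1][:n_evecs]
    return evals[idx], evecs[:, idx] / np.sqrt(d)[:, None]
```

On the unit circle (the simplest hypertorus), the non-trivial eigenvalues come in pairs, mirroring the cos/sin eigenfunction pairs of the Laplace-Beltrami operator.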
We establish a new approach of treating elliptic boundary value problems (BVPs) on manifolds with boundary and regular corners, up to singularity order 2. Ellipticity and parametrices are obtained in terms of symbols taking values in algebras of BVPs on manifolds of corresponding lower singularity orders. Those refer to Boutet de Monvel's calculus of operators with the transmission property, see Boutet de Monvel (Acta Math 126:11-51, 1971) for the case of smooth boundary. On corner configurations, the operators act in spaces with multiple weights. We mainly study the case of upper left entries in the respective 2 x 2 operator block-matrices of such a calculus. Green operators in the sense of Boutet de Monvel (Acta Math 126:11-51, 1971) analogously appear in singular cases, and they are complemented by contributions of Mellin type. We formulate a result on ellipticity and the Fredholm property in weighted corner spaces, with parametrices of an analogous kind.
Ground motion with strong-velocity pulses can cause significant damage to buildings and structures at certain periods; hence, knowing the period and velocity amplitude of such pulses is critical for earthquake structural engineering.
However, the physical factors relating the scaling of pulse periods with magnitude are poorly understood.
In this study, we investigate moderate but damaging earthquakes (M-w 6-7) and characterize ground-motion pulses using the method of Shahi and Baker (2014), while considering the potential static-offset effects.
We confirm that the within-event variability of the pulses is large. The identified pulses in this study are mostly from strike-slip-like earthquakes. We further perform simulations using the frequency-wavenumber algorithm to investigate the causes of the variability of the pulse periods within and between events for moderate strike-slip earthquakes.
We test the effect of fault dips, and the impact of the asperity locations and sizes. The simulations reveal that the asperity properties have a high impact on the pulse periods and amplitudes at nearby stations.
Our results emphasize the importance of asperity characteristics, in addition to earthquake magnitudes for the occurrence and properties of pulses produced by the forward directivity effect.
We finally quantify and discuss within- and between-event variabilities of pulse properties at short distances.
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally incorporates prior knowledge and provides uncertainty quantification for the resulting estimates. To this end, we introduce a highly flexible, non-parametric representation for the spatially varying ETAS background intensity through a Gaussian process (GP) prior. Combined with classical triggering functions, this results in a new model formulation, namely the GP-ETAS model. We enable tractable and efficient Gibbs sampling by deriving an augmented form of the GP-ETAS inference problem. This novel sampling approach allows us to assess the posterior model variables conditioned on observed earthquake catalogues, i.e., the spatial background intensity and the parameters of the triggering function. Empirical results on two synthetic data sets indicate that GP-ETAS outperforms standard models and thus demonstrate its predictive power for observed earthquake catalogues, including uncertainty quantification for the estimated parameters. Finally, a case study for the L'Aquila region, Italy, with the devastating event on 6 April 2009, is presented.
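The classical triggering structure that GP-ETAS builds on can be sketched as a purely temporal ETAS conditional intensity with constant background rate. This is an illustrative baseline only; the paper replaces the constant background with a spatially varying GP prior, which is not reproduced here:

```python
import numpy as np

def etas_intensity(t, history, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity at time t:
    lambda(t) = mu + sum over past events (t_i, m_i) of
                K * exp(alpha * (m_i - m0)) * (t - t_i + c)**(-p),
    i.e. constant background plus magnitude-scaled Omori-law triggering.
    history: array of shape (n, 2) with columns (time, magnitude)."""
    lam = mu
    for t_i, m_i in history:
        if t_i < t:
            lam += K * np.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return lam
```

Each past event raises the current rate, which is the self-exciting mechanism the abstract refers to.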
Both ground- and satellite-based airglow imaging have significantly contributed to understanding the low-latitude ionosphere, especially the morphology and dynamics of the equatorial ionization anomaly (EIA). The NASA Global-scale Observations of the Limb and Disk (GOLD) mission focuses on far-ultraviolet airglow images from a geostationary orbit at 47.5 degrees W. This region is of particular interest at low magnetic latitudes because of the high magnetic declination (i.e., about -20 degrees) and proximity of the South Atlantic magnetic anomaly. In this study, we characterize an exciting feature of the nighttime EIA using GOLD observations from October 5, 2018 to June 30, 2020. It consists of a wavelike structure of a few thousand kilometers seen as poleward and equatorward displacements of the EIA-crests. Initial analyses show that the synoptic-scale structure is symmetric about the dip equator and appears nearly stationary with time over the night. In quasi-dipole coordinates, maximum poleward displacements of the EIA-crests are seen at about +/- 12 degrees latitude and around 20 and 60 degrees longitude (i.e., in geographic longitude at the dip equator, about 53 degrees W and 14 degrees W). The wavelike structure presents typical zonal wavelengths of about 6.7 x 10^3 km and 3.3 x 10^3 km. The structure's occurrence and wavelength are highly variable on a day-to-day basis with no apparent dependence on geomagnetic activity. In addition, a cluster or quasi-periodic wave train of equatorial plasma depletions (EPDs) is often detected within the synoptic-scale structure. We further outline the difference in observing these EPDs from FUV images and in situ measurements during a GOLD and Swarm mission conjunction.
Background:
Anti-TNFα monoclonal antibodies (mAbs) are a well-established treatment for patients with Crohn’s disease (CD). However, subtherapeutic concentrations of mAbs have been related to a loss of response during the first year of therapy [1]. Therefore, an appropriate dosing strategy is crucial to prevent the underexposure of mAbs for those patients. The aim of our study was to assess the impact of different dosing strategies (fixed dose or body size descriptor adapted) on drug exposure and the target concentration attainment for two different anti-TNFα mAbs: infliximab (IFX, body weight (BW)-based dosing) and certolizumab pegol (CZP, fixed dosing). For this purpose, a comprehensive pharmacokinetic (PK) simulation study was performed.
Methods:
A virtual population of 1000 clinically representative CD patients was generated based on the distribution of CD patient characteristics from an in-house clinical database (n = 116). Seven dosing regimens were investigated: fixed dose and per BW, lean BW (LBW), body surface area, height, body mass index and fat-free mass. The individual body size-adjusted doses were calculated from the body size descriptor values generated for each virtual patient. Then, using published PK models for IFX and CZP in CD patients [2,3], for each patient, 1000 concentration–time profiles were simulated to consider the typical profile of a specific patient as well as the range of possible individual profiles due to unexplained PK variability across patients. For each dosing strategy, the variability in maximum and minimum mAb concentrations (Cmax and Cmin, respectively), the area under the concentration–time curve (AUC) and the per cent of patients reaching the target concentration were assessed during maintenance therapy.
Results:
For IFX and CZP, Cmin showed the highest variability between patients (CV ≈110% and CV ≈80%, respectively) with a similar extent across all dosing strategies. For IFX, the per cent of patients reaching the target (Cmin = 5 µg/ml) was similar across all dosing strategies (~15%). For CZP, the per cent of patients reaching the target average concentration of 17 µg/ml ranged substantially (52–71%), being the highest for LBW-adjusted dosing.
Conclusion:
By using a PK simulation approach, different dosing regimens of IFX and CZP revealed the highest variability for Cmin, the PK parameter most commonly used to guide treatment decisions, independent of the dosing regimen. Our results demonstrate similar target attainment with fixed dosing of IFX compared with the currently recommended BW-based dosing. For CZP, the current fixed dosing strategy leads to a percentage of patients reaching the target comparable to that of the best-performing body size-adjusted dosing (66% vs. 71%, respectively).
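The simulation workflow described in the Methods section can be sketched with a one-compartment steady-state model and log-normal between-patient variability. All parameter values here are hypothetical placeholders, not the published IFX/CZP models cited in the abstract:

```python
import numpy as np

def simulate_cmin(dose_per_kg, n_patients=1000, n_sim=1000, tau=14, seed=0):
    """Monte Carlo trough-concentration (Cmin) simulation for a
    one-compartment IV model at steady state with dosing interval tau (days):
    Cmin = (dose / V) * exp(-CL/V * tau) / (1 - exp(-CL/V * tau)).
    CL 0.3 L/day, V 5 L and the variability magnitudes are hypothetical."""
    rng = np.random.default_rng(seed)
    bw = rng.normal(75, 15, n_patients).clip(40, 140)            # body weight, kg
    cl = 0.3 * np.exp(rng.normal(0, 0.3, (n_sim, n_patients)))   # clearance, L/day
    v = 5.0 * np.exp(rng.normal(0, 0.2, (n_sim, n_patients)))    # volume, L
    ke = cl / v                                                  # elimination rate, 1/day
    dose = dose_per_kg * bw                                      # BW-based dosing, mg
    return dose / v * np.exp(-ke * tau) / (1 - np.exp(-ke * tau))
```

Target attainment for a threshold such as Cmin >= 5 µg/ml is then `(cmin >= 5).mean()`, and replacing the dose line with a fixed dose reproduces the fixed-dosing arm of the comparison.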
The Arnoldi process can be applied to inexpensively approximate matrix functions of the form f(A)v and matrix functionals of the form v*(f(A))*g(A)v, where A is a large square non-Hermitian matrix, v is a vector, and the superscript * denotes transposition and complex conjugation. Here f and g are analytic functions that are defined in suitable regions in the complex plane. This paper reviews available approximation methods and describes new ones that provide higher accuracy for essentially the same computational effort by exploiting available, but generally not used, moment information. Numerical experiments show that in some cases the proposed modifications of the Arnoldi decompositions can improve the accuracy of v*(f(A))*g(A)v about as much as performing an additional step of the Arnoldi process.
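The baseline approximation that such methods build on is f(A)v ~= ||v|| V_m f(H_m) e_1, where V_m and the Hessenberg matrix H_m come from m Arnoldi steps. A minimal sketch of this standard method follows; the paper's moment-based modifications are not reproduced, and f(H_m) is evaluated via an eigendecomposition, which assumes H_m is diagonalizable:

```python
import numpy as np

def arnoldi_fun(A, v, f, m):
    """Approximate f(A) v by ||v|| * V_m f(H_m) e_1 from an m-step Arnoldi
    decomposition A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # lucky breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    evals, evecs = np.linalg.eig(H[:m, :m])
    fH = (evecs * f(evals)) @ np.linalg.inv(evecs)  # f(H_m), assuming diagonalizability
    return beta * (V[:, :m] @ fH[:, 0]).real
```

For m much smaller than n this avoids ever forming f(A), which is the point of the Krylov approach.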
Hidden semi-Markov models generalise hidden Markov models by explicitly modelling the time spent in a given state, the so-called dwell time, using some distribution defined on the natural numbers. While the (shifted) Poisson and negative binomial distribution provide natural choices for such distributions, in practice, parametric distributions can lack the flexibility to adequately model the dwell times. To overcome this problem, a penalised maximum likelihood approach is proposed that allows for a flexible and data-driven estimation of the dwell-time distributions without the need to make any distributional assumption. This approach is suitable for direct modelling purposes or as an exploratory tool to investigate the latent state dynamics. The feasibility and potential of the suggested approach is illustrated in a simulation study and by modelling muskox movements in northeast Greenland using GPS tracking data. The proposed method is implemented in the R-package PHSMM which is available on CRAN.
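The idea of a flexible, data-driven dwell-time estimate under a roughness penalty can be illustrated with a penalized least-squares fit of the empirical pmf. This is a simplified stand-in for the paper's penalized maximum likelihood approach, not the method implemented in the PHSMM package:

```python
import numpy as np

def penalized_dwell_pmf(dwells, max_d, lam=10.0):
    """Nonparametric dwell-time pmf estimate on {1, ..., max_d}: fit the
    empirical pmf under a squared second-difference roughness penalty,
    p = argmin ||p - p_hat||^2 + lam * ||D2 p||^2 (closed-form solve)."""
    counts = np.bincount(dwells, minlength=max_d + 1)[1:max_d + 1]
    p_hat = counts / counts.sum()
    D = np.diff(np.eye(max_d), n=2, axis=0)          # second-difference operator
    p = np.linalg.solve(np.eye(max_d) + lam * D.T @ D, p_hat)
    p = np.clip(p, 0.0, None)                        # project back to a valid pmf
    return p / p.sum()
```

Large `lam` pushes the estimate toward a smooth shape while small `lam` recovers the raw histogram, mirroring the bias-variance trade-off that the penalized likelihood controls.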
In this paper, we develop the mathematical tools needed to explore isotopy classes of tilings on hyperbolic surfaces of finite genus, possibly nonorientable, with boundary, and punctured. More specifically, we generalize results on Delaney-Dress combinatorial tiling theory using an extension of mapping class groups to orbifolds, in turn using this to study tilings of covering spaces of orbifolds. Moreover, we study finite subgroups of these mapping class groups. Our results can be used to extend the Delaney-Dress combinatorial encoding of a tiling to yield a finite symbol encoding the complexity of an isotopy class of tilings. The results of this paper provide the basis for a complete and unambiguous enumeration of isotopically distinct tilings of hyperbolic surfaces.
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
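The final step of the pipeline above, drawing additive model errors conditioned on a covariate, can be sketched with a simple bin-and-resample construction. This is an illustrative assumption about how such a conditional density might be represented, not the paper's estimation procedure:

```python
import numpy as np

def conditional_error_sampler(covariate, errors, n_bins=10):
    """Build a sampler that draws additive model errors conditioned on a
    covariate (e.g. the coarse state): bin the training covariates by
    quantiles and resample errors from the matching bin. This yields a
    state-dependent, potentially non-Gaussian error model."""
    edges = np.quantile(covariate, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, covariate, side="right") - 1, 0, n_bins - 1)
    pools = [errors[idx == b] for b in range(n_bins)]

    def draw(x, rng):
        b = int(np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1))
        pool = pools[b]
        return pool[rng.integers(len(pool))]

    return draw
```

In an ensemble data assimilation experiment, `draw` would be called once per member per forecast step to perturb the coarse model.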
We show how to deduce Rellich inequalities from Hardy inequalities on infinite graphs. Specifically, the obtained Rellich inequality gives an upper bound on a function by the Laplacian of the function in terms of weighted norms. These weights involve the Hardy weight and a function which satisfies an eikonal inequality. The results are proven first for Laplacians and are extended to Schrödinger operators afterwards.
In this article, we propose an all-in-one statement which includes existence, uniqueness, regularity, and numerical approximations of mild solutions for a class of stochastic partial differential equations (SPDEs) with non-globally monotone nonlinearities. The proof of this result exploits the properties of an existing fully explicit space-time discrete approximation scheme, in particular the fact that it satisfies suitable a priori estimates. We also obtain almost sure and strong convergence of the approximation scheme to the mild solutions of the considered SPDEs. We conclude by applying the main result of the article to the stochastic Burgers equations with additive space-time white noise.
A sufficient quantitative understanding of aluminium (Al) toxicokinetics (TK) in man is still lacking, although highly desirable for risk assessment of Al exposure. Baseline exposure and the risk of contamination severely limit the feasibility of TK studies administering the naturally occurring isotope Al-27, both in animals and man. These limitations are absent in studies with Al-26 as a tracer, but tissue data are limited to animal studies. A TK model capable of inter-species translation to make valid predictions of Al levels in humans - especially in toxicologically relevant tissues such as bone and brain - is urgently needed. Here, we present: (i) a curated dataset which comprises all eligible studies with single doses of Al-26 tracer administered as citrate or chloride salts orally and/or intravenously to rats and humans, including ultra-long-term kinetic profiles for plasma, blood, liver, spleen, muscle, bone, brain, kidney, and urine up to 150 weeks; and (ii) the development of a physiology-based (PB) model for Al TK after intravenous and oral administration of aqueous Al citrate and Al chloride solutions in rats and humans. Based on the comprehensive curated Al-26 dataset, we estimated substance-dependent parameters within a non-linear mixed-effect modelling context. The model fitted the heterogeneous Al-26 data very well and was successfully validated against datasets in rats and humans. The presented PBTK model for Al, based on the most extensive and diverse dataset of Al exposure to date, constitutes a major advancement in the field, thereby paving the way towards a more quantitative risk assessment in humans.
We construct and examine the prototype of a deep learning-based ground-motion model (GMM) that is both fully data driven and nonergodic. We formulate ground-motion modeling as an image processing task, in which a specific type of neural network, the U-Net, relates continuous, horizontal maps of earthquake predictive parameters to sparse observations of a ground-motion intensity measure (IM). The processing of map-shaped data allows the natural incorporation of absolute earthquake source and observation site coordinates, and is, therefore, well suited to include site-, source-, and path-specific amplification effects in a nonergodic GMM. Data-driven interpolation of the IM between observation points is an inherent feature of the U-Net and requires no a priori assumptions. We evaluate our model using both a synthetic dataset and a subset of observations from the KiK-net strong motion network in the Kanto basin in Japan. We find that the U-Net model is capable of learning the magnitude-distance scaling, as well as site-, source-, and path-specific amplification effects from a strong motion dataset. The interpolation scheme is evaluated using a fivefold cross validation and is found to provide on average unbiased predictions. The magnitude-distance scaling as well as the site amplification of response spectral acceleration at a period of 1 s obtained for the Kanto basin are comparable to previous regional studies.
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
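The trajectory-based committor estimate at the heart of such a method can be sketched as follows: for each frame, record whether the trajectory next reaches B before A, then average these indicators per cell of the partition. This is a minimal one-trajectory sketch of the general idea, not the paper's estimator with its Voronoi construction and error bounds:

```python
import numpy as np

def empirical_committor(cells, labels, n_cells):
    """Estimate the forward committor on a partition from one trajectory.
    cells: cell index of each frame; labels: +1 if the frame is in B,
    -1 if in A, 0 otherwise. Returns per-cell committor estimates
    (NaN for cells never visited before the last A/B entry)."""
    T = len(cells)
    hit = np.zeros(T)
    nxt = np.nan                      # frames after the last A/B visit stay NaN
    for t in range(T - 1, -1, -1):    # backward sweep: next A/B outcome
        if labels[t] == 1:
            nxt = 1.0
        elif labels[t] == -1:
            nxt = 0.0
        hit[t] = nxt
    counts = np.zeros(n_cells)
    sums = np.zeros(n_cells)
    for c, h in zip(cells, hit):
        if not np.isnan(h):
            counts[c] += 1
            sums[c] += h
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

No Markov model is fit at any point; only the empirical next-hitting statistics of the sampled trajectory are used, which is the feature the abstract emphasizes.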
In this work, we present Raman lidar data (from a Nd:YAG operating at 355 nm, 532 nm and 1064 nm) from the international research village Ny-Ålesund for the time period of January to April 2020, during the Arctic haze season of the MOSAiC winter. We present values of the aerosol backscatter, the lidar ratio and the backscatter Ångström exponent, though the latter depends on wavelength. The aerosol polarization was generally below 2%, indicating mostly spherical particles. We observed that events with high backscatter and high lidar ratio did not coincide. In fact, the highest lidar ratios (LR > 75 sr at 532 nm) were found as early as January and may have been caused by hygroscopic growth, rather than by advection of more continental aerosol. Further, we performed an inversion of the lidar data to retrieve a refractive index and a size distribution of the aerosol. Our results suggest that in the free troposphere (above ≈2500 m) the aerosol size distribution is quite constant in time, with dominance of small particles with a modal radius well below 100 nm. On the contrary, below ≈2000 m in altitude, we frequently found gradients in aerosol backscatter and even size distribution, sometimes in accordance with gradients of wind speed, humidity or elevated temperature inversions, as if the aerosol was strongly modified by vertical displacement in what we call the "mechanical boundary layer". Finally, we present an indication that additional meteorological soundings during the MOSAiC campaign did not necessarily improve the fidelity of air backtrajectories.
We prove a homology vanishing theorem for graphs with positive Bakry-Émery curvature, analogous to a classic result of Bochner on manifolds [3]. Specifically, we prove that if a graph has positive curvature at every vertex, then its first homology group is trivial, where the notion of homology that we use for graphs is the path homology developed by Grigor'yan, Lin, Muranov, and Yau [11]. We moreover prove that the fundamental group is finite for graphs with positive Bakry-Émery curvature, analogous to a classic result of Myers on manifolds [22]. The proofs draw on several separate areas of graph theory, including graph coverings, gain graphs, and cycle spaces, in addition to the Bakry-Émery curvature, path homology, and graph homotopy. The main results follow as a consequence of several different relationships developed among these different areas. Specifically, we show that a graph with positive curvature cannot have a non-trivial infinite cover preserving 3-cycles and 4-cycles, and give a combinatorial interpretation of the first path homology in terms of the cycle space of a graph. Furthermore, we relate gain graphs to graph homotopy and the fundamental group developed by Grigor'yan, Lin, Muranov, and Yau [12], and obtain an alternative proof of their result that the abelianization of the fundamental group of a graph is isomorphic to the first path homology over the integers.
Variational Bayesian inference for nonlinear Hawkes process with Gaussian process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model, or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work of Bayesian inference for Hawkes processes, and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model we apply our methodology on data from different domains and compare it to previously reported results.
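The classical, purely excitatory Hawkes process that this work generalizes can be simulated by Ogata's thinning algorithm. The sketch below uses a parametric exponential kernel and is an illustrative baseline only, not the Gaussian-process self-effects model of the abstract:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with exponential kernel,
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Between events the intensity decays, so the intensity at the current
    time dominates the interval ahead and serves as the thinning bound."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)   # candidate from the dominating rate
        if t >= T:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)
```

For alpha/beta < 1 the process is stable with mean event rate mu/(1 - alpha/beta); inhibitory self-effects, as in the paper, would require a different (e.g. nonlinear link) formulation.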
We derive Onsager-Machlup functionals for countable product measures on weighted l^p subspaces of the sequence space R^N. Each measure in the product is a shifted and scaled copy of a reference probability measure on R that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
We introduce the class of "smooth rough paths" and study their main properties. Working in a smooth setting allows us to discard sewing arguments and focus on algebraic and geometric aspects. Specifically, a Maurer-Cartan perspective is the key to a purely algebraic form of Lyons' extension theorem, the renormalization of rough paths following up on [Bruned et al.: A rough path perspective on renormalization, J. Funct. Anal. 277(11), 2019], as well as a related notion of "sum of rough paths". We first develop our ideas in a geometric rough path setting, as this best resonates with recent works on signature varieties, as well as with the renormalization of geometric rough paths. We then explore extensions to the quasi-geometric and the more general Hopf algebraic setting.