The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta be negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip achieve a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
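As an illustration (not the authors' code), here is a minimal Python sketch of a single GHMC step for a one-dimensional potential. The step size, trajectory length, and refreshment angle `phi` are arbitrary choices; `flip` controls whether the momentum is negated on rejection. With `flip=True` this is the standard scheme; `flip=False` shows the no-flip variant, which, as the abstract notes, is only justified under a modified detailed balance condition not reproduced here.

```python
import numpy as np

def leapfrog(q, p, grad, h, n_steps):
    """Leapfrog integration of Hamiltonian dynamics."""
    p = p - 0.5 * h * grad(q)
    for _ in range(n_steps - 1):
        q = q + h * p
        p = p - h * grad(q)
    q = q + h * p
    p = p - 0.5 * h * grad(q)
    return q, p

def ghmc_step(q, p, grad, potential, h=0.1, n_steps=10, phi=0.3,
              flip=True, rng=None):
    """One GHMC step: partial momentum refreshment followed by a
    Metropolis-corrected molecular dynamics proposal."""
    rng = rng or np.random.default_rng()
    # partial momentum refreshment, mixing in fresh Gaussian noise
    p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal()
    H_old = potential(q) + 0.5 * p ** 2
    q_new, p_new = leapfrog(q, p, grad, h, n_steps)
    H_new = potential(q_new) + 0.5 * p_new ** 2
    if rng.random() < np.exp(H_old - H_new):   # Metropolis accept
        return q_new, p_new
    return q, -p if flip else p                # reject: flip momentum or keep it
```

For a harmonic oscillator (`potential = q**2/2`, `grad = q`) the chain samples a standard normal in `q`.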
The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models over the past years. The concept is relatively straightforward, but usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI should be avoided. We propose using confidence ellipses around the estimated values of event and non-event NRIs, which may provide the best measure of variability around the point estimates. Our developments are illustrated using practical examples from the EPIC-Potsdam study.
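A joint confidence region for the pair (event NRI, non-event NRI) can be sketched as follows, assuming an approximately bivariate normal estimator. The covariance matrix would in practice come from the estimation procedure; the numbers in the test below are made up for illustration. For two degrees of freedom the chi-square quantile has the closed form -2 ln(1-level), so no statistics library is needed.

```python
import numpy as np

def confidence_ellipse(center, cov, level=0.95, n_points=100):
    """Points on the confidence ellipse of a bivariate normal estimate.
    cov must be a symmetric positive semi-definite 2x2 matrix."""
    center = np.asarray(center, dtype=float)
    r2 = -2.0 * np.log(1.0 - level)            # chi2 quantile for df = 2
    eigval, eigvec = np.linalg.eigh(cov)       # principal axes of the ellipse
    t = np.linspace(0.0, 2.0 * np.pi, n_points)
    circle = np.stack([np.cos(t), np.sin(t)])  # parametrized unit circle
    # scale by sqrt(r2 * eigenvalue) along each axis, then rotate and shift
    ellipse = eigvec @ (np.sqrt(r2 * eigval)[:, None] * circle)
    return center[:, None] + ellipse
```

The returned 2 x n_points array can be passed directly to a plotting routine.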
We consider a class of ergodic Hamilton-Jacobi-Bellman (HJB) equations, related to large time asymptotics of non-smooth multiplicative functionals of diffusion processes. Under suitable ergodicity assumptions on the underlying diffusion, we show existence of these asymptotics and that they solve the related HJB equation in the viscosity sense.
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
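The two-stage idea can be illustrated with a deliberately simplified normal-normal conjugate sketch, which is not the model of the study: each patient contributes only a point estimate `theta_i` of an individual parameter (the stage-one summary), and the population-level mean is then updated sequentially from these summaries, so no individual TDM data need to be shared.

```python
def update_population(prior_mean, prior_var, theta_i, within_var):
    """Sequential normal-normal conjugate update of the population mean
    from one patient's estimated individual parameter theta_i.
    within_var models the spread of theta_i around the population mean."""
    gain = prior_var / (prior_var + within_var)       # weight of the new patient
    post_mean = prior_mean + gain * (theta_i - prior_mean)
    post_var = prior_var * within_var / (prior_var + within_var)
    return post_mean, post_var
```

By conjugacy, processing patients one by one gives the same posterior as a batch update, which is what makes the sequential, summary-only workflow possible in this toy setting.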
Transition path theory (TPT) for diffusion processes is a framework for analyzing the transitions of multiscale ergodic diffusion processes between disjoint metastable subsets of state space. Most methods for applying TPT involve the construction of a Markov state model on a discretization of state space that approximates the underlying diffusion process. However, the assumption of Markovianity is difficult to verify in practice, and there are to date no known error bounds or convergence results for these methods. We propose a Monte Carlo method for approximating the forward committor, probability current, and streamlines from TPT for diffusion processes. Our method uses only sample trajectory data and partitions of state space based on Voronoi tessellations. It does not require the construction of a Markovian approximating process. We rigorously prove error bounds for the approximate TPT objects and use these bounds to show convergence to their exact counterparts in the limit of arbitrarily fine discretization. We illustrate some features of our method by application to a process that solves the Smoluchowski equation on a triple-well potential.
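A minimal version of the trajectory-based committor estimate can be sketched as follows (one-dimensional toy setting; not the paper's implementation). Each frame of the trajectory is labeled by whether B or A is entered next, and the labels are averaged over the frames falling in each Voronoi cell:

```python
import numpy as np

def committor_from_trajectory(traj, centers, in_A, in_B):
    """Estimate the forward committor on Voronoi cells from one long 1D
    trajectory: for each cell, the fraction of visits from which B is
    reached before A."""
    n = len(traj)
    hits = np.full(n, np.nan)
    outcome = np.nan
    for i in range(n - 1, -1, -1):     # backward pass: outcome of next A/B entry
        x = traj[i]
        if in_B(x):
            outcome = 1.0
        elif in_A(x):
            outcome = 0.0
        hits[i] = outcome
    # nearest-center (Voronoi) cell assignment in 1D
    cells = np.argmin(np.abs(traj[:, None] - centers[None, :]), axis=1)
    outside = np.array([not (in_A(x) or in_B(x)) for x in traj])
    q = np.full(len(centers), np.nan)
    for k in range(len(centers)):
        mask = (cells == k) & outside & ~np.isnan(hits)
        if mask.any():
            q[k] = hits[mask].mean()
    return q
```

For a symmetric random walk with A = {x <= 0} and B = {x >= 4}, the exact committor at state k is k/4 (gambler's ruin), which the estimate recovers approximately.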
For each compact subset K of the complex plane C which does not surround zero, the Riemann surface Sζ of the Riemann zeta function restricted to the critical half-strip 0 < Re s < 1/2 contains infinitely many schlicht copies of K lying 'over' K. If Sζ also contains at least one such copy for some K which surrounds zero, then the Riemann hypothesis fails.
A function has vanishing mean oscillation (VMO) on R^n if its mean oscillation (the local average of its pointwise deviation from its mean value) is both uniformly bounded over all cubes within R^n and converges to zero with the volume of the cube. The more restrictive class of functions with vanishing lower oscillation (VLO) arises when the mean value is replaced by the minimum value in this definition. It is shown here that each VMO function is the difference of two functions in VLO.
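One common way to write these definitions (notation ours, not taken from the paper) is:

```latex
% Mean oscillation of f over a cube Q, with f_Q the mean of f over Q:
\mathrm{MO}(f,Q) \;=\; \frac{1}{|Q|}\int_Q \bigl|f(x)-f_Q\bigr|\,dx,
\qquad f_Q \;=\; \frac{1}{|Q|}\int_Q f(x)\,dx .
% f lies in VMO iff \sup_Q \mathrm{MO}(f,Q) < \infty and
% \mathrm{MO}(f,Q) \to 0 as |Q| \to 0.
% For VLO the mean f_Q is replaced by the minimum over Q:
\mathrm{LO}(f,Q) \;=\; \frac{1}{|Q|}\int_Q \bigl(f(x)-\min_Q f\bigr)\,dx .
```

Since f - min_Q f is nonnegative, LO dominates MO up to a constant, which is why VLO is the more restrictive class.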
We study those nonlinear partial differential equations which appear as Euler-Lagrange equations of variational problems. On defining weak boundary values of solutions to such equations, we initiate the theory of Lagrangian boundary value problems in spaces of appropriate smoothness. We also analyse whether the concept of mapping degree, which is of current importance, applies to Lagrangian problems.
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
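For illustration, here is a bare-bones stochastic (perturbed-observation) ensemble Kalman filter analysis step; this is the textbook discrete-time update for a linear observation operator, not the continuous formulations discussed above:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state); obs_op: linear observation matrix H."""
    H, X = obs_op, ensemble
    A = X - X.mean(axis=0)                         # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)                      # observation-space anomalies
    n = len(X)
    P_hh = HA.T @ HA / (n - 1) + obs_var * np.eye(H.shape[0])  # innovation cov.
    P_xh = A.T @ HA / (n - 1)                      # state-observation cross cov.
    K = P_xh @ np.linalg.inv(P_hh)                 # Kalman gain
    # perturbing the observations keeps the analysis ensemble spread correct
    obs_pert = obs + np.sqrt(obs_var) * rng.standard_normal(HX.shape)
    return X + (obs_pert - HX) @ K.T
```

With a scalar state, Gaussian prior N(0, 1) and one observation y = 1 with unit noise variance, the analysis ensemble should have mean near 0.5 and variance near 0.5.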
A new efficient algorithm is presented for joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem, and addresses diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The algorithm's efficiency stems from the special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step. Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of this comparison demonstrate that the proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms. The practical use of our algorithm is shown for blind source separation problems.