Institut für Mathematik
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories within the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and tested on a water system and a protein system. Results were compared with those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach within the framework of hybrid Monte Carlo, and exploiting the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
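The generalized momentum update and the Metropolis filtering of MD trajectories described above can be illustrated with a minimal GHMC sketch. This is plain GHMC only: no multiple time stepping, no shadow Hamiltonian, unit masses, and a 1D harmonic potential; all names are illustrative and not ProtoMol's API.

```python
import numpy as np

rng = np.random.default_rng(42)

def leapfrog(q, p, grad, h, n_steps):
    """Velocity-Verlet / leapfrog MD trajectory (reversible, volume preserving)."""
    p = p - 0.5 * h * grad(q)
    for _ in range(n_steps - 1):
        q = q + h * p
        p = p - h * grad(q)
    q = q + h * p
    p = p - 0.5 * h * grad(q)
    return q, p

def ghmc_step(q, p, potential, grad, h, n_steps, phi):
    """One generalized HMC step: partial momentum refresh + MD + Metropolis test."""
    # Generalized momentum update: mix old momentum with fresh Gaussian noise.
    xi = rng.standard_normal(q.shape)
    p = np.cos(phi) * p + np.sin(phi) * xi
    H0 = potential(q) + 0.5 * (p @ p)
    q_new, p_new = leapfrog(q, p, grad, h, n_steps)
    H1 = potential(q_new) + 0.5 * (p_new @ p_new)
    if rng.random() < np.exp(-(H1 - H0)):
        return q_new, p_new          # accept the MD trajectory
    return q, -p                     # reject: momentum flip preserves detailed balance
```

The angle `phi` interpolates between pure MD (`phi = 0`, no momentum refresh) and conventional HMC (`phi = pi/2`, full refresh), which is the "natural stochasticity" the abstract refers to.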
We study the possibility of obtaining a computational turbulence model by means of a non-dissipative regularisation of the compressible atmospheric equations for climate-type applications. We use an α-regularisation (Lagrangian averaging) of the atmospheric equations. For the hydrostatic and compressible atmospheric equations discretised using a finite volume method on unstructured grids, deterministic and non-deterministic numerical experiments are conducted to compare the individual solutions and the statistics of the regularised equations to those of the original model. The impact of the regularisation parameter α is investigated. Our results confirm the principal compatibility of α-regularisation with atmospheric dynamics and encourage further investigation within atmospheric models including complex physical parametrisations.
Two recent works have adapted the Kalman-Bucy filter to an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither formulation requires matrix inversion, except of the observation error covariance, which is frequently diagonal.
We analyse the behaviour of the ODEs involved in these formulations and demonstrate that they stiffen for large magnitudes of the ratio of background-error to observational-error variance, in which case the integration scheme proposed in both formulations can fail. We propose a numerical integration scheme that is both stable and computationally inexpensive. We also develop transform-based alternatives to these Bucy-type approaches, so that the integrations are carried out in ensemble space, where the variables are weights (of dimension equal to the ensemble size) rather than model variables.
Finally, the performance of our ensemble transform Kalman-Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
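In a scalar, linear setting, the second (full-ensemble) Kalman-Bucy analysis formulation described above can be sketched as follows. Many small subcycled Euler steps in pseudo-time stand in here for the stable integrator discussed in the text; the function name and toy setup are illustrative, not the paper's implementation.

```python
import numpy as np

def enkbf_analysis(X, y, H, R_inv, n_sub=2000):
    """Ensemble Kalman-Bucy analysis step: integrate the pseudo-time ODE
        dX_i/ds = -1/2 P H^T R^{-1} (H X_i + H xbar - 2 y),  s in [0, 1],
    with many small Euler substeps (a crude way to cope with the stiffness
    for large background/observation variance ratios). X is n x m, one
    column per ensemble member."""
    X = X.copy()
    ds = 1.0 / n_sub
    for _ in range(n_sub):
        xbar = X.mean(axis=1, keepdims=True)
        A = X - xbar
        P = A @ A.T / (X.shape[1] - 1)        # empirical covariance
        innov = H @ X + H @ xbar - 2.0 * y    # y broadcasts over members
        X = X - 0.5 * ds * (P @ H.T @ R_inv @ innov)
    return X
```

In the linear Gaussian case the flow reproduces the exact Kalman analysis at pseudo-time s = 1, which makes the sketch easy to verify against the standard update.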
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
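A minimal stochastic (perturbed-observations) ensemble Kalman filter analysis step, the baseline technique referred to above, might look like the following sketch; the function name and setup are illustrative and not tied to any particular implementation.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic (perturbed-observations) EnKF analysis step.
    X: state ensemble (n x m), y: observation (d,), H: d x n, R: d x d."""
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (m - 1)                  # empirical forecast covariance
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    # Each member assimilates its own perturbed copy of the observation,
    # which keeps the ensemble spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)
```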
We consider an ensemble Kalman filter in the situation where only partial observations are available. In particular, we investigate the case where the observational space consists of variables that are directly observable with known observational error, and of variables for which only the climatic mean and variance are given. To limit the variance of the latter, poorly resolved variables, a variance-limiting Kalman filter (VLKF) is derived in a variational setting. We analyze the VLKF for a simple linear toy model and determine its range of optimal performance. The VLKF is then explored in an ensemble transform setting for the Lorenz-96 system, and it is shown that incorporating information on the variance of some unobservable variables can improve the skill and increase the stability of the data assimilation procedure.
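The effect of variance limiting can be illustrated with a much-simplified stand-in for the variational VLKF derivation: after the analysis, rescale the anomalies of the poorly resolved variables so their ensemble variance does not exceed the prescribed climatic variance. The function name and setup are hypothetical.

```python
import numpy as np

def limit_variance(X, idx, clim_var):
    """Rescale the ensemble anomalies of the poorly resolved variables `idx`
    so their ensemble variance does not exceed the prescribed climatic
    variance (a crude post-processing stand-in for the variational VLKF).
    X is n x m, one column per ensemble member."""
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar
    for j, v in zip(idx, clim_var):
        s2 = A[j].var(ddof=1)
        if s2 > v:
            A[j] = A[j] * np.sqrt(v / s2)  # shrink anomalies toward the mean
    return xbar + A
```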
We develop a multigrid, multiple time stepping scheme to reduce the computational effort of calculating complex stress interactions in a 2D planar strike-slip fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scales, grid coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the weak interactions, which make only a small contribution, are computed on a coarse level, while the strong interactions are computed on a fine level. This partition leads to a significant reduction in the number of computations, and the reduction in complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is improved by a factor of 10 while retaining reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on the cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
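The cut-off-radius splitting of the stress-interaction matrix can be sketched as follows, in a toy version with 1D coordinates; the paper's actual multigrid hierarchy and stress-conserving construction are not reproduced here, and the names are illustrative.

```python
import numpy as np

def split_interactions(G, coords, r_cut):
    """Split a dense interaction matrix G into strong near-field entries
    (pair distance <= r_cut) and weak far-field entries. In a multiple
    time stepping loop the cheap near-field product is evaluated every
    step, while the expensive far-field product is refreshed only every
    few steps."""
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise distances
    near = np.where(d <= r_cut, G, 0.0)
    return near, G - near
```

In an MTS loop one would then compute `near @ v` at every step and reuse a cached `far @ v` for several steps, trading a small accuracy loss for a large reduction in work.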
Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to remove the highest-frequency oscillations altogether by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded-volume forces. These difficulties can be addressed by an enhanced treatment of holonomically constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with a justification for the use of constraints in molecular modeling. With rigor come limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Suggestions for new approaches are also included.
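The rigid-constraint machinery referred to above rests on iteratively projecting coordinates back onto the constraint manifold. A minimal SHAKE-style Newton iteration for a single bond-length constraint, assuming unit masses, can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def shake_bond(q1, q2, d0, tol=1e-12, max_iter=100):
    """Project two unit-mass particle positions onto the holonomic
    bond-length constraint |q1 - q2| = d0 by Newton iteration on the
    Lagrange multiplier (the core step of SHAKE-type solvers)."""
    for _ in range(max_iter):
        r = q1 - q2
        c = (r @ r) - d0 ** 2          # constraint violation g(q)
        if abs(c) < tol:
            break
        lam = c / (4.0 * (r @ r))      # Newton step for the multiplier
        q1 = q1 - lam * r              # symmetric correction along the bond;
        q2 = q2 + lam * r              # the bond midpoint stays fixed
    return q1, q2
```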
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
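For a scalar state and a linear Gaussian observation, the Bayesian analysis step for a Gaussian-mixture prior, which underlies filters of EGMF type, reduces to a Kalman update of each component plus a reweighting by each component's evidence. The sketch below shows this standard identity, not the EGMF's continuous formulation; the function name is illustrative.

```python
import numpy as np

def gaussian_mixture_update(weights, means, variances, y, r):
    """Bayes update of a scalar Gaussian-mixture prior under a Gaussian
    observation y with error variance r: each component receives a Kalman
    update, and its weight is rescaled by the component's evidence."""
    w = np.asarray(weights, float)
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    s = v + r                                            # innovation variances
    evidence = np.exp(-0.5 * (y - m) ** 2 / s) / np.sqrt(2.0 * np.pi * s)
    w = w * evidence
    w = w / w.sum()                                      # renormalize weights
    k = v / s                                            # per-component Kalman gains
    return w, m + k * (y - m), (1.0 - k) * v
```

With a single component this reduces exactly to the standard Kalman analysis, which is a convenient sanity check.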
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and to nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter, and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
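A deterministic scalar sketch of such an ensemble Kalman-Bucy step, with an Euler discretisation of the continuous-time observation increments, might look as follows; the toy setup and names are illustrative, not the paper's Langevin example.

```python
import numpy as np

def enkbf_step(X, dy, H, R, dt, drift):
    """One Euler step of a deterministic ensemble Kalman-Bucy filter:
    every member follows the model drift plus a gain term nudging it
    toward the observation increment dy. X is n x m, one column per
    ensemble member."""
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar
    P = A @ A.T / (X.shape[1] - 1)          # empirical covariance
    K = P @ H.T @ np.linalg.inv(R)          # Kalman-Bucy gain
    return X + dt * drift(X) + K @ (dy - 0.5 * dt * H @ (X + xbar))
```

With a static truth and noise-free increments the ensemble mean should converge to the observed state, which gives a simple consistency check.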