The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display more favorable sampling efficiency: the traditional implementations with momentum flip achieve a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
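The basic GHMC loop discussed above (partial momentum refreshment, a leapfrog proposal, a Metropolis test, and the conventional momentum flip on rejection) can be sketched for a one-dimensional Gaussian target; the step size, refreshment angle, and trajectory length below are illustrative choices, not settings from the work summarized here.

```python
import numpy as np

def ghmc_gaussian(n_steps=20000, h=0.2, L=5, phi=0.3, seed=5):
    """Minimal GHMC sketch for a 1D standard Gaussian target, U(q) = q^2 / 2.

    Each iteration: partially refresh the momentum, propose a leapfrog
    trajectory, Metropolis-correct it, and negate the momentum on rejection
    (the standard detailed-balance convention discussed above)."""
    rng = np.random.default_rng(seed)
    q, p = 0.0, rng.standard_normal()
    samples = np.empty(n_steps)
    for n in range(n_steps):
        # partial momentum refreshment: mix old momentum with fresh noise
        p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal()
        H0 = 0.5 * (q * q + p * p)
        qn, pn = q, p
        for _ in range(L):            # leapfrog steps; grad U(q) = q
            pn -= 0.5 * h * qn
            qn += h * pn
            pn -= 0.5 * h * qn
        H1 = 0.5 * (qn * qn + pn * pn)
        if rng.random() < np.exp(min(0.0, H0 - H1)):
            q, p = qn, pn             # accept the proposal
        else:
            p = -p                    # reject: momentum flip
        samples[n] = q
    return samples

samples = ghmc_gaussian()
```

The momentum-flip line is exactly the step that the modified detailed balance condition removes; deleting it while keeping the sampler exact is the nontrivial part addressed in the abstract.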
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
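The discrete-time ensemble Kalman analysis step at the heart of these methods can be written in a few lines. The sketch below is the textbook stochastic EnKF with perturbed observations and a linear observation operator, not the continuous formulations developed in the work summarized above.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step (perturbed observations).

    ensemble : (d, N) array of prior ensemble members
    y        : (m,) observation vector
    H        : (m, d) linear observation operator
    R        : (m, m) observation error covariance
    """
    d, N = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb the observation so the posterior spread is statistically correct
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(0)
prior = rng.normal(2.0, 1.0, size=(1, 500))        # scalar state, N = 500
H = np.array([[1.0]])
R = np.array([[1.0]])
post = enkf_analysis(prior, np.array([0.0]), H, R, rng)
# the posterior ensemble mean should move from the prior mean (2) toward
# the observation (0); the exact Kalman value here is 1 with variance 0.5
```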
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
Ensemble Kalman filter techniques are widely used to assimilate observations into dynamical models. The phase-space dimension is typically much larger than the number of ensemble members, which leads to inaccurate results in the computed covariance matrices. These inaccuracies can lead, among other things, to spurious long-range correlations, which can be eliminated by Schur-product-based localization techniques. In this article, we propose a new technique for implementing such localization techniques within the class of ensemble transform/square-root Kalman filters. Our approach relies on a continuous embedding of the Kalman filter update for the ensemble members, i.e. we state an ordinary differential equation (ODE) with solutions that, over a unit time interval, are equivalent to the Kalman filter update. The ODE formulation forms a gradient system with the observations as a cost functional. Besides localization, the new ODE ensemble formulation should also find useful application in the context of nonlinear observation operators and observations that arrive continuously in time.
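The continuous-embedding idea can be illustrated in the simplest possible setting: a directly observed scalar state. In the continuum limit, integrating the analysis ODE over a unit interval reproduces the Kalman update exactly, and the Euler discretization below (a sketch with illustrative step counts) comes close.

```python
import numpy as np

def continuous_kalman_update(ensemble, y, r, n_steps=1000):
    """Euler integration of a continuous Kalman analysis ODE,
    dx_i/ds = -(1/2) p(s) (x_i + xbar - 2 y) / r  over s in [0, 1],
    for a scalar state observed directly with error variance r.
    p(s) is the evolving ensemble variance."""
    x = ensemble.copy()
    ds = 1.0 / n_steps
    for _ in range(n_steps):
        xbar = x.mean()
        p = x.var(ddof=1)                       # ensemble covariance p(s)
        x = x - 0.5 * ds * p * (x + xbar - 2.0 * y) / r
    return x

rng = np.random.default_rng(1)
prior = rng.normal(2.0, 1.0, size=2000)          # prior N(2, 1)
post = continuous_kalman_update(prior, y=0.0, r=1.0)
# closed-form Kalman result for this setup: mean 1, variance 0.5
```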
Author summary

Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.

Understanding the decision process underlying gaze control is an important question in cognitive neuroscience, with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. In order to simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: first, we propose a new model for saccade generation in scene viewing; second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
We present a Monte Carlo technique for sampling from the canonical distribution in molecular dynamics. The method is built upon the Nose-Hoover constant temperature formulation and the generalized hybrid Monte Carlo method. In contrast to standard hybrid Monte Carlo methods only the thermostat degree of freedom is stochastically resampled during a Monte Carlo step.
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high-frequency adjustment processes in the model dynamics. Various methods have been devised to spread out the analysis increments continuously over a fixed time interval centred about the analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
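In its simplest form, the incremental analysis update (IAU) idea mentioned above amounts to spreading the analysis increment uniformly over a window of model steps as an extra forcing, rather than applying it as one jump. The scalar linear decay model below is a hypothetical stand-in, used only to illustrate the mechanics.

```python
import numpy as np

def iau_forecast(x0, increment, step, M):
    """Incremental analysis update: apply increment / M as extra forcing
    after each of M model steps, avoiding a discontinuous jump in the
    trajectory at the analysis time."""
    x = x0
    path = [x]
    for _ in range(M):
        x = step(x) + increment / M
        path.append(x)
    return x, np.array(path)

def direct_forecast(x0, increment, step, M):
    """Conventional intermittent update: add the whole increment at once,
    then integrate M model steps."""
    x = x0 + increment
    for _ in range(M):
        x = step(x)
    return x

step = lambda x: 0.9 * x                 # hypothetical linear decay model
x_iau, path = iau_forecast(1.0, 0.5, step, M=5)
x_dir = direct_forecast(1.0, 0.5, step, M=5)
```

With these illustrative numbers the mollified trajectory remains continuous, while the intermittent update jumps by the full increment at the analysis time and then decays.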
We develop a multigrid, multiple-time-stepping scheme to reduce the computational effort of calculating complex stress interactions in a 2D planar strike-slip fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scales, grid coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first, with a small contribution, is computed on a coarse level; the rest, for strong interactions, on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is improved by a factor of 10 while retaining reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
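In one dimension, the discrete optimal transport problem behind such a transform filter has a closed-form solution: with a squared-distance cost, the monotone (sorted) coupling between the weighted prior particles and N equally weighted posterior particles is optimal. The sketch below implements only that special case; the general multivariate problem requires a linear-programming solver.

```python
import numpy as np

def etpf_1d(x, w):
    """Ensemble transform via optimal transport for scalar samples.

    Couples the weighted prior ensemble (x_i, w_i) to N equally weighted
    posterior particles through the monotone coupling, which solves the 1D
    optimal transport problem with squared-distance cost exactly.
    Returns the transformed, equally weighted ensemble."""
    N = len(x)
    order = np.argsort(x)
    xs, ws = x[order], w[order]
    T = np.zeros((N, N))                  # coupling matrix
    i, supply = 0, ws[0]
    for j in range(N):                    # each posterior particle needs 1/N mass
        demand = 1.0 / N
        while demand > 1e-12:
            move = min(supply, demand)
            T[i, j] += move
            supply -= move
            demand -= move
            if supply < 1e-12:
                if i == N - 1:
                    break
                i += 1
                supply = ws[i]
    return N * (T.T @ xs)                 # x_j^a = N * sum_i T_ij x_i

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=1000)       # prior samples from N(0, 1)
# importance weights for a Gaussian likelihood centred at y = 1, unit variance
w = np.exp(-0.5 * (x - 1.0) ** 2)
w /= w.sum()
xa = etpf_1d(x, w)
# the exact posterior is N(0.5, 0.5); the transformed ensemble should match
# its moments, and its mean equals the weighted prior mean by construction
```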
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE constrained Bayesian inverse problem.
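A minimal sketch for a one-dimensional standard Gaussian target illustrates the two ingredients highlighted above: covariance preconditioning, and noise generated from the centred ensemble itself so that no covariance square root is ever formed. The finite-ensemble drift correction follows the published formulation; the step size, particle number, and target are illustrative choices.

```python
import numpy as np

def aldi_gaussian_target(N=100, h=0.01, n_steps=4000, seed=3):
    """Euler-Maruyama sketch of affine-invariant interacting Langevin
    dynamics (ALDI) for a 1D standard Gaussian target, grad log pi(x) = -x.

    The empirical covariance preconditions the drift, and the noise is a
    random combination of centred ensemble members, avoiding any matrix
    square root.  Samples from the final 1000 steps are returned."""
    rng = np.random.default_rng(seed)
    x = rng.normal(3.0, 0.5, size=N)       # deliberately biased start
    d = 1
    samples = []
    for step in range(n_steps):
        xc = x - x.mean()                   # centred ensemble
        C = xc @ xc / (N - 1)               # empirical covariance (scalar here)
        grad = -x                           # grad log pi for N(0, 1)
        xi = rng.standard_normal((N, N))
        noise = np.sqrt(2.0 * h / (N - 1)) * (xi @ xc)
        # drift + finite-ensemble correction (d + 1)/N * (x_i - xbar) + noise
        x = x + h * C * grad + h * (d + 1) / N * xc + noise
        if step >= n_steps - 1000:
            samples.append(x.copy())
    return np.concatenate(samples)

aldi_samples = aldi_gaussian_target()
```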
This contribution investigates the connection between differential-algebraic equations (DAEs) and vector fields on manifolds. To this end, the notion of a regular DAE is introduced, where a regular DAE is understood to be a DAE whose solution set is identical to the solution set of a vector field. Starting from known results on the solution set of a vector field, analogous results are derived for the solution set of a regular DAE. A reduction method is given which leads to a criterion for the regularity of a DAE and to the definition of the index of a nonlinear DAE. It is moreover shown that arbitrary vector fields can be realized by regular DAEs in such a way that the solution set of the vector field is identical to that of the realizing DAE. Finally, the results obtained for autonomous DAEs are carried over to the case of nonautonomous DAEs.
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
(2017)
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture deteriorates the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.
Data assimilation algorithms are used to estimate the states of a dynamical system using partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness for a wide range of application areas. Nevertheless, this filter also has limitations due to its inherent assumptions of Gaussianity and linearity, which can manifest themselves in the form of dynamically inconsistent state estimates. This issue is investigated here for balanced, slowly evolving solutions to highly oscillatory Hamiltonian systems which are prototypical for applications in numerical weather prediction. It is demonstrated that the standard ensemble Kalman filter can lead to state estimates that do not satisfy the pertinent balance relations and ultimately lead to filter divergence. Two remedies are proposed, one in terms of blended asymptotically consistent time-stepping schemes, and one in terms of minimization-based postprocessing methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for balanced motions of two prototypical Hamiltonian reference systems.
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
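As a generic illustration of adaptive MCMC (not the SWIFT-specific sampler), a random-walk Metropolis chain can tune its proposal scale on the fly toward a target acceptance rate with a Robbins-Monro recursion; the Gaussian target and all tuning constants below are illustrative.

```python
import numpy as np

def adaptive_rwm(logpi, x0, n_steps=20000, target_acc=0.44, seed=6):
    """Adaptive random-walk Metropolis: after each step, the log proposal
    scale is nudged by a decaying amount proportional to the difference
    between the realized acceptance probability and the target rate."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpi(x0)
    log_s = 0.0                               # log of the proposal scale
    out = np.empty(n_steps)
    for n in range(n_steps):
        prop = x + np.exp(log_s) * rng.standard_normal()
        lp_prop = logpi(prop)
        acc = min(1.0, np.exp(lp_prop - lp))
        if rng.random() < acc:
            x, lp = prop, lp_prop
        # Robbins-Monro adaptation with a decaying step size
        log_s += (acc - target_acc) / (n + 1) ** 0.6
        out[n] = x
    return out

mcmc_samples = adaptive_rwm(lambda x: -0.5 * x * x, x0=5.0)
# after burn-in, the chain should sample the standard Gaussian target
```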
For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Based on the argument that forward quasi-specular scattering relies upon surface gravity waves with lengths larger than several wavelengths of the reflected signal, it is commonly concluded that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section σ0 derived from TechDemoSat-1 data is reduced due to rain at weak winds, lower than approximately 6 m/s. The decrease is as large as approximately 0.7 dB at a wind speed of 3 m/s due to precipitation of 0-2 mm/hr. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
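The "standard random feature maps" baseline referred to above can be sketched as batch ridge regression on fixed random features; RAFDA itself replaces this batch fit with sequential estimation of the outer weights inside a data assimilation cycle, which is not reproduced here. The logistic map target and all parameter values are illustrative choices.

```python
import numpy as np

def random_feature_regression(x_train, y_train, D=300, reg=1e-6, seed=4):
    """Fit y ≈ W tanh(A x + b) with fixed random internal weights (A, b)
    and ridge regression for the outer weights W: the batch random feature
    maps baseline. Only W is learned; A and b stay at their random draws."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-2.0, 2.0, size=(D, 1))     # fixed internal weights
    b = rng.uniform(-1.0, 1.0, size=(D, 1))     # fixed internal biases
    phi = np.tanh(A @ x_train[None, :] + b)     # (D, n) feature matrix
    W = y_train[None, :] @ phi.T @ np.linalg.inv(phi @ phi.T + reg * np.eye(D))
    def model(x):
        return (W @ np.tanh(A * x + b)).item()
    return model

# learn the one-step propagator of the logistic map x -> 3.9 x (1 - x)
x_grid = np.linspace(0.01, 0.99, 400)
y_grid = 3.9 * x_grid * (1 - x_grid)
g = random_feature_regression(x_grid, y_grid)
```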