TDS-1 GNSS Reflectometry
(2018)
This study presents the development and systematic evaluation of GNSS reflectometry wind speeds. After establishing a wind speed retrieval algorithm, UK TechDemoSat-1 (TDS-1) derived winds from May 2015 to July 2017 are compared to the Advanced Scatterometer (ASCAT). ERA-Interim wind fields of the European Centre for Medium-range Weather Forecasts (ECMWF) and in situ observations from the Tropical Atmosphere Ocean buoy array in the Pacific are taken as reference. One-year averaged TDS-1 global winds show small differences with ECMWF in the majority of areas; regions of under- and overestimation are also discussed. The pioneering TDS-1 winds exhibit a root-mean-squared error (RMSE) and bias of 2.77 and -0.33 m/s, respectively, comparable to the corresponding ASCAT values of 2.31 and 0.25 m/s. Using buoy measurements as reference, an RMSE and bias of 2.23 and -0.03 m/s for TDS-1, and of 1.40 and -0.68 m/s for ASCAT, are obtained. Utilizing microwave-infrared rain estimates of the Tropical Rainfall Measuring Mission, rain-affected observations of both ASCAT and TDS-1 are collected and evaluated. While ASCAT winds show a significant performance degradation during rain, with an RMSE and bias of 3.16 and 1.03 m/s, respectively, TDS-1 shows a more reliable performance with an RMSE and bias of 2.94 and -0.21 m/s, respectively, which indicates the promising capability of GNSS forward scattering for wind retrievals during rain. A decrease in TDS-1-derived bistatic radar cross sections during rain events at weak winds is also demonstrated.
For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Since forward quasi-specular scattering relies upon surface gravity waves with lengths larger than several wavelengths of the reflected signal, it is commonly concluded that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section σ0 derived from TechDemoSat-1 data is reduced by rain at weak winds below approximately 6 m/s. The decrease is as large as approximately 0.7 dB at a wind speed of 3 m/s for precipitation of 0-2 mm/h. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz's 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz's theory. Aided by numerical simulations, the present work reexamines Lorenz's model and reviews both sides of the argument, paying particular attention to the role played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved, which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz's theory is reconciled.
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
The efficient time integration of the dynamic core equations for numerical weather prediction (NWP) remains a key challenge. One of the most popular methods is currently provided by implementations of the semi-implicit semi-Lagrangian (SISL) method, originally proposed by Robert (J. Meteorol. Soc. Jpn., 1982). Practical implementations of the SISL method are, however, not without certain shortcomings with regard to accuracy, conservation properties and stability. Based on recent work by Gottwald, Frank and Reich (LNCSE, Springer, 2002), Frank, Reich, Staniforth, White and Wood (Atm. Sci. Lett., 2005) and Wood, Staniforth and Reich (Atm. Sci. Lett., 2006) we propose an alternative semi-Lagrangian implementation based on a set of regularized equations and the popular Störmer-Verlet time stepping method in the context of the shallow-water equations (SWEs). Ultimately, the goal is to develop practical implementations for the 3D Euler equations that overcome some or all shortcomings of current SISL implementations.
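As a point of reference for the time stepping involved, here is a minimal sketch of the classical Störmer-Verlet (leapfrog) step for a canonical Hamiltonian system with H(q, p) = p^T p / (2m) + V(q). It illustrates only the underlying symplectic integrator, not the regularized semi-Lagrangian scheme proposed in the paper.

```python
import numpy as np

def stoermer_verlet(q, p, grad_V, dt, n_steps, m=1.0):
    """Leapfrog/Stoermer-Verlet integration of dq/dt = p/m, dp/dt = -grad_V(q)."""
    for _ in range(n_steps):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p / m             # full drift
        p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

# Example: harmonic oscillator V(q) = q^2/2; energy is conserved to O(dt^2)
# over long times, the hallmark of symplectic integration.
q, p = stoermer_verlet(np.array([1.0]), np.array([0.0]),
                       lambda q: q, dt=0.1, n_steps=1000)
```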
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters at an optimal level. The rjMCMC algorithm produces an ensemble as its result: a set of model realizations which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced that convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. Scalar, vector and matrix representations are presented for models inferred from tomographic subsurface investigations, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain statistical ensemble properties similar to those of geologic projections created by rasterization.
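For readers unfamiliar with the diagnostics involved, the sketch below implements one classic convergence assessment technique, the Gelman-Rubin potential scale reduction factor, as it could be applied to scalar conversions of the model realizations. It is a generic textbook version, not the paper's code.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for scalar chains of shape (m, n):
    m parallel chains, n post-burn-in samples each."""
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)              # values near 1 indicate convergence
```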
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
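The "gain times error" update shared by the three filters can be made concrete with a minimal Euler discretization of the deterministic ensemble Kalman-Bucy filter. The sketch below is a generic version under standard assumptions (observation operator h, observation-noise covariance R), not the paper's implementation.

```python
import numpy as np

def enkbf_step(X, f, h, y_inc, R, dt):
    """One Euler step of a deterministic ensemble Kalman-Bucy filter.

    X: ensemble, shape (N, d); f: drift; h: observation operator mapping to
    R^k; y_inc: observation increment dY over the step; R: (k, k) noise
    covariance. The update is gain times innovation ('gain times error')."""
    N = X.shape[0]
    HX = np.array([h(x) for x in X])             # (N, k)
    Xm, Hm = X.mean(0), HX.mean(0)
    C_xh = (X - Xm).T @ (HX - Hm) / (N - 1)      # cross-covariance, (d, k)
    K = C_xh @ np.linalg.inv(R)                  # Kalman-Bucy gain, (d, k)
    drift = np.array([f(x) for x in X])
    innov = y_inc - 0.5 * (HX + Hm) * dt         # deterministic innovation term
    return X + drift * dt + innov @ K.T
```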
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean field limiting equations as the ensemble size goes to infinity as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
Particle filters (also called sequential Monte Carlo methods) are widely used for state and parameter estimation problems in the context of nonlinear evolution equations. The recently proposed ensemble transform particle filter (ETPF) [S. Reich, SIAM J. Sci. Comput., 35 (2013), pp. A2013-A2024] replaces the resampling step of a standard particle filter by a linear transformation, which allows for a hybridization of particle filters with ensemble Kalman filters and renders the resulting hybrid filters applicable to spatially extended systems. However, the linear transformation step is computationally expensive and leads to an underestimation of the ensemble spread for small and moderate ensemble sizes. Here we address both of these shortcomings by developing second-order accurate extensions of the ETPF. These extensions allow one in particular to replace the exact solution of a linear transport problem by its Sinkhorn approximation. It is also demonstrated that the nonlinear ensemble transform filter arises as a special case of our general framework. We illustrate the performance of the second-order accurate filters for the chaotic Lorenz-63 and Lorenz-96 models and a dynamic scene-viewing model. The numerical results for the Lorenz-63 and Lorenz-96 models demonstrate that significant accuracy improvements can be achieved in comparison to a standard ensemble Kalman filter and the ETPF for small to moderate ensemble sizes. The numerical results for the scene-viewing model reveal, on the other hand, that second-order corrections can lead to statistically inconsistent samples from the posterior parameter distribution.
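The Sinkhorn approximation mentioned above fits in a few lines. The following generic entropy-regularized optimal transport iteration (with illustrative regularization eps and iteration count) indicates how the exact linear transport problem of the ETPF might be replaced; it is a sketch, not the filter code itself.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iter=200):
    """Entropy-regularized OT coupling between weight vectors a and b
    (each summing to one) for pairwise cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # approximate coupling matrix P

# ETPF-style use: couple importance weights w to uniform weights 1/N and
# transform the particle matrix X of shape (N, d) via X_new = (N * P).T @ X.
```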
Multisymplectic methods have recently been proposed as a generalization of symplectic ODE methods to the case of Hamiltonian PDEs. Their excellent long time behavior for a variety of Hamiltonian wave equations has been demonstrated in a number of numerical studies. A theoretical investigation and justification of multisymplectic methods is still largely missing. In this paper, we study linear multisymplectic PDEs and their discretization by means of numerical dispersion relations. It is found that multisymplectic methods in the sense of Bridges and Reich [Phys. Lett. A, 284 (2001), pp. 184-193] and Reich [J. Comput. Phys., 157 (2000), pp. 473-499], such as Gauss-Legendre Runge-Kutta methods, possess a number of desirable properties such as nonexistence of spurious roots and conservation of the sign of the group velocity. A certain CFL-type restriction on Δt/Δx might be required for methods higher than second order in time. It is also demonstrated by means of the explicit midpoint method that multistep methods may exhibit spurious roots in the numerical dispersion relation for any value of Δt/Δx despite being multisymplectic in the sense of discrete variational mechanics [J. E. Marsden, G. P. Patrick, and S. Shkoller, Commun. Math. Phys., 199 (1999), pp. 351-395].
We evaluate the Hamiltonian particle-mesh (HPM) method and the Nambu discretization applied to the shallow-water equations on the sphere using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term simulation. We also repeat the test using the ICOSWP scheme for comparison with the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern becomes dominant in simulations on the ICON grid at relatively low spatial resolutions. Nevertheless, agreement in statistics between the three schemes indicates their qualitatively similar behavior in long-term integration.
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
We consider the numerical treatment of Hamiltonian systems that contain a potential which grows large when the system deviates from the equilibrium value of the potential. Such systems arise, e.g., in molecular dynamics simulations and the spatial discretization of Hamiltonian partial differential equations. Since the presence of highly oscillatory terms in the solutions forces any explicit integrator to use very small step sizes, the numerical integration of such systems poses a challenging task. It has been suggested before to replace the strong potential by a holonomic constraint that forces the solutions to stay at the equilibrium value of the potential. This approach has, e.g., been successfully applied to bond stretching in molecular dynamics simulations. In other cases, such as bond-angle bending, this method fails due to the introduced rigidity. Here we give a careful analysis of the analytical problem by means of a smoothing operator. This leads us to the notion of the smoothed dynamics of a highly oscillatory Hamiltonian system. Based on our analysis, we suggest a new constrained formulation that maintains the flexibility of the system while at the same time suppressing the high-frequency components in the solutions, thus allowing for larger time steps. The new constrained formulation is Hamiltonian and can be discretized by the well-known SHAKE method.
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in R^n. In this paper, methods which reduce "Hamiltonian differential-algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts as well as methods based on the construction of ODE systems in the space in which the constraint manifold is embedded which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here from a stability point of view. This involves concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
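To make Baumgarte's technique concrete, the sketch below stabilizes a planar pendulum written as an ODE with the invariant g(q) = |q|^2 - L^2 = 0, replacing g'' = 0 by g'' + 2*alpha*g' + beta^2*g = 0. The parameter values alpha and beta are illustrative; their tuning is exactly the practical difficulty the paper discusses, and this is a generic textbook version rather than the improved method proposed there.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, grav, alpha, beta = 1.0, 9.81, 5.0, 5.0   # illustrative parameters

def rhs(t, z):
    """Planar pendulum with Baumgarte-stabilized constraint g = q.q - L^2."""
    q, v = z[:2], z[2:]
    F = np.array([0.0, -grav])
    g, gdot = q @ q - L**2, 2 * q @ v
    gddot_free = 2 * v @ v + 2 * q @ F
    # Choose the multiplier lam so that g'' + 2 alpha g' + beta^2 g = 0:
    lam = (gddot_free + 2 * alpha * gdot + beta**2 * g) / (4 * q @ q)
    a = F - 2 * lam * q
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 10.0), [L, 0.0, 0.0, 0.0], rtol=1e-8)
# The constraint residual q.q - L^2 stays near zero instead of drifting.
```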
Technical and physical systems, especially electronic circuits, are frequently modeled as a system of differential and nonlinear implicit equations. In the literature such systems of equations are called differential-algebraic equations (DAEs). It turns out that the numerical and analytical properties of a DAE depend on an integer called the index of the problem. For example, the well-known BDF method of Gear can be applied, in general, to a DAE only if the index does not exceed one. In this paper we give a geometric interpretation of higher-index DAEs and indicate problems arising in connection with such DAEs by means of several examples.
The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise in monitoring the ocean state and surface wind speed with high spatial coverage and unprecedented sampling rate. The L-band signals of GNSS are structurally able to provide a higher quality of observations from areas covered by dense clouds and under intense precipitation, compared to signals at higher frequencies from conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the impact of rain attenuation on GNSS-R wind speed products is not yet well documented. Evaluating the rain attenuation effects on this technique matters because a small change in the GNSS-R observations can potentially cause a considerable bias in the resultant wind products at intense wind speeds. Based on both empirical evidence and theory, wind speed is inversely proportional to the derived bistatic radar cross section through a natural logarithmic relation, which introduces high condition numbers (akin to ill-posed conditions) in the inversion to high wind speeds. This paper presents an evaluation of the impact of rain signal attenuation on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, since an empirical data analysis at extreme wind intensities and rain rates is impossible due to the insufficient number of observations from such severe conditions. The study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30 degrees, rain at rates of 10, 15, and 20 mm/h may cause overestimation as large as approximately 0.65 m/s (2%), 1.00 m/s (3%), and 1.3 m/s (4%), respectively, which is still smaller than the CYGNSS required uncertainty threshold. The simulations are conducted under pessimistic conditions (severe continuous rainfall below the freezing height and over the entire glistening zone) and the bias is expected to be smaller in real environments.
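The claimed ill-conditioning of the wind inversion can be illustrated with a toy calculation. The logarithmic model u = a - b*ln(sigma0) and its coefficients below are hypothetical stand-ins, not the retrieval model of the study; the point is only that a fixed attenuation of sigma0 maps to a much larger wind bias at high winds, where sigma0 is small.

```python
import numpy as np

a, b = 25.0, 8.0                          # illustrative GMF coefficients
sigma0 = lambda u: np.exp((a - u) / b)    # invert the logarithmic model
wind = lambda s: a - b * np.log(s)

d_sigma0 = -0.05                          # same additive attenuation of sigma0
for u in (5.0, 30.0):
    s = sigma0(u)
    print(f"u = {u:4.1f} m/s: wind bias = {wind(s + d_sigma0) - u:+.2f} m/s")
# The identical sigma0 perturbation yields roughly 0.03 m/s of bias at 5 m/s
# but almost 0.8 m/s at 30 m/s: the condition number of the inversion grows
# sharply toward strong winds.
```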
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean–Vlasov equations as the starting point to derive ensemble Kalman–Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited by their inefficiency in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, localization, and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters, ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts to develop particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, and including pseudo-code, thereby providing a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
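As a baseline for the developments surveyed, a minimal bootstrap particle filter cycle (the "standard setting" whose weight degeneracy in high dimensions motivates much of the review) might look as follows. This is a generic sketch, not code from the paper.

```python
import numpy as np

def bootstrap_pf_step(X, y, propagate, log_lik, rng):
    """One cycle of the bootstrap particle filter.

    X: particles (N, d); propagate samples from the model transition density
    (the proposal in the bootstrap filter); log_lik(x, y) is the log
    observation density."""
    X = propagate(X)                              # forecast with model dynamics
    logw = np.array([log_lik(x, y) for x in X])   # importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(X), size=len(X), p=w)    # multinomial resampling
    return X[idx]
```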
Nonlinear data assimilation
(2015)
This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters.
The first contribution, by Jan van Leeuwen, focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, converging to particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution, by Cheng and Reich, discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters, and enables a proper introduction of localization in particle filters, which has been lacking up to now.
Data assimilation
(2019)
Data assimilation addresses the general problem of how to combine model-based predictions with partial and noisy observations of the process in an optimal manner. This survey focuses on sequential data assimilation techniques using probabilistic particle-based algorithms. In addition to surveying recent developments for discrete- and continuous-time data assimilation, both in terms of mathematical foundations and algorithmic implementations, we also provide a unifying framework from the perspective of coupling of measures, and Schrödinger’s boundary value problem for stochastic processes in particular.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE constrained Bayesian inverse problem.
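A minimal sketch of one ALDI-style Euler-Maruyama step is given below. It follows the published formulation as we read it (the scaled ensemble deviations act as a square root of the empirical covariance, plus a finite-ensemble drift correction), but it should be treated as an illustration rather than the authors' reference implementation.

```python
import numpy as np

def aldi_step(X, grad_log_post, dt, rng):
    """One Euler-Maruyama step of an ALDI-style affine-invariant sampler.

    X: ensemble of shape (N, d). The empirical covariance preconditions the
    drift; its square root is realized by the scaled ensemble deviations,
    so no Cholesky factorization or matrix inverse is required."""
    N, d = X.shape
    m = X.mean(0)
    D = (X - m) / np.sqrt(N)                  # factor with D.T @ D = covariance
    C = D.T @ D                               # empirical covariance, (d, d)
    G = np.array([grad_log_post(x) for x in X])
    drift = G @ C + (d + 1) / N * (X - m)     # preconditioned drift + correction
    noise = np.sqrt(2 * dt) * rng.standard_normal((N, N)) @ D
    return X + dt * drift + noise
```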
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2015)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width (TRW) chronologies as an observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as an observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) "switching recording" of two variables and (3) bounded response windows leading to "thresholded response". We generate pseudo-TRW observations from a chaotic two-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL's nonlinearities may lead to considerable loss of assimilation skill compared to the use of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL's formulation into the framework of fuzzy logic (FL) theory, which exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL's performance as an observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
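The principle of limiting factors underlying VSL can be sketched in a few lines. The ramp shape and threshold values below are illustrative placeholders, not VSL's calibrated response curves; the sketch only shows why the min() makes the observation operator nonsmooth.

```python
import numpy as np

def ramp(x, x1, x2):
    """Partial growth response: 0 below x1, linear ramp on [x1, x2],
    saturating at 1 above x2 (illustrative VSL-style shape)."""
    return np.clip((x - x1) / (x2 - x1), 0.0, 1.0)

def growth_increment(T, M, T1=2.0, T2=12.0, M1=0.05, M2=0.3):
    """Principle of limiting factors: growth is limited by whichever of the
    temperature and moisture responses is smaller (thresholds illustrative)."""
    return np.minimum(ramp(T, T1, T2), ramp(M, M1, M2))

# A pseudo-TRW value is the sum of such increments over the growing season;
# the time averaging, variable switching, and thresholding all enter here.
```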
Interacting particle solutions of Fokker–Planck equations through gradient–log–density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and often it is inevitable to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
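The particle dynamics described above can be sketched generically. The kernel-density score estimator below is a simple stand-in for the paper's novel estimator, shown only to make the structure of the method concrete: each particle follows the probability-flow drift f(X) - D * grad log p(X).

```python
import numpy as np

def kde_score(X, h=0.3):
    """Gaussian-KDE estimate of grad log p at the particle locations X (N, d).
    (A simple stand-in for the paper's estimator, for illustration only.)"""
    diff = X[None, :, :] - X[:, None, :]            # (N, N, d): x_j - x_i
    K = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    return (K[:, :, None] * diff).sum(1) / (h**2 * K.sum(1)[:, None])

def fp_particle_step(X, drift, D, dt):
    """Deterministic particle update for dp/dt = -div(f p) + D * Laplace(p):
    dX/dt = f(X) - D * grad log p(X), with the score estimated from particles."""
    return X + dt * (drift(X) - D * kde_score(X))
```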
The subject of this paper is the relation of differential-algebraic equations (DAEs) to vector fields on manifolds. For that reason, we introduce the notion of a regular DAE as a DAE to which a vector field uniquely corresponds. Furthermore, a technique is described which yields a family of manifolds for a given DAE. This so-called family of constraint manifolds allows in turn the formulation of sufficient conditions for the regularity of a DAE and the definition of the index of a regular DAE. We also state a method for the reduction of higher-index DAEs to lower-index ones that can be solved without introducing additional constants of integration. Finally, the notion of realizability of a given vector field by a regular DAE is introduced, and it is shown that any vector field can be realized by a regular DAE. Throughout this paper the problem of path-tracing is discussed as an illustration of the mathematical phenomena.
In this paper, we show that symplectic partitioned Runge-Kutta methods conserve momentum maps corresponding to linear symmetry groups acting on the phase space of Hamiltonian differential equations by extended point transformation. We also generalize this result to constrained systems and show how this conservation property relates to the symplectic integration of Lie-Poisson systems on certain submanifolds of the general matrix group GL(n).
A theoretical framework for the investigation of the qualitative behavior of differential-algebraic equations (DAEs) near an equilibrium point is established. The key notion of our approach is the notion of regularity. A DAE is called regular locally around an equilibrium point if there is a unique vector field such that the solutions of the DAE and the vector field are in one-to-one correspondence in a neighborhood of this equilibrium point. Sufficient conditions for the regularity of an equilibrium point are stated. This in turn allows us to translate several local results, as formulated for vector fields, to DAEs that are regular locally around a given equilibrium point (e.g. Local Stable and Unstable Manifold Theorem, Hopf theorem). It is important that these theorems are stated in terms of the given problem and not in terms of the corresponding vector field.
An existence and uniqueness theory is developed for general nonlinear and nonautonomous differential-algebraic equations (DAEs) by exploiting their underlying differential-geometric structure. A DAE is called regular if there is a unique nonautonomous vector field such that the solutions of the DAE and the solutions of the vector field are in one-to-one correspondence. Sufficient conditions for regularity of a DAE are derived in terms of constrained manifolds. Based on this differential-geometric characterization, existence and uniqueness results are stated for regular DAEs. Furthermore, our notions are compared with techniques frequently used in the literature such as index and solvability. The results are illustrated in detail by means of a simple circuit example.
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework, a neural network consisting of random feature maps is trained sequentially by incoming observations within a data assimilation procedure. By employing Takens's embedding theorem, the network is trained on delay coordinates. We show that the combination of random feature maps and data assimilation, called RAFDA, outperforms standard random feature maps for which the dynamics is learned using batch data.
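A batch sketch of the random feature map surrogate is given below. RAFDA's sequential EnKF-based training is replaced here by plain ridge regression, and all sizes and scales are illustrative; the sketch only shows the structure "fixed random features, learned linear readout".

```python
import numpy as np

rng = np.random.default_rng(1)

def make_features(d, D_r=300, scale=1.0):
    """Random feature map phi(x) = tanh(W x + b) with fixed random W, b."""
    W = scale * rng.standard_normal((D_r, d))
    b = rng.uniform(-np.pi, np.pi, D_r)
    return lambda X: np.tanh(X @ W.T + b)

def fit_propagator(X_now, X_next, phi, reg=1e-4):
    """Ridge-regress the outer weights so that X_next ~ phi(X_now) @ A.T.
    (Batch surrogate for the sequential, EnKF-based training of RAFDA.)"""
    F = phi(X_now)                                   # (T, D_r) feature matrix
    A = np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), F.T @ X_next).T
    return lambda x: phi(x) @ A.T                    # learned propagator map
```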
The problem of an ensemble Kalman filter when only partial observations are available is considered. In particular, the situation is investigated where the observational space consists of variables that are directly observable with known observational error, and of variables of which only their climatic variance and mean are given. To limit the variance of the latter poorly resolved variables, a variance-limiting Kalman filter (VLKF) is derived in a variational setting. The VLKF for a simple linear toy model is analyzed and its range of optimal performance is determined. The VLKF is explored in an ensemble transform setting for the Lorenz-96 system, and it is shown that incorporating the information on the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those to be expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models and their inherent uncertainty with incoming noisy observations. The key idea in our work here is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We combine the physics-agnostic data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closure in multi-scale systems.
Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is possible even for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates model variants that produced indistinguishable predictions on the statistics used hitherto. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
Bayesian inference can be embedded into an appropriately defined dynamics in the space of probability measures. In this paper, we take Brownian motion and its associated Fokker-Planck equation as a starting point for such embeddings and explore several interacting particle approximations. More specifically, we consider both deterministic and stochastic interacting particle systems and combine them with the idea of preconditioning by the empirical covariance matrix. In addition to leading to affine invariant formulations which asymptotically speed up convergence, preconditioning allows for gradient-free implementations in the spirit of the ensemble Kalman filter. While such gradient-free implementations have been demonstrated to work well for posterior measures that are nearly Gaussian, we extend their scope of applicability to multimodal measures by introducing localized gradient-free approximations. Numerical results demonstrate the effectiveness of the considered methodologies.
Data assimilation algorithms are used to estimate the states of a dynamical system using partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness for a wide range of application areas. Nevertheless, this filter also has limitations due to its inherent assumptions of Gaussianity and linearity, which can manifest themselves in the form of dynamically inconsistent state estimates. This issue is investigated here for balanced, slowly evolving solutions to highly oscillatory Hamiltonian systems which are prototypical for applications in numerical weather prediction. It is demonstrated that the standard ensemble Kalman filter can lead to state estimates that do not satisfy the pertinent balance relations and ultimately lead to filter divergence. Two remedies are proposed, one in terms of blended asymptotically consistent time-stepping schemes, and one in terms of minimization-based postprocessing methods. The effects of these modifications to the standard ensemble Kalman filter are discussed and demonstrated numerically for balanced motions of two prototypical Hamiltonian reference systems.
Global numerical weather prediction (NWP) models have begun to resolve the mesoscale k^(-5/3) range of the energy spectrum, which is known to impose an inherently finite range of deterministic predictability per se, as errors develop more rapidly on these scales than on the larger scales. However, the dynamics of these errors under the influence of the synoptic-scale k^(-3) range is little studied. Within a perfect-model context, the present work examines the error growth behavior under such a hybrid spectrum in Lorenz's original model of 1969, and in a series of identical-twin perturbation experiments using an idealized two-dimensional barotropic turbulence model at a range of resolutions. With the typical resolution of today's global NWP ensembles, error growth remains largely uniform across scales. The theoretically expected fast error growth characteristic of a k^(-5/3) spectrum is seen to be largely suppressed in the first decade of the mesoscale range by the synoptic-scale k^(-3) range. However, it emerges once models become fully able to resolve features on something like a 20-km scale, which corresponds to a grid resolution on the order of a few kilometers.
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
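For concreteness, a generic GHMC step with the standard momentum flip on rejection is sketched below. The paper's modified detailed balance condition corresponds to avoiding the flip; here that choice is only indicated by a flag, without the accompanying correction derived in the paper, and the step sizes and mixing angle phi are illustrative.

```python
import numpy as np

def ghmc_step(q, p, U, grad_U, dt, n_leap, phi, rng, flip_on_reject=True):
    """One GHMC step: partial momentum refreshment, leapfrog proposal,
    Metropolis test. flip_on_reject=True is the standard detailed-balance
    choice; the paper derives a modified condition that avoids the flip.
    q, p are flat arrays; unit mass is assumed."""
    # Partial momentum refreshment (exact for the Gaussian momentum prior).
    p = np.cos(phi) * p + np.sin(phi) * rng.standard_normal(p.shape)
    H0 = U(q) + 0.5 * p @ p
    qn, pn = q.copy(), p.copy()
    pn -= 0.5 * dt * grad_U(qn)                      # leapfrog proposal
    for _ in range(n_leap - 1):
        qn += dt * pn
        pn -= dt * grad_U(qn)
    qn += dt * pn
    pn -= 0.5 * dt * grad_U(qn)
    H1 = U(qn) + 0.5 * pn @ pn
    if rng.random() < np.exp(min(0.0, H0 - H1)):
        return qn, pn                                # accept
    return q, (-p if flip_on_reject else p)         # reject (flip optional)
```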
We present a Monte Carlo technique for sampling from the canonical distribution in molecular dynamics. The method is built upon the Nosé-Hoover constant temperature formulation and the generalized hybrid Monte Carlo method. In contrast to standard hybrid Monte Carlo methods, only the thermostat degree of freedom is stochastically resampled during a Monte Carlo step.
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
We consider the problem of propagating an ensemble of solutions and its characterization in terms of its mean and covariance matrix. We propose differential equations that lead to a continuous matrix factorization of the ensemble into a generalized singular value decomposition (SVD). The continuous factorization is applied to ensemble propagation under periodic rescaling (ensemble breeding) and under periodic Kalman analysis steps (ensemble Kalman filter). We also use the continuous matrix factorization to perform a re-orthogonalization of the ensemble after each time-step and apply the resulting modified ensemble propagation algorithm to the ensemble Kalman filter. Results from the Lorenz-96 model indicate that the re-orthogonalization of the ensembles leads to improved filter performance.
We introduce a new mixed finite element for solving the 2- and 3-dimensional wave equations and equations of incompressible flow. The element, which we refer to as P1DG-P2, uses discontinuous piecewise linear functions for velocity and continuous piecewise quadratic functions for pressure. The aim of introducing the mixed formulation is to produce a new flexible element choice for triangular and tetrahedral meshes which satisfies the LBB stability condition and hence has no spurious zero-energy modes. The advantage of this particular element choice is that the mass matrix for velocity is block diagonal, so it can be trivially inverted; it also allows the order of the pressure to be increased to quadratic whilst maintaining LBB stability, which has benefits in geophysical applications with Coriolis forces. We give a normal mode analysis of the semi-discrete wave equation in one dimension, which shows that the element pair is stable; we demonstrate stability with numerical integrations of the wave equation in two dimensions; and we analyse the resulting discrete Laplace operator in two and three dimensions on various meshes, showing that the element pair does not have any spurious modes. We provide convergence tests for the element pair which confirm that the element is stable, since the convergence rate of the numerical solution is quadratic.
Ensemble Kalman filter techniques are widely used to assimilate observations into dynamical models. The phase-space dimension is typically much larger than the number of ensemble members, which leads to inaccurate results in the computed covariance matrices. These inaccuracies can lead, among other things, to spurious long-range correlations, which can be eliminated by Schur-product-based localization techniques. In this article, we propose a new technique for implementing such localization techniques within the class of ensemble transform/square-root Kalman filters. Our approach relies on a continuous embedding of the Kalman filter update for the ensemble members, i.e. we state an ordinary differential equation (ODE) with solutions that, over a unit time interval, are equivalent to the Kalman filter update. The ODE formulation forms a gradient system with the observations as a cost functional. Besides localization, the new ODE ensemble formulation should also find useful application in the context of nonlinear observation operators and observations that arrive continuously in time.
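Schur-product localization itself is standard and easily sketched. The example below tapers an ensemble covariance with the Gaspari-Cohn compactly supported correlation function, a common choice; it does not reproduce the paper's ODE-based ensemble-transform formulation.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn correlation; r = distance / localization radius.
    Compactly supported: identically zero for r >= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    gc = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    gc[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - 5/3*x**2 + 1
    x = r[m2]
    gc[m2] = x**5/12 - 0.5*x**4 + 0.625*x**3 + 5/3*x**2 - 5*x + 4 - 2/(3*x)
    return gc

def localized_cov(X, dist, loc_radius):
    """Schur (elementwise) product of the ensemble covariance with the taper.
    X: ensemble (N, d); dist: (d, d) matrix of pairwise grid distances."""
    P = np.cov(X, rowvar=False)
    return P * gaspari_cohn(dist / loc_radius)
```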
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high-frequency adjustment processes in the model dynamics. Various methods have been devised to spread out the analysis increments continuously over a fixed time interval centred about the analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
The paper provides an introduction and survey of conservative discretization methods for Hamiltonian partial differential equations. The emphasis is on variational, symplectic and multi-symplectic methods. The derivation of methods as well as some of their fundamental geometric properties are discussed. Basic principles are illustrated by means of examples from wave and fluid dynamics.
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first, with a small contribution, is computed on a coarse level, and the rest, for strong interactions, on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is further enhanced by combining the multigrid with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of the solution and the computational efficiency depend on a given cut-off radius that splits the multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.