Filter
Publication year
Document type
- Scientific article (53)
- Postprint (13)
- Monograph/edited volume (1)
- Review (1)
Language
- English (68)
Keywords
- data assimilation (7)
- ensemble Kalman filter (7)
- Bayesian inference (4)
- Data assimilation (3)
- GNSS Reflectometry (3)
- gradient flow (3)
- localization (3)
- wind speed (3)
- DDM simulation (2)
- Ensemble Kalman filter (2)
- Fokker-Planck equation (2)
- continuous-time data assimilation (2)
- correlated noise (2)
- eye movements (2)
- multi-scale diffusion processes (2)
- multiplicative noise (2)
- nonlinear filtering (2)
- optimal transport (2)
- parameter estimation (2)
- rain attenuation (2)
- rain effect (2)
- sequential data assimilation (2)
- Advanced scatterometer (ASCAT) (1)
- Atmosphere (1)
- Bayesian inverse problems (1)
- COVID-19 (1)
- CRPS (1)
- Data-driven modelling (1)
- Dynamical systems (1)
- Ensemble Kalman (1)
- Ensemble Kalman Filter (1)
- Error analysis (1)
- European Centre for Medium-Range Weather Forecasts (ECMWF) (1)
- Force splitting (1)
- Fuzzy logic (1)
- GNSS forward scatterometry (1)
- GNSS reflectometry (1)
- Gaussian kernel estimators (1)
- Gaussian mixtures (1)
- Generalized hybrid Monte Carlo (1)
- Hamiltonian dynamics (1)
- Kalman Bucy filter (1)
- Kalman filter (1)
- Kalman-Bucy Filter (1)
- Lagrangian modeling (1)
- Lagrangian modelling (1)
- Lagrangian-averaged equations (1)
- Langevin dynamics (1)
- MCMC (1)
- MCMC modelling (1)
- McKean-Vlasov (1)
- Modified Hamiltonians (1)
- Molecular dynamics (1)
- Mollification (1)
- Monte Carlo method (1)
- Multigrid (1)
- Multiple time stepping (1)
- NWP (1)
- Nonlinear filters (1)
- Numerical weather prediction (1)
- Optimal transportation (1)
- Paleoclimate reconstruction (1)
- Poincare inequality (1)
- Proxy forward modeling (1)
- RMSE (1)
- Random feature maps (1)
- Sequential data assimilation (1)
- Sinkhorn approximation (1)
- Spectral analysis (1)
- Stochastic epidemic model (1)
- Stormer-Verlet method (1)
- Strike-slip fault model (1)
- TDS-1 (1)
- TechDemoSat-1 (TDS-1) (1)
- Turbulence (1)
- accuracy (1)
- adaptive (1)
- affine (1)
- affine invariance (1)
- asymptotic behavior (1)
- balanced dynamics (1)
- canonical discretization schemes (1)
- chemistry (1)
- climate reconstructions (1)
- co-limitation (1)
- conservative discretization (1)
- constrained Hamiltonian systems (1)
- convergence assessment (1)
- differential-algebraic equations (1)
- distribution (1)
- dynamical model (1)
- dynamical models (1)
- electromagnetic scattering (1)
- ensemble (1)
- ensembles (1)
- feedback particle filter (1)
- filter (1)
- fluid mechanics (1)
- forecasting (1)
- framework (1)
- fuzzy logic (1)
- geophysics (1)
- gradient-free (1)
- gradient-free sampling methods (1)
- high resolution paleoclimatology (1)
- highly (1)
- holonomic constraints (1)
- hybrids (1)
- hydrostatic atmosphere (1)
- idealised turbulence (1)
- interacting particle systems (1)
- interacting particles (1)
- interindividual differences (1)
- invariance (1)
- likelihood (1)
- likelihood function (1)
- limiting factors (1)
- linear programming (1)
- linearly implicit time stepping methods (1)
- mean-field equations (1)
- mesoscale forecasting (1)
- model comparison (1)
- model fitting (1)
- models (1)
- multilevel Monte Carlo (1)
- non-dissipative regularisations (1)
- nonlinear data assimilation (1)
- numerical analysis/modeling (1)
- numerical weather prediction (1)
- numerical weather prediction/forecasting (1)
- ocean surface (1)
- oscillatory systems (1)
- paleoclimate reconstruction (1)
- particle filter (1)
- particle filters (1)
- proposal densities (1)
- proxy forward modeling (1)
- rain detection (1)
- rain splash (1)
- reading (1)
- reanalysis (1)
- regularization (1)
- resampling (1)
- saccades (1)
- semi-Lagrangian method (1)
- shallow-water equations (1)
- short-range prediction (1)
- smoother (1)
- sparse proxy data (1)
- spread correction (1)
- stability (1)
- stiff ODE (1)
- stochastic differential equations (1)
- stochastic systems (1)
- symplectic methods (1)
- temporal discretization (1)
- transdimensional inversion (1)
- transformations (1)
- variability (1)
- verification (1)
- weight-based formulations (1)
- well-posedness (1)
TDS-1 GNSS Reflectometry
(2018)
This study presents the development and a systematic evaluation of GNSS reflectometry wind speeds. After establishing a wind speed retrieval algorithm, UK TechDemoSat-1 (TDS-1) derived winds from May 2015 to July 2017 are compared to the Advanced Scatterometer (ASCAT). ERA-Interim wind fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) and in situ observations from the Tropical Atmosphere Ocean buoy array in the Pacific are taken as reference. One-year averaged TDS-1 global winds show small differences with ECMWF in a majority of areas; regions of under- and overestimation are also discussed. The pioneering TDS-1 winds demonstrate a root-mean-square error (RMSE) and bias of 2.77 and -0.33 m/s, which are comparable to the RMSE and bias of ASCAT winds, at 2.31 and 0.25 m/s, respectively. Using buoy measurements as reference, RMSE and bias of 2.23 and -0.03 m/s for TDS-1 as well as 1.40 and -0.68 m/s for ASCAT are obtained. Utilizing microwave-infrared rain estimates of the Tropical Rainfall Measuring Mission, rain-affected observations of both ASCAT and TDS-1 are collected and evaluated. Whereas ASCAT winds show a significant performance degradation during rain, resulting in an RMSE and bias of 3.16 and 1.03 m/s, respectively, TDS-1 shows a more reliable performance with an RMSE and bias of 2.94 and -0.21 m/s, respectively, which indicates the promising capability of GNSS forward scattering for wind retrievals during rain. A decrease in TDS-1-derived bistatic radar cross sections during rain events at weak winds is also demonstrated.
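The headline scores above are the standard root-mean-square error and mean bias; a minimal sketch (with invented sample values, not the TDS-1 or ASCAT data):

```python
import numpy as np

def rmse_and_bias(retrieved, reference):
    """Root-mean-square error and mean bias (m/s) of retrieved winds
    against a reference data set."""
    diff = np.asarray(retrieved, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(diff))

# Invented sample values, purely for illustration:
r, b = rmse_and_bias([5.0, 7.0, 9.0], [6.0, 7.0, 8.0])
# r = sqrt(2/3) ~ 0.816 m/s, b = 0.0 m/s
```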
For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Based on the argument that forward quasi-specular scattering relies upon surface gravity waves with lengths larger than several wavelengths of the reflected signal, it is commonly concluded that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section sigma(0) derived from TechDemoSat-1 data is reduced due to rain at weak winds below approximately 6 m/s. The decrease is as large as approximately 0.7 dB at a wind speed of 3 m/s for precipitation of 0-2 mm/hr. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz’s 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz’s theory. Aided by numerical simulations, the present work reexamines Lorenz’s model and reviews both sides of the argument, paying particular attention to the roles played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved—which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz’s theory is reconciled.
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
The efficient time integration of the dynamic core equations for numerical weather prediction (NWP) remains a key challenge. One of the most popular methods is currently provided by implementations of the semi-implicit semi-Lagrangian (SISL) method, originally proposed by Robert (J. Meteorol. Soc. Jpn., 1982). Practical implementations of the SISL method are, however, not without certain shortcomings with regard to accuracy, conservation properties and stability. Based on recent work by Gottwald, Frank and Reich (LNCSE, Springer, 2002), Frank, Reich, Staniforth, White and Wood (Atm. Sci. Lett., 2005) and Wood, Staniforth and Reich (Atm. Sci. Lett., 2006), we propose an alternative semi-Lagrangian implementation based on a set of regularized equations and the popular Stormer-Verlet time stepping method in the context of the shallow-water equations (SWEs). Ultimately, the goal is to develop practical implementations for the 3D Euler equations that overcome some or all shortcomings of current SISL implementations.
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the model parameters on an optimal level. The rjMCMC algorithm produces an ensemble as its result: a set of model realizations which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced that convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. Scalar, vector and matrix representations of the models, inferred from tomographic subsurface investigations, are presented, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models can retain statistical ensemble properties similar to those of geologic projections created by rasterization.
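The dimension-matching idea can be sketched as follows. The `to_scalar` conversion below (nearest-node evaluation of a variable-dimension model on a fixed grid, then averaging) is a hypothetical stand-in for the paper's conversions, and the classic Gelman-Rubin R-hat stands in for the three diagnostics applied in the study:

```python
import numpy as np

def to_scalar(model_nodes, grid):
    """Hypothetical conversion of a variable-dimension model (a list of
    (position, value) nodes) to a scalar: evaluate on a fixed grid by
    nearest-node lookup and average."""
    pos = np.array([p for p, _ in model_nodes])
    val = np.array([v for _, v in model_nodes])
    nearest = np.abs(grid[:, None] - pos[None, :]).argmin(axis=1)
    return float(val[nearest].mean())

def gelman_rubin(chains):
    """Classic Gelman-Rubin R-hat for equal-length scalar chains;
    values close to 1 indicate convergence."""
    chains = np.asarray(chains, float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    return float(np.sqrt(((n - 1) / n * W + B / n) / W))

grid = np.linspace(0.0, 1.0, 11)
# Two toy transdimensional realizations with different node counts map
# to comparable scalars:
s1 = to_scalar([(0.2, 1.0), (0.8, 3.0)], grid)
s2 = to_scalar([(0.5, 2.0)], grid)
rhat = gelman_rubin([[1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0]])
```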
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain-times-error formula used to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides error correction, potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. The performance of this and other algorithms is illustrated for a numerical example.
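For a scalar linear-Gaussian model, the gain-times-error update common to these filters can be sketched with a deterministic ensemble Kalman-Bucy filter; the parameters and Euler-Maruyama discretization below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear-Gaussian model (illustrative parameters):
#   dx = a x dt + sqrt(q) dB,   dy = h x dt + sqrt(r) dV
a, q, h, r = -0.5, 0.1, 1.0, 0.05
dt, nsteps, nens = 0.01, 1000, 50

x_true = 1.0
X = rng.normal(1.0, 0.5, nens)                 # initial ensemble

for _ in range(nsteps):
    x_true += a * x_true * dt + np.sqrt(q * dt) * rng.normal()
    dy = h * x_true * dt + np.sqrt(r * dt) * rng.normal()  # observation increment
    m, c = X.mean(), X.var()                   # empirical mean and covariance
    K = c * h / r                              # gain  C H^T R^{-1}
    innovation = dy - h * 0.5 * (X + m) * dt   # the "error" in gain-times-error
    X = X + a * X * dt + np.sqrt(q * dt) * rng.normal(size=nens) + K * innovation

err = abs(X.mean() - x_true)                   # ensemble mean tracks the truth
```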
The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman-Bucy filter applied to continuous-time filtering problems. We derive mean-field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman-Bucy filter is consistent with the classic Kalman-Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
Particle filters (also called sequential Monte Carlo methods) are widely used for state and parameter estimation problems in the context of nonlinear evolution equations. The recently proposed ensemble transform particle filter (ETPF) [S. Reich, SIAM J. Sci. Comput., 35 (2013), pp. A2013-A2024] replaces the resampling step of a standard particle filter by a linear transformation which allows for a hybridization of particle filters with ensemble Kalman filters and renders the resulting hybrid filters applicable to spatially extended systems. However, the linear transformation step is computationally expensive and leads to an underestimation of the ensemble spread for small and moderate ensemble sizes. Here we address both of these shortcomings by developing second-order accurate extensions of the ETPF. These extensions allow one in particular to replace the exact solution of a linear transport problem by its Sinkhorn approximation. It is also demonstrated that the nonlinear ensemble transform filter arises as a special case of our general framework. We illustrate the performance of the second-order accurate filters for the chaotic Lorenz-63 and Lorenz-96 models and a dynamic scene-viewing model. The numerical results for the Lorenz-63 and Lorenz-96 models demonstrate that significant accuracy improvements can be achieved in comparison to a standard ensemble Kalman filter and the ETPF for small to moderate ensemble sizes. The numerical results for the scene-viewing model reveal, on the other hand, that second-order corrections can lead to statistically inconsistent samples from the posterior parameter distribution.
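The Sinkhorn-based transform step can be sketched in one dimension. The regularization parameter, iteration count and toy likelihood below are invented for illustration; note that the transformed equal-weight ensemble reproduces the importance-weighted mean because the coupling's row sums match the weights:

```python
import numpy as np

def sinkhorn_coupling(w, M, cost, eps=0.1, iters=1000):
    """Entropy-regularized optimal-transport coupling with row sums w
    and column sums 1/M, computed by Sinkhorn iterations."""
    K = np.exp(-cost / eps)
    u = np.ones(M)
    for _ in range(iters):
        v = (1.0 / M) / (K.T @ u)      # match column marginals
        u = w / (K @ v)                # match row marginals
    return u[:, None] * K * v[None, :]

x = np.array([-1.0, 0.0, 1.0, 2.0])    # 1-D ensemble
M = x.size
logw = -0.5 * (x - 1.0) ** 2           # toy log-likelihood
w = np.exp(logw - logw.max())
w /= w.sum()                           # importance weights
cost = (x[:, None] - x[None, :]) ** 2  # squared-distance transport cost
T = sinkhorn_coupling(w, M, cost)
x_new = M * (T.T @ x)                  # equal-weight transformed ensemble
```

Because the iteration ends on the row update, the row marginals hold to round-off, so `x_new.mean()` equals the weighted mean `w @ x` regardless of the remaining Sinkhorn error in the column marginals.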
Multisymplectic methods have recently been proposed as a generalization of symplectic ODE methods to the case of Hamiltonian PDEs. Their excellent long-time behavior for a variety of Hamiltonian wave equations has been demonstrated in a number of numerical studies. A theoretical investigation and justification of multisymplectic methods is still largely missing. In this paper, we study linear multisymplectic PDEs and their discretization by means of numerical dispersion relations. It is found that multisymplectic methods in the sense of Bridges and Reich [Phys. Lett. A, 284 (2001), pp. 184-193] and Reich [J. Comput. Phys., 157 (2000), pp. 473-499], such as Gauss-Legendre Runge-Kutta methods, possess a number of desirable properties such as nonexistence of spurious roots and conservation of the sign of the group velocity. A certain CFL-type restriction on Δt/Δx might be required for methods higher than second order in time. It is also demonstrated by means of the explicit midpoint method that multistep methods may exhibit spurious roots in the numerical dispersion relation for any value of Δt/Δx despite being multisymplectic in the sense of discrete variational mechanics [J. E. Marsden, G. P. Patrick, and S. Shkoller, Commun. Math. Phys., 199 (1999), pp. 351-395].
We evaluate the Hamiltonian particle method (HPM) and the Nambu discretization applied to the shallow-water equations on the sphere using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term simulation. We also repeat the test using the ICOSWP scheme for comparison with the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern is dominant in the simulations on the ICON grid at relatively low spatial resolutions. Nevertheless, agreement in statistics between the three schemes indicates their qualitatively similar behavior in long-term integration.
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
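The recipe of stabilizing an ODE with an invariant via orthogonal projection can be illustrated on a toy harmonic oscillator whose invariant is q^2 + p^2; this is a sketch of the general idea, not the paper's algorithm:

```python
import numpy as np

def euler_step(q, p, dt):
    return q + dt * p, p - dt * q          # harmonic oscillator q' = p, p' = -q

def project(q, p, I0):
    """Orthogonal projection back onto the invariant circle q^2 + p^2 = I0."""
    s = np.sqrt(I0 / (q * q + p * p))
    return s * q, s * p

dt, n = 0.01, 5000
q, p = 1.0, 0.0
I0 = q * q + p * p
qs, ps = q, p                              # unstabilized comparison run
for _ in range(n):
    q, p = project(*euler_step(q, p, dt), I0)
    qs, ps = euler_step(qs, ps, dt)

drift_stab = abs(q * q + p * p - I0)       # round-off level
drift_raw = abs(qs * qs + ps * ps - I0)    # forward Euler inflates the invariant
```

Forward Euler multiplies the invariant by (1 + dt^2) every step, so the raw run drifts visibly, while the projected scheme keeps the invariant to round-off.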
We consider the numerical treatment of Hamiltonian systems that contain a potential which grows large when the system deviates from the equilibrium value of the potential. Such systems arise, e.g., in molecular dynamics simulations and the spatial discretization of Hamiltonian partial differential equations. Since the presence of highly oscillatory terms in the solutions forces any explicit integrator to use very small step sizes, the numerical integration of such systems poses a challenging task. It has been suggested before to replace the strong potential by a holonomic constraint that forces the solutions to stay at the equilibrium value of the potential. This approach has, e.g., been successfully applied to bond stretching in molecular dynamics simulations. In other cases, such as bond-angle bending, this method fails due to the introduced rigidity. Here we give a careful analysis of the analytical problem by means of a smoothing operator. This leads us to the notion of the smoothed dynamics of a highly oscillatory Hamiltonian system. Based on our analysis, we suggest a new constrained formulation that maintains the flexibility of the system while at the same time suppressing the high-frequency components in the solutions, thus allowing for larger time steps. The new constrained formulation is Hamiltonian and can be discretized by the well-known SHAKE method.
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in R^n. In this paper, methods which reduce "Hamiltonian differential-algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts, as well as methods based on the construction of ODE systems in the space in which the constraint manifold is embedded which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here from a stability point of view. This involves the concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in the case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
Technical and physical systems, especially electronic circuits, are frequently modeled as a system of differential and nonlinear implicit equations. In the literature such systems of equations are called differential-algebraic equations (DAEs). It turns out that the numerical and analytical properties of a DAE depend on an integer called the index of the problem. For example, the well-known BDF method of Gear can be applied, in general, to a DAE only if the index does not exceed one. In this paper we give a geometric interpretation of higher-index DAEs and indicate problems arising in connection with such DAEs by means of several examples.
The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise in monitoring the ocean state and surface wind speed with high spatial coverage and unprecedented sampling rate. The L-band signals of GNSS are structurally able to provide a higher quality of observations from areas covered by dense clouds and under intense precipitation, compared to signals at higher frequencies from conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the rain attenuation impact on GNSS-R wind speed products is not yet well documented. Evaluating the rain attenuation effects on this technique is significant since a small change in the GNSS-R observables can potentially cause a considerable bias in the resultant wind products at intense wind speeds. Based on both empirical evidence and theory, wind speed is inversely proportional to the derived bistatic radar cross section with a natural logarithmic relation, which introduces high condition numbers (akin to ill-posed conditions) in the inversion at high wind speeds. This paper presents an evaluation of the rain signal attenuation impact on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, considering that an empirical data analysis at extreme wind intensities and rain rates is impossible due to the insufficient number of observations from such severe conditions.
Finally, the study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30 degrees, rain at rates of 10, 15, and 20 mm/h might cause overestimation as large as approximately 0.65 m/s (2%), 1.00 m/s (3%), and 1.30 m/s (4%), respectively, which is still smaller than the CYGNSS required uncertainty threshold. The simulations assume pessimistic conditions (severe continuous rainfall below the freezing height and over the entire glistening zone), and the bias is expected to be smaller in real environments.
The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean–Vlasov equations as the starting point to derive ensemble Kalman–Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization, and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters, ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of current efforts to develop particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, providing pseudo-code, and offering a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
Nonlinear data assimilation
(2015)
This book contains two review articles on nonlinear data assimilation that deal with closely related topics but were written and can be read independently. Both contributions focus on so-called particle filters.
The first contribution, by Jan van Leeuwen, focuses on the potential of proposal densities. It discusses the issues with present-day particle filters and explores new ideas for proposal densities to solve them, arriving at particle filters that work well in systems of any dimension, and closes with a high-dimensional example. The second contribution, by Cheng and Reich, discusses a unified framework for ensemble-transform particle filters. This allows one to bridge successful ensemble Kalman filters with fully nonlinear particle filters and enables a proper introduction of localization into particle filters, which has been lacking up to now.
Data assimilation
(2019)
Data assimilation addresses the general problem of how to combine model-based predictions with partial and noisy observations of the process in an optimal manner. This survey focuses on sequential data assimilation techniques using probabilistic particle-based algorithms. In addition to surveying recent developments for discrete- and continuous-time data assimilation, both in terms of mathematical foundations and algorithmic implementations, we also provide a unifying framework from the perspective of coupling of measures, and Schrödinger’s boundary value problem for stochastic processes in particular.
We propose a computational method (with acronym ALDI) for sampling from a given target distribution based on first-order (overdamped) Langevin dynamics which satisfies the property of affine invariance. The central idea of ALDI is to run an ensemble of particles with their empirical covariance serving as a preconditioner for their underlying Langevin dynamics. ALDI does not require taking the inverse or square root of the empirical covariance matrix, which enables application to high-dimensional sampling problems. The theoretical properties of ALDI are studied in terms of nondegeneracy and ergodicity. Furthermore, we study its connections to diffusion on Riemannian manifolds and Wasserstein gradient flows. Bayesian inference serves as a main application area for ALDI. In the case of a forward problem with additive Gaussian measurement errors, ALDI allows for a gradient-free approximation in the spirit of the ensemble Kalman filter. A computational comparison between gradient-free and gradient-based ALDI is provided for a PDE-constrained Bayesian inverse problem.
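A bare-bones sketch of square-root-free, covariance-preconditioned ensemble Langevin dynamics in the spirit of ALDI, for a standard Gaussian target; the step size, ensemble size and target are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_log_pi(x):
    return -x                              # standard Gaussian target

d, N, dt, nsteps = 2, 100, 0.01, 3000
X = rng.normal(size=(N, d)) + 3.0          # ensemble started off-target

for _ in range(nsteps):
    m = X.mean(axis=0)
    dev = X - m
    C = dev.T @ dev / N                    # empirical covariance preconditioner
    X = (X
         + dt * grad_log_pi(X) @ C         # C grad log pi (C is symmetric)
         + dt * (d + 1) / N * dev          # finite-ensemble correction term
         + np.sqrt(2.0 * dt / N) * rng.normal(size=(N, N)) @ dev)  # sqrt-free noise

m_final = X.mean(axis=0)
C_final = (X - m_final).T @ (X - m_final) / N
ev = np.linalg.eigvalsh(C_final)           # should be near 1 (identity target)
```

The noise is built from ensemble deviations, so neither an inverse nor a matrix square root of the covariance is ever formed.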
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2015)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of 2 variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic 2-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
Interacting particle solutions of Fokker–Planck equations through gradient–log–density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and one must often resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
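The idea can be sketched in one dimension; here a plain kernel-density estimator stands in for the paper's own (different, more accurate) gradient-log-density estimator, and all names are illustrative:

```python
import numpy as np

def grad_log_kde(x, h):
    # Estimate of grad log rho at the particle positions, using a
    # 1-D Gaussian kernel density estimator with bandwidth h.
    diff = x[:, None] - x[None, :]
    K = np.exp(-diff**2 / (2 * h**2))
    return -(diff / h**2 * K).sum(axis=1) / K.sum(axis=1)

def fp_particle_step(x, grad_V, dt, h):
    # Deterministic interacting-particle update for the Fokker-Planck
    # equation  d(rho)/dt = div(rho grad V) + Laplace(rho),
    # realized as the particle ODE  dx/dt = -grad V(x) - grad log rho(x).
    return x - dt * (grad_V(x) + grad_log_kde(x, h))

# toy usage: potential V(x) = x^2/2, stationary density close to N(0, 1)
x = np.linspace(-4.0, 4.0, 200)
for _ in range(1500):
    x = fp_particle_step(x, lambda z: z, 0.02, 0.3)
```

The gradient-log-density term acts as a deterministic surrogate for the diffusion, so no random numbers are needed and the statistics fluctuate less than in a direct stochastic simulation.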
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis corrected constant energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display a favorable behavior in terms of sampling efficiency, i.e., the traditional GHMC/GSHMC implementations with momentum flip have the advantage of a higher acceptance rate and faster decorrelation of Monte Carlo samples. The difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
We present a Monte Carlo technique for sampling from the canonical distribution in molecular dynamics. The method is built upon the Nose-Hoover constant temperature formulation and the generalized hybrid Monte Carlo method. In contrast to standard hybrid Monte Carlo methods only the thermostat degree of freedom is stochastically resampled during a Monte Carlo step.
Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
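In one dimension the optimal transport problem for discrete random variables has a closed solution via the monotone rearrangement of cumulative distribution functions, which makes the transform easy to sketch (the helper name is hypothetical):

```python
import numpy as np

def etpf_update_1d(x, w):
    """1-D ensemble transform via optimal transport.

    Couples the weighted empirical measure (x, w) to uniform weights
    1/M by the monotone rearrangement of the two CDFs, then maps each
    particle to its conditional mean under the coupling.
    """
    M = len(x)
    idx = np.argsort(x)
    xs, ws = x[idx], w[idx]
    # coupling T[i, j]: mass sent from weighted particle i to uniform slot j
    cw = np.concatenate(([0.0], np.cumsum(ws)))
    cu = np.linspace(0.0, 1.0, M + 1)
    T = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            T[i, j] = max(0.0, min(cw[i + 1], cu[j + 1]) - max(cw[i], cu[j]))
    xa_sorted = M * (T.T @ xs)     # new particle j = M * sum_i T[i, j] x_i
    out = np.empty(M)
    out[idx] = xa_sorted
    return out

# toy usage: four particles with importance weights
x = np.array([0.0, 1.0, 2.0, 3.0])
w = np.array([0.1, 0.2, 0.3, 0.4])
xa = etpf_update_1d(x, w)
```

By construction the transformed ensemble is equally weighted and its mean equals the weighted prior mean, which is the consistency property that distinguishes this transform from the linear-regression ansatz of the EnKF.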
We consider the problem of propagating an ensemble of solutions and its characterization in terms of its mean and covariance matrix. We propose differential equations that lead to a continuous matrix factorization of the ensemble into a generalized singular value decomposition (SVD). The continuous factorization is applied to ensemble propagation under periodic rescaling (ensemble breeding) and under periodic Kalman analysis steps (ensemble Kalman filter). We also use the continuous matrix factorization to perform a re-orthogonalization of the ensemble after each time-step and apply the resulting modified ensemble propagation algorithm to the ensemble Kalman filter. Results from the Lorenz-96 model indicate that the re-orthogonalization of the ensembles leads to improved filter performance.
We introduce a new mixed finite element for solving the 2- and 3-dimensional wave equations and equations of incompressible flow. The element, which we refer to as P1(D)-P2, uses discontinuous piecewise linear functions for velocity and continuous piecewise quadratic functions for pressure. The aim of introducing the mixed formulation is to produce a new flexible element choice for triangular and tetrahedral meshes which satisfies the LBB stability condition and hence has no spurious zero-energy modes. The advantage of this particular element choice is that the mass matrix for velocity is block diagonal so it can be trivially inverted; it also allows the order of the pressure to be increased to quadratic whilst maintaining LBB stability, which has benefits in geophysical applications with Coriolis forces. We give a normal-mode analysis of the semi-discrete wave equation in one dimension, which shows that the element pair is stable; we demonstrate stability with numerical integrations of the wave equation in two dimensions; and an analysis of the resulting discrete Laplace operator in two and three dimensions on various meshes shows that the element pair does not have any spurious modes. We provide convergence tests for the element pair which confirm that the element is stable, with a quadratic convergence rate of the numerical solution.
Ensemble Kalman filter techniques are widely used to assimilate observations into dynamical models. The phase-space dimension is typically much larger than the number of ensemble members, which leads to inaccurate results in the computed covariance matrices. These inaccuracies can lead, among other things, to spurious long-range correlations, which can be eliminated by Schur-product-based localization techniques. In this article, we propose a new technique for implementing such localization techniques within the class of ensemble transform/square-root Kalman filters. Our approach relies on a continuous embedding of the Kalman filter update for the ensemble members, i.e. we state an ordinary differential equation (ODE) with solutions that, over a unit time interval, are equivalent to the Kalman filter update. The ODE formulation forms a gradient system with the observations as a cost functional. Besides localization, the new ODE ensemble formulation should also find useful application in the context of nonlinear observation operators and observations that arrive continuously in time.
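The continuous embedding can be sketched for a linear observation operator; integrating the pseudo-time ODE over the unit interval reproduces the Kalman update in the linear Gaussian case (names and the scalar test setup are illustrative):

```python
import numpy as np

def enkf_continuous_update(X, y, H, R_inv, n_steps=400):
    """Pseudo-time ODE formulation of the EnKF analysis step.

    Euler-integrates  dx_i/ds = -1/2 P H^T R^{-1} (H x_i + H m - 2 y)
    over s in [0, 1], where m and P are the ensemble mean and covariance.
    """
    ds = 1.0 / n_steps
    for _ in range(n_steps):
        m = X.mean(axis=0)
        A = X - m
        P = A.T @ A / (len(X) - 1)
        innov = (X + m) @ H.T - 2.0 * y          # rows: H x_i + H m - 2 y
        X = X - 0.5 * ds * innov @ R_inv @ H @ P
    return X

# scalar sanity check: prior N(0, 1), observation y = 2 with unit noise;
# the exact Kalman posterior is N(1, 0.5)
X0 = np.array([[-2.0**-0.5], [2.0**-0.5]])       # sample mean 0, sample var 1
Xa = enkf_continuous_update(X0, np.array([2.0]),
                            np.array([[1.0]]), np.array([[1.0]]))
```

Because the right-hand side depends smoothly on the ensemble, Schur-product localization can be applied directly to P inside the loop, which is the point of the formulation.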
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high-frequency adjustment processes in the model dynamics. Various methods have been devised to spread out the analysis increments continuously over a fixed time interval centred about the analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
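The mollification idea amounts to replacing the delta distribution at analysis time by a smooth bump over the assimilation window; a minimal sketch of such weights (the function name and window parametrization are illustrative):

```python
import numpy as np

def mollifier_weights(n):
    # Normalized bump-function weights over an assimilation window of n
    # model steps: the analysis increment is applied gradually (IAU-like)
    # rather than as a single discontinuous jump at analysis time.
    s = np.linspace(-1.0, 1.0, n + 2)[1:-1]      # interior points only
    w = np.exp(-1.0 / (1.0 - s**2))              # smooth, compactly supported
    return w / w.sum()

w = mollifier_weights(10)
```

At each of the n steps the model state receives the fraction w[k] of the full increment, so the total applied increment is unchanged while the high-frequency shock is suppressed.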
The paper provides an introduction and survey of conservative discretization methods for Hamiltonian partial differential equations. The emphasis is on variational, symplectic and multi-symplectic methods. The derivation of methods as well as some of their fundamental geometric properties are discussed. Basic principles are illustrated by means of examples from wave and fluid dynamics.
We develop a multigrid, multiple time stepping scheme to reduce computational efforts for calculating complex stress interactions in a strike-slip 2D planar fault for the simulation of seismicity. The key elements of the multilevel solver are separation of length scale, grid-coarsening, and hierarchy. In this study the complex stress interactions are split into two parts: the first with a small contribution is computed on a coarse level, and the rest for strong interactions is on a fine level. This partition leads to a significant reduction of the number of computations. The reduction of complexity is even enhanced by combining the multigrid with multiple time stepping. Computational efficiency is enhanced by a factor of 10 while retaining a reasonable accuracy, compared to the original full matrix-vector multiplication. The accuracy of solution and computational efficiency depend on a given cut-off radius that splits multiplications into the two parts. The multigrid scheme is constructed in such a way that it conserves stress in the entire half-space.
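The cut-off partition of the interaction matrix can be sketched as follows (a toy stand-in for the fault-interaction kernel; names are illustrative):

```python
import numpy as np

def split_by_cutoff(K, dist, r_cut):
    # Split a dense interaction matrix into a strong near-field part
    # (distances within the cut-off radius) and a weak far-field part.
    # In the multilevel scheme the near part is applied every fine time
    # step, while the far part is updated on the coarse level only.
    near = np.where(dist <= r_cut, K, 0.0)
    return near, K - near

# toy usage: 1-D chain of 6 elements with |i - j| as the distance proxy
rng = np.random.default_rng(1)
K = rng.standard_normal((6, 6))
dist = np.abs(np.subtract.outer(np.arange(6.0), np.arange(6.0)))
near, far = split_by_cutoff(K, dist, 2.0)
v = np.ones(6)
```

The split is exact by construction; the savings come from evaluating the far-field product less frequently, which trades accuracy against cost through the cut-off radius.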
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
We consider the problem of discrete time filtering (intermittent data assimilation) for differential equation models and discuss methods for its numerical approximation. The focus is on methods based on ensemble/particle techniques and on the ensemble Kalman filter technique in particular. We summarize as well as extend recent work on continuous ensemble Kalman filter formulations, which provide a concise dynamical systems formulation of the combined dynamics-assimilation problem. Possible extensions to fully nonlinear ensemble/particle based filters are also outlined using the framework of optimal transportation theory.
Atomic oscillations present in classical molecular dynamics restrict the step size that can be used. Multiple time stepping schemes offer only modest improvements, and implicit integrators are costly and inaccurate. The best approach may be to actually remove the highest frequency oscillations by constraining bond lengths and bond angles, thus permitting perhaps a 4-fold increase in the step size. However, omitting degrees of freedom produces errors in statistical averages, and rigid angles do not bend for strong excluded volume forces. These difficulties can be addressed by an enhanced treatment of holonomic constrained dynamics using ideas from papers of Fixman (1974) and Reich (1995, 1999). In particular, the 1995 paper proposes the use of "flexible" constraints, and the 1999 paper uses a modified potential energy function with rigid constraints to emulate flexible constraints. Presented here is a more direct and rigorous derivation of the latter approach, together with justification for the use of constraints in molecular modeling. With rigor comes limitations, so practical compromises are proposed: simplifications of the equations and their judicious application when assumptions are violated. Included are suggestions for new approaches.
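The core mechanism of rigid holonomic constraints can be sketched for a single bond (a position-level SHAKE iteration; names and the two-particle setup are illustrative, and production codes use RATTLE for the velocity level):

```python
import numpy as np

def verlet_shake(q, v, dt, force, d0, m, iters=50):
    """One position-Verlet step for two particles with the bond-length
    constraint |q1 - q0| = d0, enforced by SHAKE iteration.

    q, v: (2, dim) positions and velocities; m: (2,) masses.
    """
    q_old = q.copy()
    a = force(q) / m[:, None]
    q_new = q + dt * v + 0.5 * dt**2 * a         # unconstrained update
    r_old = q_old[1] - q_old[0]                  # old bond direction
    mu = 1.0 / m[0] + 1.0 / m[1]
    for _ in range(iters):                       # project back onto the
        r = q_new[1] - q_new[0]                  # constraint manifold
        c = r @ r - d0**2
        if abs(c) < 1e-12:
            break
        g = c / (2.0 * mu * (r @ r_old))         # Lagrange-multiplier step
        q_new[0] += (g / m[0]) * r_old
        q_new[1] -= (g / m[1]) * r_old
    v_new = (q_new - q_old) / dt                 # leapfrog-style velocity
    return q_new, v_new

# toy usage: unit bond, shearing velocities, no external force
q = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.array([[0.0, 0.5], [0.0, -0.5]])
m = np.array([1.0, 1.0])
qn, vn = verlet_shake(q, v, 0.05, lambda q: np.zeros_like(q), 1.0, m)
```

Removing the bond oscillation this way is what permits the larger step sizes discussed above; the flexible-constraint approach instead modifies the potential so that rigid constraints emulate the flexible bond.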
Two recent works have adapted the Kalman-Bucy filter into an ensemble setting. In the first formulation, the ensemble of perturbations is updated by the solution of an ordinary differential equation (ODE) in pseudo-time, while the mean is updated as in the standard Kalman filter. In the second formulation, the full ensemble is updated in the analysis step as the solution of a single set of ODEs in pseudo-time. Neither requires matrix inversions except for the frequently diagonal observation error covariance.
We analyse the behaviour of the ODEs involved in these formulations. We demonstrate that they stiffen for large magnitudes of the ratio of background error to observational error variance, and that using the integration scheme proposed in both formulations can lead to failure. A numerical integration scheme that is both stable and computationally inexpensive is proposed. We develop transform-based alternatives for these Bucy-type approaches so that the integrations are computed in ensemble space where the variables are weights (of dimension equal to the ensemble size) rather than model variables.
Finally, the performance of our ensemble transform Kalman-Bucy implementations is evaluated using three models: the 3-variable Lorenz 1963 model, the 40-variable Lorenz 1996 model, and a medium complexity atmospheric general circulation model known as SPEEDY. The results from all three models are encouraging and warrant further exploration of these assimilation techniques.
The ensemble Kalman filter has emerged as a promising filter algorithm for nonlinear differential equations subject to intermittent observations. In this paper, we extend the well-known Kalman-Bucy filter for linear differential equations subject to continuous observations to the ensemble setting and nonlinear differential equations. The proposed filter is called the ensemble Kalman-Bucy filter and its performance is demonstrated for a simple mechanical model (Langevin dynamics) subject to incremental observations of its velocity.
We generalize the popular ensemble Kalman filter to an ensemble transform filter, in which the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian-mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions and the three-dimensional Lorenz-63 model). It is demonstrated that the EGMF is capable of tracking systems with non-Gaussian uni- and multimodal ensemble distributions.
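The exact mixture update that such a filter approximates is available in closed form for a linear Gaussian likelihood; a scalar sketch (names are illustrative, and the EGMF itself works with a continuous formulation rather than this direct update):

```python
import numpy as np

def gm_update(w, m, P, y, r):
    # Exact Bayes update of a scalar Gaussian-mixture prior
    # sum_k w_k N(m_k, P_k) under the likelihood N(y; x, r):
    # each component gets a Kalman update, and its weight is
    # rescaled by the component evidence N(y; m_k, P_k + r).
    S = P + r                                    # innovation variances
    evid = np.exp(-0.5 * (y - m)**2 / S) / np.sqrt(2 * np.pi * S)
    w_post = w * evid
    w_post /= w_post.sum()
    K = P / S                                    # component Kalman gains
    m_post = m + K * (y - m)
    P_post = (1.0 - K) * P
    return w_post, m_post, P_post

# single-component sanity check: reduces to the ordinary Kalman update
w, m, P = np.array([1.0]), np.array([0.0]), np.array([1.0])
w2, m2, P2 = gm_update(w, m, P, y=2.0, r=1.0)
```

With a Gaussian kernel density estimator as prior, each ensemble member carries one mixture component, which is how the filter tracks multimodal distributions.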
Towards the assimilation of tree-ring-width records using ensemble Kalman filtering techniques
(2016)
This paper investigates the applicability of the Vaganov–Shashkin–Lite (VSL) forward model for tree-ring-width chronologies as observation operator within a proxy data assimilation (DA) setting. Based on the principle of limiting factors, VSL combines temperature and moisture time series in a nonlinear fashion to obtain simulated TRW chronologies. When used as observation operator, this modelling approach implies three compounding, challenging features: (1) time averaging, (2) “switching recording” of 2 variables and (3) bounded response windows leading to “thresholded response”. We generate pseudo-TRW observations from a chaotic 2-scale dynamical system, used as a cartoon of the atmosphere-land system, and attempt to assimilate them via ensemble Kalman filtering techniques. Results within our simplified setting reveal that VSL’s nonlinearities may lead to considerable loss of assimilation skill, as compared to the utilization of a time-averaged (TA) linear observation operator. In order to understand this undesired effect, we embed VSL’s formulation into the framework of fuzzy logic (FL) theory, which thereby exposes multiple representations of the principle of limiting factors. DA experiments employing three alternative growth rate functions disclose a strong link between the lack of smoothness of the growth rate function and the loss of optimality in the estimate of the TA state. Accordingly, VSL’s performance as observation operator can be enhanced by resorting to smoother FL representations of the principle of limiting factors. This finding fosters new interpretations of tree-ring-growth limitation processes.
This paper extends the multilevel Monte Carlo variance reduction technique to nonlinear filtering. In particular, multilevel Monte Carlo is applied to a certain variant of the particle filter, the ensemble transform particle filter (ETPF). A key aspect is the use of optimal transport methods to re-establish correlation between coarse and fine ensembles after resampling; this controls the variance of the estimator. Numerical examples present a proof of concept of the effectiveness of the proposed method, demonstrating significant computational cost reductions (relative to the single-level ETPF counterpart) in the propagation of ensembles.
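The multilevel estimator itself rests on a telescoping sum over discretization levels; a generic sketch (the toy sampler below is a deterministic stand-in used only to verify the telescoping mechanics, not a filtering example):

```python
import numpy as np

def mlmc_estimate(sample_pair, L, N, seed=0):
    # Telescoping multilevel Monte Carlo estimator:
    #   E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
    # sample_pair(l, rng) must return a *coupled* pair (P_l, P_{l-1})
    # (with P_{-1} := 0) driven by shared randomness; in the ETPF
    # setting this coupling is re-established by optimal transport
    # after each resampling step.
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        pairs = [sample_pair(l, rng) for _ in range(N[l])]
        est += np.mean([fine - coarse for fine, coarse in pairs])
    return est

def toy_pair(l, rng):
    # deterministic toy with P_l = 1 - 2^{-l}, so the estimator
    # must return exactly P_L = 1 - 2^{-L}
    fine = 1.0 - 2.0 ** (-l)
    coarse = 0.0 if l == 0 else 1.0 - 2.0 ** (-(l - 1))
    return fine, coarse

est = mlmc_estimate(toy_pair, L=4, N=[10] * 5)
```

The cost savings come from taking many cheap samples on coarse levels and only a few expensive ones on fine levels, which is valid precisely when the coupled differences have small variance.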
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that the comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow for beating the best-performing methods, which use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improving stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.
We study the possibility of obtaining a computational turbulence model by means of non-dissipative regularisation of the compressible atmospheric equations for climate-type applications. We use an α-regularisation (Lagrangian averaging) of the atmospheric equations. For the hydrostatic and compressible atmospheric equations discretised using a finite volume method on unstructured grids, deterministic and non-deterministic numerical experiments are conducted to compare the individual solutions and the statistics of the regularised equations to those of the original model. The impact of the regularisation parameter is investigated. Our results confirm the principal compatibility of α-regularisation with atmospheric dynamics and encourage further investigations within an atmospheric model including complex physical parametrisations.
Assimilation of pseudo-tree-ring-width observations into an atmospheric general circulation model
(2017)
Paleoclimate data assimilation (DA) is a promising technique to systematically combine the information from climate model simulations and proxy records. Here, we investigate the assimilation of tree-ring-width (TRW) chronologies into an atmospheric global climate model using ensemble Kalman filter (EnKF) techniques and a process-based tree-growth forward model as an observation operator. Our results, within a perfect-model experiment setting, indicate that the "online DA" approach did not outperform the "off-line" one, despite its considerable additional implementation complexity. On the other hand, it was observed that the nonlinear response of tree growth to surface temperature and soil moisture does deteriorate the operation of the time-averaged EnKF methodology. Moreover, for the first time we show that this skill loss appears significantly sensitive to the structure of the growth rate function, used to represent the principle of limiting factors (PLF) within the forward model. In general, our experiments showed that the error reduction achieved by assimilating pseudo-TRW chronologies is modulated by the magnitude of the yearly internal variability in the model. This result might help the dendrochronology community to optimize their sampling efforts.
The subject of this paper is the relation of differential-algebraic equations (DAEs) to vector fields on manifolds. For that reason, we introduce the notion of a regular DAE as a DAE to which a vector field uniquely corresponds. Furthermore, a technique is described which yields a family of manifolds for a given DAE. This so-called family of constraint manifolds allows in turn the formulation of sufficient conditions for the regularity of a DAE, and the definition of the index of a regular DAE. We also state a method for the reduction of higher-index DAEs to lower-index ones that can be solved without introducing additional constants of integration. Finally, the notion of realizability of a given vector field by a regular DAE is introduced, and it is shown that any vector field can be realized by a regular DAE. Throughout this paper the problem of path-tracing is discussed as an illustration of the mathematical phenomena.
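The correspondence between a DAE and a vector field can be made concrete in the standard semi-explicit case (a textbook illustration, not taken from the paper):

```latex
\text{Consider the semi-explicit DAE}
\qquad \dot{x} = f(x,y), \qquad 0 = g(x,y).
\]
If $\partial g / \partial y$ is invertible along the solution (the index-1
case), the implicit function theorem gives $y = \varphi(x)$ locally, and the
DAE is regular: it uniquely corresponds to the vector field
\[
\dot{x} = f\bigl(x, \varphi(x)\bigr)
\]
on the constraint manifold
$\mathcal{M} = \{(x,y) : g(x,y) = 0\}$.
\text{For a higher-index DAE, differentiating the constraint produces hidden}
\text{constraints (index reduction) until this index-1 situation is reached.}
```

This is the simplest instance of the family of constraint manifolds described above: each differentiation of the constraint adds one member to the family.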