This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), which is an approximate filter and represents an extension of the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which represents an extension of the EnKBF and furthermore provides for a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain times error formula to implement the update step (to account for conditioning due to the observations) in the filter. In contrast to the commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid the resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides for error correction potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the issue of nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. Performance of this and other algorithms is illustrated for a numerical example.
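The update step shared by these methods can be illustrated with a minimal discrete-time sketch of a stochastic ensemble Kalman update in gain-times-error form (the paper treats the continuous-time limit; the dimensions, ensemble size, and Gaussian prior below are illustrative choices, not the paper's experiment):

```python
import numpy as np

def ensemble_kalman_update(X, y, H, R, rng):
    """Discrete-time stochastic ensemble Kalman update in gain-times-error form.

    X : (d, N) ensemble, y : (m,) observation, H : (m, d) observation operator,
    R : (m, m) observation-noise covariance.
    """
    _, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    C = A @ A.T / (N - 1)                          # empirical covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain
    eta = rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    # gain times error: every particle is nudged by K times its own innovation
    return X + K @ (y[:, None] + eta - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 2.0, size=(2, 500))   # prior ensemble ~ N(0, 4 I)
H = np.array([[1.0, 0.0]])                # observe the first component only
R = np.array([[1.0]])
Xa = ensemble_kalman_update(X, np.array([3.0]), H, R, rng)
# exact Gaussian posterior for the observed component: mean 2.4, variance 0.8
```

Note that no importance weights or resampling appear: each particle is transported by the same feedback control law, which is the structural feature the paper exploits.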
Interacting particle solutions of Fokker–Planck equations through gradient–log–density estimation
(2020)
Fokker-Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and often it is inevitable to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker-Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker-Planck equations in low and moderate dimensions. The proposed gradient-log-density estimator is also of independent interest, for example, in the context of optimal control.
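A minimal sketch of the idea, using a plain Gaussian-kernel estimate of the gradient-log-density in place of the paper's novel estimator (the bandwidth, particle number, and Ornstein-Uhlenbeck test case are illustrative assumptions):

```python
import numpy as np

def grad_log_kde(x, h):
    """Gaussian-kernel estimate of the score grad log rho at each particle."""
    diff = x[:, None] - x[None, :]          # pairwise differences x_i - x_j
    w = np.exp(-0.5 * (diff / h) ** 2)      # kernel weights
    return (-diff / h**2 * w).sum(axis=1) / w.sum(axis=1)

# Deterministic particle flow for the Ornstein-Uhlenbeck Fokker-Planck equation
#   d rho/dt = d/dx (x rho) + d^2 rho/dx^2,
# realised as the interacting-particle ODE  dX/dt = -X - grad log rho(X).
rng = np.random.default_rng(1)
x = rng.normal(3.0, 0.5, size=400)          # particles start far from equilibrium
dt, h = 0.01, 0.3
for _ in range(800):
    x = x + dt * (-x - grad_log_kde(x, h))
# stationary law of the equation is N(0, 1)
```

In the mean field limit the particles settle where the kernel-smoothed density matches exp(-V), so with bandwidth h the equilibrium particle variance is approximately 1 - h^2 rather than the exact value 1; this bias of the naive kernel estimator is part of the motivation for a better gradient-log-density estimator.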
Global numerical weather prediction (NWP) models have begun to resolve the mesoscale k^(-5/3) range of the energy spectrum, which is known to impose an inherently finite range of deterministic predictability per se as errors develop more rapidly on these scales than on the larger scales. However, the dynamics of these errors under the influence of the synoptic-scale k^(-3) range is little studied. Within a perfect-model context, the present work examines the error growth behavior under such a hybrid spectrum in Lorenz's original model of 1969, and in a series of identical-twin perturbation experiments using an idealized two-dimensional barotropic turbulence model at a range of resolutions. With the typical resolution of today's global NWP ensembles, error growth remains largely uniform across scales. The theoretically expected fast error growth characteristic of a k^(-5/3) spectrum is seen to be largely suppressed in the first decade of the mesoscale range by the synoptic-scale k^(-3) range. However, it emerges once models become fully able to resolve features on something like a 20-km scale, which corresponds to a grid resolution on the order of a few kilometers.
We develop a hydrostatic Hamiltonian particle-mesh (HPM) method for efficient long-term numerical integration of the atmosphere. In the HPM method, the hydrostatic approximation is interpreted as a holonomic constraint for the vertical position of particles. This can be viewed as defining a set of vertically buoyant horizontal meshes, with the altitude of each mesh point determined so as to satisfy the hydrostatic balance condition and with particles modelling horizontal advection between the moving meshes. We implement the method in a vertical-slice model and evaluate its performance for the simulation of idealized linear and nonlinear orographic flow in both dry and moist environments. The HPM method is able to capture the basic features of the gravity wave to a degree of accuracy comparable with that reported in the literature. The numerical solution in the moist experiment indicates that the influence of moisture on wave characteristics is represented reasonably well and the reduction of momentum flux is in good agreement with theoretical analysis.
Forecast verification
(2021)
The philosophy of forecast verification is rather different between deterministic and probabilistic verification metrics: generally speaking, deterministic metrics measure differences, whereas probabilistic metrics assess reliability and sharpness of predictive distributions. This article considers the root-mean-square error (RMSE), which can be seen as a deterministic metric, and the probabilistic metric Continuous Ranked Probability Score (CRPS), and demonstrates that under certain conditions, the CRPS can be mathematically expressed in terms of the RMSE when these metrics are aggregated. One of the required conditions is the normality of distributions. The other condition is that, while the forecast ensemble need not be calibrated, any bias or over/underdispersion cannot depend on the forecast distribution itself. Under these conditions, the CRPS is a fraction of the RMSE, and this fraction depends only on the heteroscedasticity of the ensemble spread and the measures of calibration. The derived CRPS-RMSE relationship for the case of perfect ensemble reliability is tested on simulations of idealised two-dimensional barotropic turbulence. Results suggest that the relationship holds approximately despite the normality condition not being met.
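In the simplest setting covered by such a derivation (perfectly reliable, homoscedastic Gaussian forecasts), the aggregated ratio reduces to CRPS/RMSE = 1/sqrt(pi), roughly 0.564, which can be checked numerically using the closed-form CRPS of a Gaussian forecast (the Monte Carlo setup below is an illustrative sketch, not the paper's experiment):

```python
import math
import numpy as np

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against truth y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# perfectly reliable, homoscedastic case: truth is drawn from the forecast itself
rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.5
truth = rng.normal(mu, sigma, size=100_000)
crps = np.mean([crps_normal(mu, sigma, y) for y in truth])
rmse = math.sqrt(np.mean((truth - mu) ** 2))
ratio = crps / rmse     # theory for this idealised case: 1 / sqrt(pi)
```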
Bayesian inference can be embedded into an appropriately defined dynamics in the space of probability measures. In this paper, we take Brownian motion and its associated Fokker-Planck equation as a starting point for such embeddings and explore several interacting particle approximations. More specifically, we consider both deterministic and stochastic interacting particle systems and combine them with the idea of preconditioning by the empirical covariance matrix. In addition to leading to affine invariant formulations which asymptotically speed up convergence, preconditioning allows for gradient-free implementations in the spirit of the ensemble Kalman filter. While such gradient-free implementations have been demonstrated to work well for posterior measures that are nearly Gaussian, we extend their scope of applicability to multimodal measures by introducing localized gradient-free approximations. Numerical results demonstrate the effectiveness of the considered methodologies.
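A minimal sketch of covariance preconditioning in an interacting Langevin sampler, for a one-dimensional Gaussian target (an illustrative choice; the sketch omits the finite-ensemble correction terms needed for exact consistency, and the gradient-free variant discussed in the paper would replace the explicit gradient by ensemble differences):

```python
import numpy as np

# illustrative target posterior N(2, 0.25), i.e. V(x) = (x - 2)^2 / (2 * 0.25)
grad_V = lambda x: (x - 2.0) / 0.25

rng = np.random.default_rng(3)
N, dt = 200, 0.01
x = rng.normal(0.0, 2.0, size=N)     # initial ensemble far from the posterior
for _ in range(2000):
    C = x.var()                      # empirical covariance as preconditioner
    x = x - dt * C * grad_V(x) + np.sqrt(2.0 * C * dt) * rng.normal(size=N)
# the ensemble approximates the posterior N(2, 0.25)
```

The preconditioner C rescales both drift and noise, so regions where the ensemble is spread out are traversed faster; this is the mechanism behind the asymptotic speed-up of affine invariant formulations.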
We evaluate the Hamiltonian particle-mesh (HPM) method and the Nambu discretization applied to the shallow-water equations on the sphere using the test suggested by Galewsky et al. (2004). Both simulations show excellent conservation of energy and are stable in long-term integration. We also repeat the test using the ICOSWP scheme to compare against the two conservative spatial discretization schemes. The HPM simulation captures the main features of the reference solution, but a wavenumber-5 pattern dominates the simulations on the ICON grid at relatively low spatial resolutions. Nevertheless, the agreement in statistics between the three schemes indicates their qualitatively similar behavior in long-term integration.
The novel space-borne Global Navigation Satellite System Reflectometry (GNSS-R) technique has recently shown promise in monitoring the ocean state and surface wind speed with high spatial coverage and unprecedented sampling rate. The L-band signals of GNSS are structurally able to provide a higher quality of observations from areas covered by dense clouds and under intense precipitation, compared to signals at higher frequencies from conventional ocean scatterometers. As a result, studying the inner core of cyclones and improving severe weather forecasting and cyclone tracking have become the main objectives of GNSS-R satellite missions such as the Cyclone Global Navigation Satellite System (CYGNSS). Nevertheless, the impact of rain attenuation on GNSS-R wind speed products is not yet well documented. Evaluating the rain attenuation effects on this technique is significant since a small change in the GNSS-R observations can potentially cause a considerable bias in the resultant wind products at intense wind speeds. Based on both empirical evidence and theory, wind speed is inversely proportional to the derived bistatic radar cross section through a natural logarithmic relation, which introduces high condition numbers (similar to ill-posed conditions) in the inversions at high wind speeds. This paper presents an evaluation of the impact of rain signal attenuation on the bistatic radar cross section and the derived wind speed. The study is conducted by simulating GNSS-R delay-Doppler maps at different rain rates and reflection geometries, since an empirical data analysis at extreme wind intensities and rain rates is impossible owing to the insufficient number of observations from such severe conditions.
Finally, the study demonstrates that at a wind speed of 30 m/s and an incidence angle of 30 degrees, rain at rates of 10, 15, and 20 mm/h might cause overestimation as large as approximately 0.65 m/s (2%), 1.00 m/s (3%), and 1.30 m/s (4%), respectively, which are still smaller than the CYGNSS required uncertainty threshold. The simulations are conducted under a pessimistic condition (severe continuous rainfall below the freezing height and over the entire glistening zone), and the bias is expected to be smaller in real environments.
The generalized hybrid Monte Carlo (GHMC) method combines Metropolis-corrected constant-energy simulations with a partial random refreshment step in the particle momenta. The standard detailed balance condition requires that momenta are negated upon rejection of a molecular dynamics proposal step. The implication is a trajectory reversal upon rejection, which is undesirable when interpreting GHMC as thermostated molecular dynamics. We show that a modified detailed balance condition can be used to implement GHMC without momentum flips. The same modification can be applied to the generalized shadow hybrid Monte Carlo (GSHMC) method. Numerical results indicate that GHMC/GSHMC implementations with momentum flip display favorable behavior in terms of sampling efficiency, i.e., the traditional implementations with momentum flip achieve a higher acceptance rate and faster decorrelation of Monte Carlo samples; the difference is more pronounced for GHMC. We also numerically investigate the behavior of the GHMC method as a Langevin-type thermostat. We find that the GHMC method without momentum flip interferes less with the underlying stochastic molecular dynamics in terms of autocorrelation functions and is therefore to be preferred over the GHMC method with momentum flip. The same finding applies to GSHMC.
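For reference, the traditional GHMC scheme with momentum flip on rejection can be sketched as follows (the harmonic potential, refresh angle, and step sizes are illustrative choices; the modified detailed balance condition studied in the paper removes the flip in the rejection branch):

```python
import numpy as np

def ghmc(n_iter, phi=0.5, h=0.2, n_leap=10, seed=4):
    """Traditional GHMC with momentum flip on rejection, for V(q) = q^2 / 2."""
    rng = np.random.default_rng(seed)
    q, p, samples = 0.0, rng.normal(), []
    for _ in range(n_iter):
        # partial momentum refreshment: mix in fresh Gaussian noise
        p = np.cos(phi) * p + np.sin(phi) * rng.normal()
        H0 = 0.5 * (p * p + q * q)
        qn, pn = q, p
        for _ in range(n_leap):          # leapfrog proposal
            pn -= 0.5 * h * qn
            qn += h * pn
            pn -= 0.5 * h * qn
        H1 = 0.5 * (pn * pn + qn * qn)
        if rng.random() < np.exp(min(0.0, H0 - H1)):
            q, p = qn, pn                # accept the proposal
        else:
            p = -p                       # momentum flip: trajectory reversal
        samples.append(q)
    return np.asarray(samples)

qs = ghmc(20_000)    # stationary distribution of q is N(0, 1)
```

The `p = -p` branch is exactly the trajectory reversal discussed above: the next proposal retraces the rejected path, which is what makes the flip undesirable when GHMC is read as thermostated molecular dynamics.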