TDS-1 GNSS Reflectometry (2018)
This study presents the development and systematic evaluation of GNSS reflectometry wind speeds. After establishing a wind speed retrieval algorithm, UK TechDemoSat-1 (TDS-1) derived winds from May 2015 to July 2017 are compared to the Advanced Scatterometer (ASCAT). ERA-Interim wind fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) and in situ observations from the Tropical Atmosphere Ocean buoy array in the Pacific are taken as reference. One-year-averaged TDS-1 global winds show small differences with respect to ECMWF in a majority of areas; regions of under- and overestimation are discussed. The pioneering TDS-1 winds exhibit a root-mean-square error (RMSE) and bias of 2.77 and -0.33 m/s, comparable to the RMSE and bias of ASCAT winds, 2.31 and 0.25 m/s, respectively. Using buoy measurements as reference, an RMSE and bias of 2.23 and -0.03 m/s for TDS-1 and of 1.40 and -0.68 m/s for ASCAT are obtained. Utilizing microwave-infrared rain estimates of the Tropical Rainfall Measuring Mission, rain-affected observations of both ASCAT and TDS-1 are collected and evaluated. Whereas ASCAT winds show a significant performance degradation under rain, with an RMSE and bias of 3.16 and 1.03 m/s, respectively, TDS-1 performs more reliably, with an RMSE and bias of 2.94 and -0.21 m/s, respectively, indicating the promising capability of GNSS forward scattering for wind retrievals during rain. A decrease in TDS-1-derived bistatic radar cross sections during rain events at weak winds is also demonstrated.
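As a concrete reference for the error metrics quoted throughout this comparison, the following minimal Python sketch computes RMSE and bias for collocated wind speed pairs. The function name wind_stats and the sample values are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def wind_stats(retrieved, reference):
    """RMSE and bias of retrieved winds against a reference (m/s)."""
    diff = np.asarray(retrieved) - np.asarray(reference)
    rmse = np.sqrt(np.mean(diff**2))
    bias = np.mean(diff)
    return rmse, bias

# Hypothetical collocated samples (m/s): TDS-1 retrievals vs. buoy winds.
tds1 = np.array([5.2, 7.9, 3.1, 10.4])
buoy = np.array([5.0, 8.3, 2.7, 11.0])
rmse, bias = wind_stats(tds1, buoy)
print(f"RMSE = {rmse:.2f} m/s, bias = {bias:.2f} m/s")
```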
For the first time, a rain signature in Global Navigation Satellite System Reflectometry (GNSS-R) observations is demonstrated. Based on the argument that forward quasi-specular scattering relies upon surface gravity waves with lengths larger than several wavelengths of the reflected signal, it is commonly concluded that scatterometric GNSS-R measurements are not sensitive to the small-scale surface roughness generated by raindrops impinging on the ocean surface. On the contrary, this study presents evidence that the bistatic radar cross section σ0 derived from TechDemoSat-1 data is reduced by rain at weak winds, lower than ≈6 m/s. The decrease is as large as ≈0.7 dB at a wind speed of 3 m/s for precipitation rates of 0-2 mm/hr. Simulations based on recently published scattering theory provide a plausible explanation for this phenomenon, which potentially enables the GNSS-R technique to detect precipitation over oceans at low winds.
The accepted idea that there exists an inherent finite-time barrier in deterministically predicting atmospheric flows originates from Edward N. Lorenz's 1969 work based on two-dimensional (2D) turbulence. Yet, known analytic results on the 2D Navier–Stokes (N-S) equations suggest that one can skillfully predict the 2D N-S system indefinitely far ahead should the initial-condition error become sufficiently small, thereby presenting a potential conflict with Lorenz's theory. Aided by numerical simulations, the present work reexamines Lorenz's model and reviews both sides of the argument, paying particular attention to the role played by the slope of the kinetic energy spectrum. It is found that when this slope is shallower than −3, the Lipschitz continuity of analytic solutions (with respect to initial conditions) breaks down as the model resolution increases, unless the viscous range of the real system is resolved, which remains practically impossible. This breakdown leads to the inherent finite-time limit. If, on the other hand, the spectral slope is steeper than −3, then the breakdown does not occur. In this way, the apparent contradiction between the analytic results and Lorenz's theory is reconciled.
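The role of the −3 slope can be motivated by a standard turbulence scaling argument, sketched below for orientation; this heuristic is textbook material and is not taken from the paper itself.

```latex
% Heuristic behind the -3 threshold: for a spectrum E(k) \propto k^{-p},
% an error confined near wavenumber k grows on the local eddy turnover time
\tau(k) \sim \left[ k^{3} E(k) \right]^{-1/2} \sim k^{(p-3)/2}.
% Summing over octaves k_n = 2^n, the time for an error seeded at
% wavenumber 2^N to contaminate the large scales is
T \sim \sum_{n=n_0}^{N} \tau(2^{n}) \;\propto\; \sum_{n=n_0}^{N} 2^{\,n(p-3)/2}.
% For p < 3 the series converges as N \to \infty: a finite predictability
% horizon remains no matter how small the initial-error scale. For p > 3 it
% diverges, so reducing the initial error extends predictability without bound.
```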
A time-staggered semi-Lagrangian discretization of the rotating shallow-water equations is proposed and analysed. Application of regularization to the geopotential field used in the momentum equations leads to an unconditionally stable scheme. The analysis, together with a fully nonlinear example application, suggests that this approach is a promising, efficient, and accurate alternative to traditional schemes.
The efficient time integration of the dynamical core equations for numerical weather prediction (NWP) remains a key challenge. One of the most popular methods is currently provided by implementations of the semi-implicit semi-Lagrangian (SISL) method, originally proposed by Robert (J. Meteorol. Soc. Jpn., 1982). Practical implementations of the SISL method are, however, not without certain shortcomings with regard to accuracy, conservation properties and stability. Based on recent work by Gottwald, Frank and Reich (LNCSE, Springer, 2002), Frank, Reich, Staniforth, White and Wood (Atm. Sci. Lett., 2005) and Wood, Staniforth and Reich (Atm. Sci. Lett., 2006), we propose an alternative semi-Lagrangian implementation based on a set of regularized equations and the popular Störmer-Verlet time stepping method in the context of the shallow-water equations (SWEs). Ultimately, the goal is to develop practical implementations for the 3D Euler equations that overcome some or all shortcomings of current SISL implementations.
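For orientation, here is a minimal Python sketch of the bare Störmer-Verlet scheme for a generic separable system dq/dt = p, dp/dt = -grad V(q); the semi-Lagrangian advection and the regularization specific to the SWEs are deliberately omitted, and all names and values are illustrative.

```python
import numpy as np

def stormer_verlet(q, p, grad_V, dt, n_steps):
    """Stormer-Verlet (leapfrog): symplectic, second-order time stepping."""
    for _ in range(n_steps):
        p_half = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p_half                 # full drift
        p = p_half - 0.5 * dt * grad_V(q)   # half kick
    return q, p

# Toy example: harmonic oscillator V(q) = q**2 / 2, so grad_V(q) = q.
q, p = stormer_verlet(np.array(1.0), np.array(0.0), lambda q: q, dt=0.1, n_steps=100)
# Energy 0.5 * (q**2 + p**2) stays near 0.5: no secular drift, a hallmark
# of symplectic integrators and one motivation for using them in NWP cores.
print(q, p)
```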
Classic inversion methods adjust a model with a predefined number of parameters to the observed data. With transdimensional inversion algorithms such as the reversible-jump Markov chain Monte Carlo (rjMCMC), it is possible to vary this number during the inversion and to interpret the observations in a more flexible way. Geoscience imaging applications use this behaviour to automatically adjust model resolution to the inhomogeneities of the investigated system, while keeping the number of model parameters at an optimal level. The rjMCMC algorithm produces an ensemble as a result: a set of model realizations, which together represent the posterior probability distribution of the investigated problem. The realizations are evolved via sequential updates from a randomly chosen initial solution and converge toward the target posterior distribution of the inverse problem. Up to a point in the chain, the realizations may be strongly biased by the initial model and must be discarded from the final ensemble. With convergence assessment techniques, this point in the chain can be identified. Transdimensional MCMC methods, however, produce ensembles that are not suitable for classic convergence assessment techniques because of the changes in parameter numbers. To overcome this hurdle, three solutions are introduced that convert model realizations to a common dimensionality while maintaining the statistical characteristics of the ensemble. Scalar, vector, and matrix representations are presented for models inferred from tomographic subsurface investigations, and three classic convergence assessment techniques are applied to them. It is shown that appropriately chosen scalar conversions of the models retain statistical ensemble properties similar to those of geologic projections created by rasterization.
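To make the scalar-conversion idea concrete, the following Python sketch maps variable-dimension realizations to one scalar each and applies a simple Geweke-style diagnostic. The choice of the mean as the scalar, the synthetic ensemble, and the segment fractions are illustrative assumptions, not the representations or data used in the paper.

```python
import numpy as np

def scalar_chain(realizations):
    """Map variable-dimension rjMCMC realizations to one scalar per sample.
    Here: the mean parameter value; other conversions are equally possible."""
    return np.array([np.mean(m) for m in realizations])

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke-style diagnostic: compare means of early and late chain segments."""
    n = len(chain)
    a, b = chain[: int(first * n)], chain[int((1 - last) * n):]
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# Hypothetical ensemble: each realization has its own number of parameters.
rng = np.random.default_rng(0)
ensemble = [rng.normal(2.0, 1.0, size=rng.integers(3, 12)) for _ in range(500)]
z = geweke_z(scalar_chain(ensemble))
print(f"Geweke z = {z:.2f}  (|z| < 2 suggests the scalar chain has converged)")
```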
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example of data assimilation for the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which combines a spatial, pseudo-marginal likelihood with a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
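The following Python sketch illustrates the two ingredients named above: a log-posterior that sums spatial and temporal log-likelihood terms, and an adaptive random-walk Metropolis update. The placeholder likelihoods, the flat prior, and the step-size adaptation rule are illustrative stand-ins, not the SWIFT implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta, loglik_spatial, loglik_temporal, log_prior):
    # Approximate likelihood = spatial (fixation positions) + temporal (durations).
    return loglik_spatial(theta) + loglik_temporal(theta) + log_prior(theta)

def adaptive_metropolis(theta, logp, n_iter, scale=0.1, target=0.234):
    """Random-walk Metropolis with Robbins-Monro adaptation of the step size."""
    lp = logp(theta)
    samples = []
    for i in range(1, n_iter + 1):
        prop = theta + scale * rng.normal(size=theta.shape)
        lp_prop = logp(prop)
        accept = np.log(rng.uniform()) < lp_prop - lp
        if accept:
            theta, lp = prop, lp_prop
        scale *= np.exp((float(accept) - target) / np.sqrt(i))  # aim for ~23.4%
        samples.append(theta.copy())
    return np.array(samples)

# Toy stand-ins: Gaussian pseudo-likelihoods and a flat prior.
logp = lambda th: log_post(th, lambda t: -0.5 * np.sum((t - 1.0) ** 2),
                           lambda t: -0.5 * np.sum((t - 1.2) ** 2), lambda t: 0.0)
chain = adaptive_metropolis(np.zeros(2), logp, n_iter=2000)
print(chain.mean(axis=0))  # posterior mean near (1.1, 1.1)
```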
Author summary: Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.

Understanding the decision process underlying gaze control is an important question in cognitive neuroscience, with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection between two states: should the observer further inspect the information near the current gaze position (local attention), or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye movement behavior. We implemented a Bayesian approach for model parameter inference based on the model's likelihood function. To simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: first, we propose a new model for saccade generation in scene viewing; second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
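To make the two-state idea concrete, here is a toy generative sketch of a local/global scanpath on the unit square; the transition probabilities, the Gaussian local step, and the uniform global jump are invented for illustration and are not the paper's model or its Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_scanpath(n_fix, p_stay_local=0.7, p_stay_global=0.6, local_sd=0.05):
    """Toy two-state (local/global) scanpath generator on the unit square.
    Local state: small saccade near the current fixation; global state:
    jump to a uniformly drawn scene location. All parameters are made up."""
    pos = np.array([0.5, 0.5])
    state = "global"
    path = [pos.copy()]
    for _ in range(n_fix - 1):
        stay = p_stay_local if state == "local" else p_stay_global
        if rng.uniform() > stay:                      # switch attentional state
            state = "global" if state == "local" else "local"
        if state == "local":
            pos = np.clip(pos + rng.normal(0.0, local_sd, size=2), 0.0, 1.0)
        else:
            pos = rng.uniform(0.0, 1.0, size=2)
        path.append(pos.copy())
    return np.array(path)

print(simulate_scanpath(5))
```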
Several numerical tools designed to overcome the challenges of smoothing in a nonlinear and non-Gaussian setting are investigated for a class of particle smoothers. The considered family of smoothers is induced by the class of linear ensemble transform filters, which contains classical filters such as the stochastic ensemble Kalman filter, the ensemble square root filter, and the recently introduced nonlinear ensemble transform filter. Further, the ensemble transform particle smoother is introduced and particularly highlighted, as it is consistent in the particle limit and does not require assumptions about the family of the posterior distribution. The linear update pattern of the considered class of linear ensemble transform smoothers allows one to implement important supplementary techniques such as adaptive spread corrections, hybrid formulations, and localization, in order to facilitate their application to complex estimation problems. These additional features are derived and numerically investigated for a sequence of increasingly challenging test problems.
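For reference, the sketch below implements one classical member of the filter family named above, the stochastic (perturbed-observation) ensemble Kalman filter, in minimal NumPy; localization, spread corrections, and the smoother extensions are omitted, and the toy setup is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y, H, R):
    """Stochastic ensemble Kalman filter analysis step.
    X: (d, m) forecast ensemble; y: (k,) observation; H: (k, d); R: (k, k)."""
    d, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)                      # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)                   # obs-space anomalies
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (m - 1) * R)    # ensemble gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - HX)                                    # perturbed-obs update

# Toy example: two-dimensional state observed in its first component.
X = rng.normal(0.0, 1.0, size=(2, 50))
Xa = enkf_update(X, y=np.array([0.8]), H=np.array([[1.0, 0.0]]), R=np.array([[0.1]]))
print(Xa.mean(axis=1))  # analysis mean pulled toward the observation
```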
This paper is concerned with the filtering problem in continuous time. Three algorithmic solution approaches for this problem are reviewed: (i) the classical Kalman-Bucy filter, which provides an exact solution for the linear Gaussian problem; (ii) the ensemble Kalman-Bucy filter (EnKBF), an approximate filter that extends the Kalman-Bucy filter to nonlinear problems; and (iii) the feedback particle filter (FPF), which extends the EnKBF and furthermore provides a consistent solution in the general nonlinear, non-Gaussian case. The common feature of the three algorithms is the gain-times-error formula used to implement the update step (which accounts for conditioning on the observations). In contrast to commonly used sequential Monte Carlo methods, the EnKBF and FPF avoid resampling of the particles in the importance sampling update step. Moreover, the feedback control structure provides error correction, potentially leading to smaller simulation variance and improved stability properties. The paper also discusses the nonuniqueness of the filter update formula and formulates a novel approximation algorithm based on ideas from optimal transport and coupling of measures. The performance of this and other algorithms is illustrated with a numerical example.
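The gain-times-error structure can be illustrated with a scalar linear-Gaussian example. The Euler-Maruyama sketch below uses one common deterministic EnKBF variant, in which the innovation averages each particle with the ensemble mean; the parameters are arbitrary and the code is a sketch under these assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar setup: dX = a X dt + s dB,  dY = h X dt + sqrt(r) dV.
a, s, h, r = -0.5, 0.5, 1.0, 0.05
dt, n_steps, m = 0.01, 2000, 100

x_true = 1.0
X = rng.normal(1.0, 1.0, size=m)               # ensemble of particles
for _ in range(n_steps):
    # Truth and observation increment.
    x_true += a * x_true * dt + s * np.sqrt(dt) * rng.normal()
    dY = h * x_true * dt + np.sqrt(r * dt) * rng.normal()
    # Particle propagation plus gain-times-error update: the gain K is the
    # Kalman-Bucy gain P H' R^{-1} with P replaced by the ensemble variance.
    X += a * X * dt + s * np.sqrt(dt) * rng.normal(size=m)
    P = np.var(X, ddof=1)
    K = P * h / r
    X += K * (dY - 0.5 * h * (X + X.mean()) * dt)
print(f"truth = {x_true:.3f}, ensemble mean = {X.mean():.3f}")
```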