From 6 to 9 August 2012, intense rainfall hit the northern Philippines, causing massive floods in Metropolitan Manila and nearby regions. Local rain gauges recorded almost 1000 mm within this period. However, the recently installed Philippine network of weather radars suggests that Metropolitan Manila might have escaped a potentially bigger flood just by a whisker, since the centre of mass of accumulated rainfall was located over Manila Bay. A shift of this centre by no more than 20 km could have resulted in a flood disaster far worse than what occurred during Typhoon Ketsana in September 2009.
Many institutions struggle to tap the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the use of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference with respect to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage. The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated whether the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
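The quality-weighted bias estimate described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the study's actual implementation; the reflectivity values and quality indices below are made up for demonstration:

```python
import numpy as np

def weighted_calibration_bias(gr_dbz, sr_dbz, quality):
    """Estimate the GR calibration bias as the quality-weighted mean
    of matched GR - SR reflectivity differences (in dB)."""
    gr_dbz, sr_dbz, quality = map(np.asarray, (gr_dbz, sr_dbz, quality))
    diff = gr_dbz - sr_dbz
    return np.sum(quality * diff) / np.sum(quality)

# Illustrative matched samples: the GR reads ~2 dB low where quality is high
gr = np.array([28.0, 31.0, 35.0, 40.0])
sr = np.array([30.0, 33.0, 36.0, 41.0])
q  = np.array([0.9, 0.8, 0.3, 0.1])  # e.g. downweight beam-blocked or attenuated bins

bias = weighted_calibration_bias(gr, sr, q)
# A negative bias indicates the ground radar underestimates reflectivity;
# subtracting the bias from the GR observations corrects them.
```

Weighting by quality means that heavily beam-blocked or attenuated bins contribute little to the estimate, which is exactly why the extension to PIA matters at C and X band.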
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested as surrogates for computationally expensive hydrodynamic models to map flood hazards. However, most studies focused on developing models for the same area or the same precipitation event. It is thus not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm to predict flood water depth, the models' transferability in space, and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques than the RF models to boost their performance outside the training domains.
Identifying urban pluvial flood-prone areas is necessary, but the application of two-dimensional hydrodynamic models is limited to small areas. Data-driven models have been showing their ability to map flood susceptibility, but their application in urban pluvial flooding is still rare. A flood inventory (4333 flooded locations) and 11 factors that potentially indicate an increased hazard for pluvial flooding were used to implement convolutional neural network (CNN), artificial neural network (ANN), random forest (RF) and support vector machine (SVM) models to: (1) map flood susceptibility in Berlin at 30, 10, 5, and 2 m spatial resolutions; (2) evaluate the trained models' transferability in space; (3) estimate the most useful factors for flood susceptibility mapping. The models' performance was validated using the kappa coefficient and the area under the receiver operating characteristic curve (AUC). The results indicated that all models perform very well (minimum AUC = 0.87 for the testing dataset). The RF models outperformed all other models at all spatial resolutions, and the RF model at 2 m spatial resolution was superior for the present flood inventory and predictor variables. Based on the kappa evaluation, the majority of the models had a moderate performance for predictions outside the training area (minimum AUC = 0.8). Aspect and altitude were the most influential factors for the image-based and point-based models, respectively. Data-driven models can be a reliable tool for urban pluvial flood susceptibility mapping wherever a reliable flood inventory is available.
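The kappa coefficient used for validation above measures agreement beyond chance between predicted and observed flood labels. The following is a generic illustration of Cohen's kappa for binary labels, with made-up data, not the study's validation code:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary flood / no-flood labels:
    kappa = (p_o - p_e) / (1 - p_e), i.e. observed agreement
    corrected for the agreement expected by chance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    p_o = np.mean(y_true == y_pred)                          # observed agreement
    p_flood = np.mean(y_true) * np.mean(y_pred)              # chance agreement on "flood"
    p_dry = (1 - np.mean(y_true)) * (1 - np.mean(y_pred))    # chance agreement on "no flood"
    p_e = p_flood + p_dry
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels (1 = flooded, 0 = not flooded)
obs  = [1, 1, 0, 0, 1, 0, 0, 1]
pred = [1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(obs, pred)  # 6/8 raw agreement, 0.5 after chance correction
```

Unlike raw accuracy, kappa penalizes models that merely reproduce the class balance, which matters for flood inventories where non-flooded locations dominate.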
Two lines of research are combined in this study: first, the development of tools for the temporal disaggregation of precipitation, and second, newer results on the exponential scaling of heavy short-term precipitation with temperature, which roughly follows the Clausius-Clapeyron (CC) relation. Traditional disaggregation schemes, having no explicit temperature dependence, are shown to lack this crucial CC-type scaling. The authors introduce a proof-of-concept adjustment of an existing disaggregation tool, the multiplicative cascade model of Olsson, and show that it is, in principle, possible to include temperature dependence in the disaggregation step, resulting in a fairly realistic temperature dependence of the CC type. They conclude by outlining the main calibration steps necessary to develop a full-fledged CC disaggregation scheme and discuss possible applications.
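A multiplicative cascade disaggregates a rainfall total by recursively splitting each interval in two with a random weight. The sketch below is a generic illustration of that branching step, not Olsson's calibrated model; the weight distribution and the 0/1-split probability are arbitrary, and the temperature dependence the paper introduces (making the split parameters a function of temperature) is omitted:

```python
import numpy as np

def cascade_disaggregate(total, levels, rng, p01=0.2):
    """Disaggregate one rainfall amount into 2**levels sub-intervals
    with a simple multiplicative random cascade: at each branching,
    a weight W splits the parent amount into W and 1 - W. With
    probability p01 the whole amount goes to one child (a 0/1 split)."""
    amounts = np.array([total], dtype=float)
    for _ in range(levels):
        w = rng.uniform(0.3, 0.7, size=amounts.size)           # x / (1 - x) splits
        zero_one = rng.random(amounts.size) < p01
        w[zero_one] = rng.integers(0, 2, size=zero_one.sum())  # 0 or 1
        amounts = np.column_stack([amounts * w, amounts * (1 - w)]).ravel()
    return amounts

rng = np.random.default_rng(42)
sub = cascade_disaggregate(24.0, levels=3, rng=rng)  # 8 sub-intervals
# Mass is conserved at every cascade level: sub.sum() equals the input total
```

In a CC-aware variant, the weights would be drawn from a temperature-dependent distribution so that warmer conditions concentrate more of the total into fewer, more intense sub-intervals.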
Weather radar analysis has become increasingly sophisticated over the past 50 years, and efforts to keep software up to date have generally lagged behind the needs of the users. We argue that progress has been impeded by the fact that software has not been developed and shared as a community effort.
Recently, the situation has been changing. In this paper, the developers of a number of open-source software (OSS) projects highlight the potential of OSS to advance radar-related research. We argue that the community-based development of OSS holds the potential to reduce duplication of efforts and to create transparency in implemented algorithms while improving the quality and scope of the software. We also conclude that there is sufficiently mature technology to support collaboration across different software projects. This could allow for consolidation toward a set of interoperable software platforms, each designed to accommodate very specific user requirements.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two artificial neural network (ANN) architectures: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU), which have only recently been introduced to the field of hydrology. We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as warm-up periods, one each for the calibration and validation periods; from the remaining 14 years, we sampled increasing amounts of data for model calibration and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. However, LSTM and GRU exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
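Model skill in such rainfall-runoff calibration experiments is commonly scored with the Nash-Sutcliffe efficiency (NSE). The abstract does not name the metric used, so the following is a generic illustration with made-up discharge values:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations,
    and negative values mean it is worse."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative observed and simulated discharge (e.g. m3/s)
obs = [10.0, 12.0, 30.0, 22.0, 15.0]
sim = [11.0, 13.0, 27.0, 20.0, 14.0]
score = nse(obs, sim)
```

Plotting such a score against the number of calibration years is the kind of curve that reveals how quickly each model converges as data length grows.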
The potential of weather radar observations for hydrological and meteorological research and applications is undisputed, particularly with increasing worldwide radar coverage. However, several barriers impede the use of weather radar data. These barriers are of both a scientific and a technical nature: the former refers to inherent measurement errors and artefacts, the latter to aspects such as reading specific data formats, geo-referencing, and visualisation. The radar processing library wradlib is intended to lower these barriers by providing a free and open source tool for the most important steps in processing weather radar data for hydro-meteorological and hydrological applications. Moreover, the community-based development approach of wradlib allows scientists to share their knowledge about efficient processing algorithms and to make this knowledge available to the weather radar community in a transparent, structured and well-documented way.
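One of the processing steps such a library covers is converting reflectivity to rain rate via a Z-R relationship. The sketch below is a plain NumPy illustration of the classic Marshall-Palmer conversion, not a wradlib API call (wradlib offers corresponding routines of its own, e.g. in its `zr` and `trafo` modules):

```python
import numpy as np

def dbz_to_rainrate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) using the
    Marshall-Palmer Z-R relation Z = a * R**b."""
    z = 10.0 ** (np.asarray(dbz, float) / 10.0)  # dBZ -> linear Z (mm^6 / m^3)
    return (z / a) ** (1.0 / b)

# 30 dBZ corresponds to roughly 2.7 mm/h under Marshall-Palmer coefficients
rate = dbz_to_rainrate(30.0)
```

The coefficients a and b vary with rainfall type and climate, which is one reason calibration and quality control (the scientific barriers mentioned above) matter so much in practice.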
Cosmic-ray neutron sensing (CRNS) is a powerful technique for retrieving representative estimates of soil water content at a horizontal scale of hectometres (the "field scale") and depths of tens of centimetres (the "root zone"). This study demonstrates the potential of the CRNS technique to obtain spatio-temporal patterns of soil moisture beyond the integrated volume from isolated CRNS footprints. We use data from an observational campaign carried out between May and July 2019 that featured a dense network of more than 20 neutron detectors with partly overlapping footprints in an area that exhibits pronounced soil moisture gradients within one square kilometre. The present study is the first to combine these observations in order to represent the heterogeneity of soil water content at the sub-footprint scale as well as between the CRNS stations. First, we apply a state-of-the-art procedure to correct the observed neutron count rates for static effects (heterogeneity in space, e.g. soil organic matter) and dynamic effects (heterogeneity in time, e.g. barometric pressure). Based on the homogenized neutron data, we investigate the robustness of a calibration approach that uses a single calibration parameter across all CRNS stations. Finally, we benchmark two different interpolation techniques for obtaining spatio-temporal representations of soil moisture: first, ordinary kriging with a fixed range; second, spatial interpolation complemented by geophysical inversion ("constrained interpolation"). To that end, we optimize the parameters of a geostatistical interpolation model so that the error in the forward-simulated neutron count rates is minimized, and suggest a heuristic forward operator to make the optimization problem computationally feasible. Comparison with independent measurements from a cluster of soil moisture sensors (SoilNet) shows that the constrained interpolation approach is superior for representing horizontal soil moisture gradients at the hectometre scale. The study demonstrates how a CRNS network can be used to generate coherent, consistent, and continuous soil moisture patterns that could be used to validate hydrological models or remote sensing products.
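The conversion from a corrected neutron count rate to soil moisture typically uses the Desilets et al. (2010) shape-defining function, where the single calibration parameter N0 mentioned above is fitted per site. The sketch below uses the standard literature coefficients; the count rates, N0, and bulk density are hypothetical values for illustration only:

```python
import numpy as np

def neutron_to_theta(n, n0, bulk_density=1.4,
                     a0=0.0808, a1=0.372, a2=0.115):
    """Volumetric soil moisture (m3/m3) from a corrected neutron count
    rate n via the Desilets shape-defining function:
    theta_grav = a0 / (n / n0 - a1) - a2, scaled by soil bulk density.
    n0 is the site-specific count rate over dry soil (the calibration
    parameter); a0, a1, a2 are the standard literature coefficients."""
    theta_grav = a0 / (np.asarray(n, float) / n0 - a1) - a2
    return theta_grav * bulk_density

# Hypothetical counts: a lower count rate implies wetter soil, because
# soil water moderates (slows down) the epithermal neutrons being counted
theta = neutron_to_theta(n=900.0, n0=1500.0)
```

The strongly non-linear shape of this function is what makes the homogenization of count rates across detectors, and a robust shared N0, so important for the network-wide patterns the study derives.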