Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP primarily due to increasing temperature. We further analysed the contributions of the ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors. The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.
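The attribution of overall uncertainty to ensemble components can be illustrated with a minimal ANOVA-style variance decomposition over the three ensemble dimensions (climate model combination, adjustment method, hydrological parameter set). The array sizes match the abstract (8 × 2 × 25), but the synthetic numbers and the simple main-effect decomposition are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative projected changes (%), indexed by
# (climate model combination, adjustment method, hydrological parameter set).
changes = (
    rng.normal(10, 5, size=(8, 1, 1))    # climate-model signal (assumed large)
    + rng.normal(0, 3, size=(1, 2, 1))   # adjustment-method signal
    + rng.normal(0, 1, size=(1, 1, 25))  # parameter-set signal (assumed small)
)

def main_effect_variance(x, axis):
    """Variance of the marginal means along one ensemble dimension."""
    other = tuple(i for i in range(x.ndim) if i != axis)
    return float(np.var(x.mean(axis=other)))

contributions = [main_effect_variance(changes, a) for a in range(3)]
total = sum(contributions)
shares = [c / total for c in contributions]  # fractional contributions
```

The relative shares then indicate which ensemble dimension dominates the spread of the projected change signal.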
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGP) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods, and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general performance loss of −5 to −17%, on average, when calibrated parameters are transferred to independent validation periods. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that optimized parameter sets for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn) and vice versa.
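A split-sample test of this general kind can be sketched as follows: calibrate on each moving window, validate on every other window, and summarize the performance change. The one-parameter linear "model", the synthetic forcing, and the window bookkeeping are toy stand-ins for the authors' HBV setup and daily discharge records.

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe efficiency, a common goodness-of-fit score."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Toy forcing, observations and a one-parameter "model" (all assumed).
rng = np.random.default_rng(0)
forcing = rng.gamma(2.0, 2.0, size=20 * 365)
obs = 0.8 * forcing + rng.normal(0, 0.5, size=forcing.size)

def simulate(param, sl):
    return param * forcing[sl]

def calibrate(sl):
    # Least-squares fit of the single parameter on the calibration window.
    return np.sum(obs[sl] * forcing[sl]) / np.sum(forcing[sl] ** 2)

window = 6 * 365                     # 6-year (non-leap) moving window
starts = range(0, obs.size - window + 1, window)
losses = []
for c in starts:
    cal = slice(c, c + window)
    p = calibrate(cal)
    score_cal = nse(obs[cal], simulate(p, cal))
    for v in starts:
        if v == c:
            continue
        val = slice(v, v + window)
        losses.append(nse(obs[val], simulate(p, val)) - score_cal)

mean_loss = float(np.mean(losses))   # performance change, calibration -> validation
```

For this well-specified toy the transfer loss is close to zero; real catchments with changing flood generation show the larger losses reported in the paper.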
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
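The conceptual difference between the two indices can be sketched on a toy extremeness surface: WEI keeps the single most extreme scale combination, while xWEI integrates extremeness over the (log-)scale space, so events that are extreme at many scales score higher. The grid, units, and values below are assumed for illustration and do not reproduce the published computation.

```python
import numpy as np

# Illustrative extremeness surface over spatial scales (rows) and
# durations (columns); real values come from fitted extreme value models.
areas = np.array([10.0, 100.0, 1000.0, 10000.0])   # km^2 (assumed)
durations = np.array([1.0, 6.0, 24.0, 72.0])       # hours (assumed)
extremeness = np.array([
    [2.1, 2.8, 3.0, 2.4],
    [2.5, 3.4, 3.9, 3.1],
    [1.9, 2.9, 3.5, 2.8],
    [1.2, 2.0, 2.6, 2.2],
])

def trapz(y, x):
    """Trapezoidal integration along the last axis (kept local so the
    sketch does not depend on a specific NumPy version)."""
    dx = np.diff(np.asarray(x, dtype=float))
    return np.sum(dx * (y[..., :-1] + y[..., 1:]) / 2.0, axis=-1)

# WEI-style score: the single most extreme scale combination.
wei = float(extremeness.max())

# xWEI-style score: integrate extremeness across log-scales instead.
xwei = float(trapz(trapz(extremeness, np.log(durations)), np.log(areas)))
```

Two events with the same WEI can thus receive very different xWEI values, depending on how widely their extremeness extends across scales.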
The event runoff coefficient (φ_ev) is calculated from high-resolution discharge and precipitation data for several rain events with cumulative precipitation (P_cum) ranging from less than 5 mm to more than 80 mm. Because of the high uncertainty of φ_ev associated with the hydrograph separation method, φ_ev is calculated with several methods, including graphical methods, digital filters and a tracer-based method. The results indicate that the hydrological response depends on the mean initial soil moisture (θ̄_ini): during dry conditions φ_ev is consistently below 0.1, even for events with high and intense precipitation. Above a threshold of θ̄_ini = 34 vol% φ_ev can reach values up to 0.99, but there is high scatter. Some variability can be explained by a weak correlation of φ_ev with P_cum and rain intensity, but a considerable part of the variability remains unexplained. It is concluded that threshold-based methods can be helpful to prevent overestimation of the hydrological response during dry catchment conditions. The impact of soil moisture on the hydrological response during wet catchment conditions, however, is still insufficiently understood and cannot be generalized based on the present results.
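One digital filter commonly used for such hydrograph separation is the one-parameter Lyne–Hollick filter; a minimal single-pass sketch follows, with the filter parameter, discharge series, and rainfall depth as illustrative assumptions, not data from the study.

```python
import numpy as np

def lyne_hollick_quickflow(q, alpha=0.925):
    """One forward pass of the Lyne–Hollick digital baseflow filter;
    returns the quickflow (direct runoff) component, constrained to [0, q]."""
    qf = np.zeros_like(q, dtype=float)
    for i in range(1, len(q)):
        qf[i] = alpha * qf[i - 1] + 0.5 * (1 + alpha) * (q[i] - q[i - 1])
        qf[i] = min(max(qf[i], 0.0), q[i])
    return qf

# Toy event: hourly discharge expressed as mm/h over the catchment,
# and an assumed cumulative event rainfall depth.
q = np.array([0.1, 0.1, 0.5, 1.2, 0.9, 0.5, 0.3, 0.2, 0.15, 0.1])
p_cum = 12.0  # mm (illustrative)

quickflow_depth = float(lyne_hollick_quickflow(q).sum())  # mm, with 1 h steps
phi_ev = quickflow_depth / p_cum                          # event runoff coefficient
```

Different separation methods (graphical, filter, tracer-based) yield different quickflow volumes and hence different φ_ev, which is exactly the uncertainty the abstract addresses by comparing several methods.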
The presence of impermeable surfaces in urban areas hinders natural drainage and directs the surface runoff to storm drainage systems with finite capacity, which makes these areas prone to pluvial flooding. The occurrence of pluvial flooding depends on the existence of minimal areas for surface runoff generation and concentration. Detailed hydrologic and hydrodynamic simulations are computationally expensive and require intensive resources. This study compared and evaluated the performance of two simplified methods to identify urban pluvial flood-prone areas, namely the fill–spill–merge (FSM) method and the topographic wetness index (TWI) method, and used the TELEMAC-2D hydrodynamic numerical model for benchmarking and validation. The FSM method uses common GIS operations to identify flood-prone depressions from a high-resolution digital elevation model (DEM). The TWI method employs the maximum likelihood method (MLE) to probabilistically calibrate a TWI threshold (τ) based on the inundation maps from a 2D hydrodynamic model for a given spatial window (W) within the urban area. We found that the FSM method clearly outperforms the TWI method, both conceptually and in terms of model performance.
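The topographic wetness index itself is TWI = ln(a / tan β), with a the specific upslope contributing area and β the local slope. A minimal sketch of thresholding it follows; the grid values and the threshold τ are assumed for illustration, whereas in the study τ is calibrated probabilistically against hydrodynamic inundation maps.

```python
import numpy as np

# Toy per-cell specific upslope area a (m) and slope beta (degrees);
# real applications derive both from a high-resolution DEM.
a = np.array([[5.0, 20.0, 80.0],
              [10.0, 300.0, 40.0],
              [15.0, 1200.0, 60.0]])
beta = np.deg2rad([[8.0, 5.0, 3.0],
                   [6.0, 0.5, 4.0],
                   [7.0, 0.3, 5.0]])

twi = np.log(a / np.tan(beta))  # TWI = ln(a / tan(beta))

# Cells above the (here assumed) threshold tau are flagged flood-prone.
tau = 8.0
flood_prone = twi > tau
```

Flat cells with large contributing areas (small tan β, large a) receive the highest TWI and are the first to be flagged as the threshold is lowered.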
Identifying urban pluvial flood-prone areas is necessary, but the application of two-dimensional hydrodynamic models is limited to small areas. Data-driven models have shown their ability to map flood susceptibility, but their application to urban pluvial flooding is still rare. A flood inventory (4333 flooded locations) and 11 factors which potentially indicate an increased hazard for pluvial flooding were used to implement convolutional neural network (CNN), artificial neural network (ANN), random forest (RF) and support vector machine (SVM) models to: (1) map flood susceptibility in Berlin at 30, 10, 5, and 2 m spatial resolutions; (2) evaluate the trained models' transferability in space; (3) estimate the most useful factors for flood susceptibility mapping. The models' performance was validated using the Kappa statistic and the area under the receiver operating characteristic curve (AUC). The results indicated that all models perform very well (minimum AUC = 0.87 for the testing dataset). The RF models outperformed all other models at all spatial resolutions, and the RF model at 2 m spatial resolution was superior for the present flood inventory and predictor variables. The majority of the models had a moderate performance for predictions outside the training area based on Kappa evaluation (minimum AUC = 0.8). Aspect and altitude were the most influential factors for the image-based and point-based models, respectively. Data-driven models can be a reliable tool for urban pluvial flood susceptibility mapping wherever a reliable flood inventory is available.
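The AUC used for validation can be computed directly from susceptibility scores via the rank-sum (Mann–Whitney) identity; a minimal sketch with invented labels and scores follows (it assumes no tied scores).

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity
    (assumes no tied scores)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Illustrative susceptibility scores for flooded (1) / non-flooded (0) points.
y = np.array([0, 0, 1, 0, 1, 1, 0, 1])
s = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.3, 0.9])
auc_value = float(auc(y, s))
```

An AUC of 1 means every flooded point outscores every non-flooded point; 0.5 is no better than chance.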
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested as surrogates for computationally expensive hydrodynamic models to map flood hazards. However, most studies focused on developing models for the same area or the same precipitation event. It is thus not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm to predict flood water depth, the models' transferability in space, and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques to boost their performance outside the training domains than the RF models.
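The transfer learning idea, reusing representations learned on one domain and re-fitting only a small part of the model on another, can be sketched with a frozen random feature layer and a re-fitted linear output head. All shapes, data, and the tiny "network" below are illustrative assumptions, not the U-Net/RF setup of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Pretrained" hidden layer: in real transfer learning these weights are
# learned on the source domain; here they are random and kept frozen.
W_hidden = rng.normal(size=(16, 5))

def features(x):
    """Frozen feature extractor (nonlinear hidden layer)."""
    return np.tanh(x @ W_hidden.T)

# Small target-domain sample: illustrative predictors and "water depths".
x_target = rng.normal(size=(50, 5))
y_target = np.sin(x_target[:, 0]) + 0.1 * rng.normal(size=50)

# Transfer step: only the linear output head is re-fitted on the target data.
head, *_ = np.linalg.lstsq(features(x_target), y_target, rcond=None)
pred = features(x_target) @ head

fit_mse = float(np.mean((pred - y_target) ** 2))
baseline = float(np.mean(y_target ** 2))  # error of predicting zero everywhere
```

Re-fitting only the head needs far less target-domain data than retraining the whole model, which is why the CNN models in the study could benefit from such techniques outside their training domains.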
An Overview of Using Weather Radar for Climatological Studies: Successes, Challenges, and Potential
(2019)
Weather radars have been widely used to detect and quantify precipitation and nowcast severe weather for more than 50 years. Operational weather radars generate huge three-dimensional datasets that can accumulate to terabytes per day. It is therefore essential to review what can be done with the existing vast amounts of data, and how present datasets should be managed for future climatologists. All weather radars provide the reflectivity factor, and this is the main parameter to be archived. Saving reflectivity as volumetric data in the original spherical coordinates allows for studies of the three-dimensional structure of precipitation, which can be applied to understand a number of processes, for example, analyzing hail or thunderstorm modes. Doppler velocity and polarimetric moments also have numerous applications for climate studies, for example, quality improvement of reflectivity and rain rate retrievals, and for interrogating microphysical and dynamical processes. However, observational data alone are not useful if they are not accompanied by sufficient metadata. Since the lifetime of a radar ranges between 10 and 20 years, instruments are typically replaced or upgraded during climatologically relevant time periods. As a result, present metadata often do not apply to past data. This paper outlines the work of the Radar Task Team established by the Atmospheric Observation Panel for Climate (AOPC) and summarizes results from a recent survey on the existence and availability of long time series. We also provide recommendations for archiving current and future data and examples of climatological studies in which radar data have already been used.
In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students reveal benefits, such as better orientation in the study area, higher interactivity with the data, improved discourse among students and enhanced motivation through immersive 3D geovisualization. This suggests that immersive 3D visualization can effectively be used in higher education and that 3D CAVE settings enhance interactive learning between students.
Storm runoff from the Marikina River Basin frequently causes flood events in the Philippine capital region Metro Manila. This paper presents and evaluates a system to predict short-term runoff from the upper part of that basin (380 km²). It was designed as a possible component of an operational warning system yet to be installed. For the purpose of forecast verification, hindcasts of streamflow were generated for a period of 15 months with a time-continuous, conceptual hydrological model. The latter was fed with real-time observations of rainfall. Both ground observations and weather radar data were tested as rainfall forcings. The radar-based precipitation estimates clearly outperformed the raingauge-based estimates in the hydrological verification. Nevertheless, the quality of the deterministic short-term runoff forecasts was found to be limited. For the radar-based predictions, the reduction of variance for lead times of 1, 2 and 3 hours was 0.61, 0.62 and 0.54, respectively, with reference to a no-forecast scenario, i.e. persistence. The probability of detection for major increases in streamflow was typically less than 0.5. Given the significance of flood events in the Marikina Basin, more effort needs to be put into the reduction of forecast errors and the quantification of remaining uncertainties.
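The "reduction of variance" scores reported here measure skill against persistence, i.e. against simply repeating the last observed value. A minimal sketch of that verification measure follows; the sinusoidal "observations" and the forecast series are invented.

```python
import numpy as np

def reduction_of_variance(obs, fcst, lead):
    """1 - MSE(forecast) / MSE(persistence), where the persistence
    'forecast' repeats the observation available at issue time."""
    obs = np.asarray(obs, dtype=float)
    fcst = np.asarray(fcst, dtype=float)
    target = obs[lead:]                               # verifying observations
    mse_fcst = np.mean((target - fcst[lead:]) ** 2)   # model forecast error
    mse_pers = np.mean((target - obs[:-lead]) ** 2)   # persistence error
    return 1.0 - mse_fcst / mse_pers

# Invented observations (smooth signal plus noise) and a skilful forecast.
t = np.arange(200)
obs = np.sin(t / 10) + 0.05 * np.random.default_rng(1).normal(size=t.size)
fcst = np.sin(t / 10)
rv = float(reduction_of_variance(obs, fcst, lead=3))
```

A score of 1 would mean a perfect forecast; 0 means no improvement over persistence, which puts the reported values of 0.54 to 0.62 into perspective.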
Rainfall-induced attenuation is a major source of underestimation for radar-based precipitation estimation at C-band. Unconstrained gate-by-gate correction procedures are known to be inherently unstable and thus not suited for unsupervised attenuation correction. In this study, we evaluate three different procedures to constrain gate-by-gate attenuation correction using reflectivity as the only input. These procedures are benchmarked against rainfall estimates from uncorrected radar data, using six years of radar observations from the single-polarized C-band radar in South-West Germany. The precipitation estimation error is obtained by comparing the radar-based estimates to rain gauge observations. All attenuation correction procedures benchmarked in this study lead to an effective improvement of precipitation estimation. The first method caps the corrections if the rain intensity increase exceeds a factor of two. The second method decreases the parameters of the attenuation correction iteratively for each radar beam until a stability criterion is attained. The second method outperforms the first method and leads to a consistent distribution of path-integrated attenuation along the radar beam. As a third method, we propose a slight modification of Kraemer's approach which allows users to exert better control over attenuation correction by introducing an additional constraint that prevents implausible corrections in cases of dramatic signal losses.
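A capped gate-by-gate correction of the first kind can be sketched as follows. The k–Z coefficients are illustrative, and the cap is expressed in reflectivity: a factor of two in rain intensity corresponds to roughly 4.8 dB under a Marshall–Palmer Z–R relation. This is a sketch of the general scheme, not the exact published formulation.

```python
import numpy as np

def correct_attenuation(dbz, gate_km=1.0, a=1.67e-4, b=0.7, max_pia_db=4.8):
    """Gate-by-gate (Hitschfeld–Bordan style) attenuation correction along
    one beam, with the applied correction capped at max_pia_db so the
    inherently unstable scheme cannot run away."""
    pia = 0.0                              # two-way path-integrated attenuation (dB)
    out = np.empty(len(dbz), dtype=float)
    for i, z in enumerate(dbz):
        out[i] = z + min(pia, max_pia_db)  # apply (capped) correction
        # Specific attenuation from corrected linear reflectivity (dB/km);
        # the k-Z coefficients a, b are illustrative C-band values.
        k = a * (10.0 ** (out[i] / 10.0)) ** b
        pia += 2.0 * k * gate_km           # accumulate two-way attenuation
    return out

beam = np.array([30.0, 38.0, 45.0, 50.0, 48.0, 42.0])  # invented beam (dBZ)
corrected = correct_attenuation(beam)
```

Without the cap, the feedback between corrected reflectivity and estimated attenuation can amplify small errors without bound, which is the instability the constrained procedures address.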
Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual model parameters might conceal actual differences between the QPEs. To avoid such effects, we abandoned the idea of determining optimum parameter sets for each QPE being compared. Instead, we carry out a large number of runoff simulations, confronting each QPE with a common set of random parameters. By evaluating the goodness-of-fit of all simulations, we obtain information on whether the quality of competing QPE methods is significantly different. This knowledge is inferred exactly at the scale of interest: the catchment scale. We use synthetic data to investigate the ability of this procedure to distinguish a truly superior QPE from an inferior one. We find that the procedure is prone to failure in the case of linear systems. However, we show evidence that in realistic (nonlinear) settings, the method can provide useful results even in the presence of moderate errors in model structure and streamflow observations. In a real-world case study on a small mountainous catchment, we demonstrate the ability of the verification procedure to reveal additional insights as compared to a conventional cross-validation approach.
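The verification idea, confronting competing QPEs with a common set of random parameters instead of calibrating per product, can be sketched with a toy linear-reservoir model and synthetic rainfall. Everything below (the model, noise levels, parameter range) is an illustrative assumption, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

def linear_reservoir(rain, k):
    """Toy conceptual runoff model: a single linear reservoir."""
    s, q = 0.0, np.empty(rain.size)
    for i, p in enumerate(rain):
        s += p
        q[i] = s / k
        s -= q[i]
    return q

def nse(obs, sim):
    """Nash–Sutcliffe efficiency as the goodness-of-fit measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic "true" rainfall, two competing QPE products, and "observed" runoff.
true_rain = rng.gamma(0.3, 5.0, size=500)
qpe_good = true_rain * rng.lognormal(0.0, 0.05, size=500)  # nearly unbiased QPE
qpe_poor = true_rain * rng.lognormal(0.0, 0.50, size=500)  # strongly noisy QPE
obs = linear_reservoir(true_rain, k=8.0)

# Confront both QPEs with the SAME random parameter sets (no calibration).
params = rng.uniform(2.0, 20.0, size=200)
score_good = [nse(obs, linear_reservoir(qpe_good, k)) for k in params]
score_poor = [nse(obs, linear_reservoir(qpe_poor, k)) for k in params]
```

Comparing the two goodness-of-fit distributions then reveals which QPE is superior at the catchment scale, without calibration compensating for input errors.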