Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP primarily due to increasing temperature. We further analysed the contributions of the ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors. The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.
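The attribution of ensemble spread to its components can be sketched with a simple ANOVA-style decomposition: vary one component while averaging over the others. Everything below (array shape, numbers) is a hypothetical illustration of the idea, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical ensemble of projected changes (e.g. in flood timing, days),
# indexed as [climate model combination, adjustment method, parameter set]:
# 8 GCM/RCM combinations x 2 adjustment methods x 25 parameter sets.
changes = rng.normal(loc=20.0, scale=5.0, size=(8, 2, 25))

def component_variance(arr, axis):
    """Variance of the component means, averaging over all other axes
    (a simple ANOVA-style decomposition of the ensemble spread)."""
    other = tuple(i for i in range(arr.ndim) if i != axis)
    return float(np.var(arr.mean(axis=other)))

contributions = {
    "climate models": component_variance(changes, 0),
    "adjustment methods": component_variance(changes, 1),
    "parameter sets": component_variance(changes, 2),
}
total = sum(contributions.values())
for name, var in contributions.items():
    print(f"{name}: {100 * var / total:.1f}% of the explained spread")
```

Interaction terms are ignored here; a full decomposition would also quantify them.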
This paper investigates the transferability of calibrated HBV model parameters under stable and contrasting conditions in terms of flood seasonality and flood generating processes (FGP) in five Norwegian catchments with mixed snowmelt/rainfall regimes. We apply a series of generalized (differential) split-sample tests using a 6-year moving window over (i) the entire runoff observation periods, and (ii) two subsets of runoff observations distinguished by the seasonal occurrence of annual maximum floods during either spring or autumn. The results indicate a general model performance loss due to the transfer of calibrated parameters to independent validation periods of −5 to −17%, on average. However, there is no indication that contrasting flood seasonality exacerbates performance losses, which contradicts the assumption that optimized parameter sets for snowmelt-dominated floods (during spring) perform particularly poorly on validation periods with rainfall-dominated floods (during autumn) and vice versa.
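The moving-window bookkeeping behind such a split-sample test can be sketched as follows, with the Nash–Sutcliffe efficiency (NSE) as the performance measure. The synthetic series and the fixed "model" are stand-ins only; in the study, HBV runs calibrated on each window take the place of `sim`.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean benchmark."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# Synthetic daily "runoff" observations and an imperfect "model" run.
rng = np.random.default_rng(0)
obs = 10.0 + 5.0 * np.sin(np.linspace(0.0, 40.0 * np.pi, 7300)) + rng.normal(0, 1, 7300)
sim = obs + rng.normal(0, 2, 7300)

window = 6 * 365  # 6-year moving window, as in the paper
losses = []
for start in range(0, len(obs) - 2 * window, 365):
    cal = slice(start, start + window)               # calibration period
    val = slice(start + window, start + 2 * window)  # independent validation period
    losses.append(nse(obs[val], sim[val]) - nse(obs[cal], sim[cal]))

print(f"mean NSE change from calibration to validation: {np.mean(losses):+.3f}")
```

With a real model, the loop would recalibrate on each `cal` window before scoring on `val`.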
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
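The core distinction between the two indices can be shown on a toy "extremeness" surface: a WEI-like index picks the single most extreme scale combination, while an xWEI-like index aggregates across scales. The grid and values below are made up, and a plain mean stands in for the paper's actual integration scheme.

```python
import numpy as np

# Hypothetical "extremeness" surface E(area, duration): one value per
# combination of spatial scale and duration (e.g. derived from return
# periods of areal precipitation).
areas = np.logspace(1, 5, 40)    # km^2
durations = np.arange(1, 73)     # hours
rng = np.random.default_rng(1)
E = rng.gamma(2.0, 1.0, size=(areas.size, durations.size))

wei_like = E.max()    # WEI: extremeness at the single most extreme scale
xwei_like = E.mean()  # xWEI: aggregate extremeness across all scales
print(wei_like, xwei_like)
```

Two events with the same maximum can thus differ strongly in the aggregated index if one is extreme across many scales and the other at only one.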
φ_ev is calculated from high-resolution discharge and precipitation data for several rain events with a cumulative precipitation P_cum ranging from less than 5 mm to more than 80 mm. Because of the high uncertainty of φ_ev associated with the hydrograph separation method, φ_ev is calculated with several methods, including graphical methods, digital filters and a tracer-based method. The results indicate that the hydrological response depends on θ̄_ini: during dry conditions φ_ev is consistently below 0.1, even for events with high and intense precipitation. Above a threshold of θ̄_ini = 34 vol % φ_ev can reach values up to 0.99, but there is a high scatter. Some variability can be explained by a weak correlation of φ_ev with P_cum and rain intensity, but a considerable part of the variability remains unexplained. It is concluded that threshold-based methods can be helpful to prevent overestimation of the hydrological response during dry catchment conditions. The impact of soil moisture on the hydrological response during wet catchment conditions, however, is still insufficiently understood and cannot be generalized based on the present results.
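One of the filter-based separation methods mentioned above can be sketched with the widely used Lyne–Hollick digital filter; the event data and the filter parameter below are hypothetical, and the single pass is a simplification (real applications typically run several passes).

```python
import numpy as np

def lyne_hollick_quickflow(q, alpha=0.925):
    """One-pass Lyne-Hollick digital filter, one of the common filter-based
    baseflow separation methods (real applications often run several passes)."""
    qf = np.zeros_like(q)
    for t in range(1, len(q)):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        qf[t] = min(max(qf[t], 0.0), q[t])  # constrain quickflow to [0, Q]
    return qf

# Hypothetical event: hourly discharge (in mm/h over the catchment area)
# and the event precipitation sum P_cum in mm.
q = np.array([0.1, 0.1, 0.4, 1.2, 1.8, 1.1, 0.6, 0.3, 0.2, 0.15])
p_cum = 12.0

quickflow = lyne_hollick_quickflow(q)
phi_ev = quickflow.sum() / p_cum  # event runoff coefficient
print(f"phi_ev = {phi_ev:.2f}")
```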
The presence of impermeable surfaces in urban areas hinders natural drainage and directs the surface runoff to storm drainage systems with finite capacity, which makes these areas prone to pluvial flooding. The occurrence of pluvial flooding depends on the existence of minimal areas for surface runoff generation and concentration. Detailed hydrologic and hydrodynamic simulations are computationally expensive and require intensive resources. This study compared and evaluated the performance of two simplified methods to identify urban pluvial flood-prone areas, namely the fill–spill–merge (FSM) method and the topographic wetness index (TWI) method, and used the TELEMAC-2D hydrodynamic numerical model for benchmarking and validation. The FSM method uses common GIS operations to identify flood-prone depressions from a high-resolution digital elevation model (DEM). The TWI method employs the maximum likelihood method (MLE) to probabilistically calibrate a TWI threshold (τ) based on the inundation maps from a 2D hydrodynamic model for a given spatial window (W) within the urban area. We found that the FSM method clearly outperforms the TWI method, both conceptually and in terms of model performance.
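The TWI itself is a simple function of the specific upslope contributing area and the local slope, TWI = ln(a / tan β); cells whose TWI exceeds the calibrated threshold τ are flagged as flood-prone. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def twi(upslope_area, slope_rad):
    """Topographic wetness index TWI = ln(a / tan(beta)), with a the
    specific upslope contributing area and beta the local slope."""
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # avoid division by zero
    return np.log(np.maximum(upslope_area, 1e-6) / tan_beta)

# Hypothetical cells: a flat cell draining a large area scores much higher
# than a steep cell with little contributing area.
flat_cell = twi(5000.0, np.radians(0.5))
steep_cell = twi(10.0, np.radians(15.0))
print(flat_cell, steep_cell)
# Cells with TWI above the calibrated threshold tau are flagged flood-prone.
```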
Identifying urban pluvial flood-prone areas is necessary but the application of two-dimensional hydrodynamic models is limited to small areas. Data-driven models have been showing their ability to map flood susceptibility but their application in urban pluvial flooding is still rare. A flood inventory (4333 flooded locations) and 11 factors which potentially indicate an increased hazard for pluvial flooding were used to implement convolutional neural network (CNN), artificial neural network (ANN), random forest (RF) and support vector machine (SVM) models to: (1) Map flood susceptibility in Berlin at 30, 10, 5, and 2 m spatial resolutions. (2) Evaluate the trained models' transferability in space. (3) Estimate the most useful factors for flood susceptibility mapping. The models' performance was validated using the Kappa statistic and the area under the receiver operating characteristic curve (AUC). The results indicated that all models perform very well (minimum AUC = 0.87 for the testing dataset). The RF models outperformed all other models at all spatial resolutions, and the RF model at 2 m spatial resolution was superior for the present flood inventory and predictor variables. The majority of the models had a moderate performance for predictions outside the training area based on the Kappa evaluation (minimum AUC = 0.8). Aspect and altitude were the most influential factors for the image-based and point-based models, respectively. Data-driven models can be a reliable tool for urban pluvial flood susceptibility mapping wherever a reliable flood inventory is available.
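The AUC used for validation can be computed without plotting a ROC curve via its rank-statistic (Mann–Whitney) form; the labels and susceptibility scores below are hypothetical.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic
    (equivalent to the usual AUC for untied continuous scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical susceptibility scores at flooded (1) / dry (0) locations:
labels = np.array([0, 0, 1, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.9])
print(f"AUC = {auc(labels, scores):.2f}")
```

An AUC of 0.5 corresponds to random guessing; 1.0 to perfect separation of flooded and dry locations.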
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have been recently suggested to surrogate computationally expensive hydrodynamic models to map flood hazards. However, most studies focused on developing models for the same area or the same precipitation event. It is thus not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm to predict flood water depth, the models' transferability in space, and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques to boost their performance outside the training domains than the RF models.
An Overview of Using Weather Radar for Climatological Studies: Successes, Challenges, and Potential
(2019)
Weather radars have been widely used to detect and quantify precipitation and nowcast severe weather for more than 50 years. Operational weather radars generate huge three-dimensional datasets that can accumulate to terabytes per day, so it is essential to review what can be done with the existing vast amounts of data and how present datasets should be managed for future climatologists. All weather radars provide the reflectivity factor, and this is the main parameter to be archived. Saving reflectivity as volumetric data in the original spherical coordinates allows for studies of the three-dimensional structure of precipitation, which can be applied to understand a number of processes, for example, analyzing hail or thunderstorm modes. Doppler velocity and polarimetric moments also have numerous applications for climate studies, for example, quality improvement of reflectivity and rain rate retrievals, and for interrogating microphysical and dynamical processes. However, observational data alone are not useful if they are not accompanied by sufficient metadata. Since the lifetime of a radar ranges between 10 and 20 years, instruments are typically replaced or upgraded during climatologically relevant time periods. As a result, present metadata often do not apply to past data. This paper outlines the work of the Radar Task Team set up by the Atmospheric Observation Panel for Climate (AOPC) and summarizes results from a recent survey on the existence and availability of long time series. We also provide recommendations for archiving current and future data and examples of climatological studies in which radar data have already been used.
In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students reveal benefits, such as better orientation in the study area, higher interactivity with the data, improved discourse among students and enhanced motivation through immersive 3D geovisualization. This suggests that immersive 3D visualization can effectively be used in higher education and that 3D CAVE settings enhance interactive learning between students.
Storm runoff from the Marikina River Basin frequently causes flood events in the Philippine capital region Metro Manila. This paper presents and evaluates a system to predict short-term runoff from the upper part of that basin (380 km²). It was designed as a possible component of an operational warning system yet to be installed. For the purpose of forecast verification, hindcasts of streamflow were generated for a period of 15 months with a time-continuous, conceptual hydrological model. The latter was fed with real-time observations of rainfall. Both ground observations and weather radar data were tested as rainfall forcings. The radar-based precipitation estimates clearly outperformed the raingauge-based estimates in the hydrological verification. Nevertheless, the quality of the deterministic short-term runoff forecasts was found to be limited. For the radar-based predictions, the reduction of variance for lead times of 1, 2 and 3 hours was 0.61, 0.62 and 0.54, respectively, with reference to a no-forecast scenario, i.e. persistence. The probability of detection for major increases in streamflow was typically less than 0.5. Given the significance of flood events in the Marikina Basin, more effort needs to be put into the reduction of forecast errors and the quantification of remaining uncertainties.
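The "reduction of variance" quoted above compares the forecast's mean squared error to that of the persistence reference. A minimal sketch with made-up numbers:

```python
import numpy as np

def reduction_of_variance(obs, forecast, reference):
    """RV = 1 - MSE(forecast) / MSE(reference); the reference here is a
    persistence forecast, i.e. the latest observed value."""
    mse_f = np.mean((obs - forecast) ** 2)
    mse_r = np.mean((obs - reference) ** 2)
    return 1.0 - mse_f / mse_r

# Hypothetical hourly streamflow; at a 1-hour lead time, persistence
# simply issues the previous hour's observation.
flow = np.array([10.0, 12.0, 15.0, 22.0, 30.0, 26.0, 20.0, 17.0])
target = flow[1:]
persistence = flow[:-1]
model = target + np.array([1.0, -2.0, 3.0, -2.0, 1.0, -1.0, 2.0])  # made-up model errors

rv = reduction_of_variance(target, model, persistence)
print(f"RV = {rv:.2f}")
```

RV = 1 means a perfect forecast; RV = 0 means no skill beyond persistence.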
Rainfall-induced attenuation is a major source of underestimation for radar-based precipitation estimation at C-band. Unconstrained gate-by-gate correction procedures are known to be inherently unstable and thus not suited for unsupervised attenuation correction. In this study, we evaluate three different procedures to constrain gate-by-gate attenuation correction using reflectivity as the only input. These procedures are benchmarked against rainfall estimates from uncorrected radar data, using six years of radar observations from the single-polarized C-band radar in South-West Germany. The precipitation estimation error is obtained by comparing the radar-based estimates to rain gauge observations. All attenuation correction procedures benchmarked in this study lead to an effective improvement of precipitation estimation. The first method caps the corrections if the increase in rain intensity exceeds a factor of two. The second method iteratively decreases the parameters of the attenuation correction for every radar beam until a stability criterion is attained. The second method outperforms the first and leads to a consistent distribution of path-integrated attenuation along the radar beam. As a third method, we propose a slight modification of Kraemer's approach which allows users to exert better control over attenuation correction by introducing an additional constraint that prevents implausible corrections in cases of dramatic signal losses.
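Unconstrained gate-by-gate correction diverges because each corrected gate feeds back into the estimated attenuation of the gates behind it; a cap is the simplest stabilizer. The sketch below uses a hard limit on the total path-integrated attenuation as a stand-in for the constraints evaluated in the study; the coefficients are illustrative only.

```python
import numpy as np

def correct_attenuation(dbz, a=2e-4, b=0.7, gate_km=1.0, max_pia=10.0):
    """Forward gate-by-gate attenuation correction: estimate the specific
    attenuation k = a * Z^b (dB/km) from the corrected reflectivity and
    accumulate the two-way path-integrated attenuation (PIA). The hard
    PIA cap stands in for the stability constraints discussed above."""
    pia = 0.0
    corrected = np.empty_like(dbz)
    for i, raw in enumerate(dbz):
        corrected[i] = raw + min(pia, max_pia)  # add the loss accumulated so far
        z = 10.0 ** (corrected[i] / 10.0)       # dBZ -> linear reflectivity
        pia += 2.0 * a * z ** b * gate_km       # two-way loss over this gate
    return corrected

ray = np.array([35.0, 42.0, 48.0, 45.0, 38.0])  # dBZ along one hypothetical beam
corrected_ray = correct_attenuation(ray)
print(corrected_ray - ray)  # correction grows with range
```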
Hydrologic modelers often need to know which method of quantitative precipitation estimation (QPE) is best suited for a particular catchment. Traditionally, QPE methods are verified and benchmarked against independent rain gauge observations. However, the lack of spatial representativeness limits the value of such a procedure. Alternatively, one could drive a hydrological model with different QPE products and choose the one which best reproduces observed runoff. Unfortunately, the calibration of conceptual model parameters might conceal actual differences between the QPEs. To avoid such effects, we abandoned the idea of determining optimum parameter sets for all QPE being compared. Instead, we carry out a large number of runoff simulations, confronting each QPE with a common set of random parameters. By evaluating the goodness-of-fit of all simulations, we obtain information on whether the quality of competing QPE methods is significantly different. This knowledge is inferred exactly at the scale of interest-the catchment scale. We use synthetic data to investigate the ability of this procedure to distinguish a truly superior QPE from an inferior one. We find that the procedure is prone to failure in the case of linear systems. However, we show evidence that in realistic (nonlinear) settings, the method can provide useful results even in the presence of moderate errors in model structure and streamflow observations. In a real-world case study on a small mountainous catchment, we demonstrate the ability of the verification procedure to reveal additional insights as compared to a conventional cross validation approach.
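The idea of confronting competing QPEs with a common set of random parameters, instead of calibrating for each, can be sketched with a toy linear-reservoir model; everything below (model, error magnitudes, parameter ranges) is synthetic and only illustrates the comparison logic.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_runoff(precip, params):
    """Toy linear-reservoir model standing in for the hydrological model."""
    k, c = params
    q = np.zeros_like(precip)
    for t in range(1, len(precip)):
        q[t] = (1 - 1 / k) * q[t - 1] + c * precip[t]
    return q

def nse(obs, sim):
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

truth = rng.gamma(1.5, 2.0, 500)                 # "true" areal rainfall
obs = simulate_runoff(truth, (5.0, 0.4))         # synthetic runoff observations
qpe_good = truth * rng.lognormal(0.0, 0.1, 500)  # QPE with small errors
qpe_poor = truth * rng.lognormal(0.0, 0.5, 500)  # QPE with large errors

# Confront each QPE with the SAME random parameter sets - no calibration.
params = [(rng.uniform(2, 10), rng.uniform(0.1, 0.8)) for _ in range(200)]

def scores(qpe):
    return [nse(obs, simulate_runoff(qpe, p)) for p in params]

med_good = float(np.median(scores(qpe_good)))
med_poor = float(np.median(scores(qpe_poor)))
print(med_good, med_poor)  # the better QPE should score higher overall
```

Because both QPEs face identical parameter error, a systematic difference in the score distributions points to the quality of the forcing rather than of the calibration.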
The potential of weather radar observations for hydrological and meteorological research and applications is undisputed, particularly with increasing world-wide radar coverage. However, several barriers impede the use of weather radar data. These barriers are of both scientific and technical nature. The former refers to inherent measurement errors and artefacts, the latter to aspects such as reading specific data formats, geo-referencing, visualisation. The radar processing library wradlib is intended to lower these barriers by providing a free and open source tool for the most important steps in processing weather radar data for hydro-meteorological and hydrological applications. Moreover, the community-based development approach of wradlib allows scientists to share their knowledge about efficient processing algorithms and to make this knowledge available to the weather radar community in a transparent, structured and well-documented way.
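A typical step in such a processing chain is the conversion from reflectivity to rain rate. The sketch below reimplements it in plain NumPy with the common Marshall–Palmer coefficients; wradlib itself ships equivalent routines (e.g. in its `zr` and `trafo` modules), so this is only an illustration of the kind of step the library covers.

```python
import numpy as np

def idecibel(dbz):
    """Convert reflectivity from dBZ to linear units (mm^6 m^-3)."""
    return 10.0 ** (np.asarray(dbz) / 10.0)

def z_to_r(z, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R^b (Marshall-Palmer coefficients
    by default) to obtain the rain rate R in mm/h."""
    return (z / a) ** (1.0 / b)

dbz = np.array([20.0, 30.0, 40.0, 50.0])
rain_rates = z_to_r(idecibel(dbz))
print(rain_rates)  # rain rate grows sharply with reflectivity
```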
Cosmic-ray neutron sensing (CRNS) is a powerful technique for retrieving representative estimates of soil water content at a horizontal scale of hectometres (the “field scale”) and depths of tens of centimetres (“the root zone”). This study demonstrates the potential of the CRNS technique to obtain spatio-temporal patterns of soil moisture beyond the integrated volume from isolated CRNS footprints. We use data from an observational campaign carried out between May and July 2019 that featured a dense network of more than 20 neutron detectors with partly overlapping footprints in an area that exhibits pronounced soil moisture gradients within one square kilometre. The present study is the first to combine these observations in order to represent the heterogeneity of soil water content at the sub-footprint scale as well as between the CRNS stations. First, we apply a state-of-the-art procedure to correct the observed neutron count rates for static effects (heterogeneity in space, e.g. soil organic matter) and dynamic effects (heterogeneity in time, e.g. barometric pressure). Based on the homogenized neutron data, we investigate the robustness of a calibration approach that uses a single calibration parameter across all CRNS stations. Finally, we benchmark two different interpolation techniques for obtaining spatio-temporal representations of soil moisture: first, ordinary Kriging with a fixed range; second, spatial interpolation complemented by geophysical inversion (“constrained interpolation”). To that end, we optimize the parameters of a geostatistical interpolation model so that the error in the forward-simulated neutron count rates is minimized, and suggest a heuristic forward operator to make the optimization problem computationally feasible. 
Comparison with independent measurements from a cluster of soil moisture sensors (SoilNet) shows that the constrained interpolation approach is superior for representing horizontal soil moisture gradients at the hectometre scale. The study demonstrates how a CRNS network can be used to generate coherent, consistent, and continuous soil moisture patterns that could be used to validate hydrological models or remote sensing products.
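For context, the conversion from corrected neutron counts to water content commonly follows the transfer function of Desilets et al. (2010); the single calibration parameter mentioned above corresponds to N0 in this formulation. The count rates below are hypothetical.

```python
def neutron_to_swc(n, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Shape of the standard neutron-to-water-content transfer function
    (Desilets et al., 2010): theta = a0 / (N / N0 - a1) - a2, where N is
    the corrected neutron count rate and N0 the rate over dry soil."""
    return a0 / (n / n0 - a1) - a2

# Lower count rates indicate wetter soil (N0 and the counts are hypothetical):
for n in (950, 800, 650):
    print(n, round(neutron_to_swc(n, n0=1000), 3))
```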
Cosmic-ray neutron sensing (CRNS) allows for the estimation of root-zone soil water content (SWC) at the scale of several hectares. In this paper, we present the data recorded by a dense CRNS network operated from 2019 to 2022 at an agricultural research site in Marquardt, Germany - the first multi-year CRNS cluster. Consisting, at its core, of eight permanently installed CRNS sensors, the cluster was supplemented by a wealth of complementary measurements: data from seven additional temporary CRNS sensors, partly co-located with the permanent ones; 27 SWC profiles (mostly permanent); two groundwater observation wells; meteorological records; and Global Navigation Satellite System reflectometry (GNSS-R). Complementary to these continuous measurements, numerous campaign-based activities provided data by mobile CRNS roving, hyperspectral imagery via UASs, intensive manual sampling of soil properties (SWC, bulk density, organic matter, texture, soil hydraulic properties), and observations of biomass and snow (cover, depth, and density). The unique temporal coverage of 3 years entails a broad spectrum of hydro-meteorological conditions, including exceptional drought periods and extreme rainfall but also episodes of snow coverage, as well as a dedicated irrigation experiment. Apart from serving to advance CRNS-related retrieval methods, this data set is expected to be useful for various disciplines, for example, soil and groundwater hydrology, agriculture, or remote sensing. Hence, we show exemplary features of the data set in order to highlight the potential for such subsequent studies. The data are available at doi.org/10.23728/b2share.551095325d74431881185fba1eb09c95 (Heistermann et al., 2022b).
In a study from 2008, Lariviere and colleagues showed, for the field of natural sciences and engineering, that the median age of cited references is increasing over time. This result was considered counterintuitive: with the advent of electronic search engines, online journal issues and open access publications, one could have expected that cited literature is becoming younger. That study motivated us to take a closer look at the changes in the age distribution of references cited in water resources journals since 1965. We could not only confirm the findings of Lariviere and colleagues, but also show that the aging is mainly happening in the oldest 10-25% of an average reference list. This is consistent with our analysis of top-cited papers in the field of water resources. Rankings based on total citations since 1965 consistently show the dominance of old literature, including textbooks and research papers in equal shares. For most top-cited old-timers, citations are still growing exponentially. There is strong evidence that most citations are attracted by publications that introduced methods which meanwhile belong to the standard toolset of researchers and practitioners in the field of water resources. Although we think that this trend should not be overinterpreted as a sign of stagnancy, there might be cause for concern regarding how authors select their references. We question the increasing citation of textbook knowledge, as it holds the risk that reference lists become overcrowded and that the readability of papers deteriorates.
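The core computation behind such a reference-age analysis can be sketched with pandas. The data below are invented purely to show the mechanics; the upper-quartile statistic only loosely mirrors the "oldest 10-25% of the reference list" finding.

```python
import pandas as pd

# One row per (citing paper, cited reference) pair: hypothetical toy data
refs = pd.DataFrame({
    "pub_year": [1970, 1970, 1970, 2015, 2015, 2015, 2015],
    "ref_year": [1965, 1960, 1955, 2010, 1995, 1980, 1960],
})
# Age of each cited reference at the time of citation
refs["ref_age"] = refs["pub_year"] - refs["ref_year"]

# Median and upper-quartile reference age per publication year; aging
# that concentrates in the old tail shows up in the 0.75 quantile
summary = refs.groupby("pub_year")["ref_age"].quantile([0.5, 0.75]).unstack()
```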
From 6 to 9 August 2012, intense rainfall hit the northern Philippines, causing massive floods in Metropolitan Manila and nearby regions. Local rain gauges recorded almost 1000 mm within this period. However, the recently installed Philippine network of weather radars suggests that Metropolitan Manila might have escaped a potentially bigger flood just by a whisker, since the centre of mass of accumulated rainfall was located over Manila Bay. A shift of this centre by no more than 20 km could have resulted in a flood disaster far worse than what occurred during Typhoon Ketsana in September 2009.
In a recent BAMS article, it is argued that community-based Open Source Software (OSS) could foster scientific progress in weather radar research, and make weather radar software more affordable, flexible, transparent, sustainable, and interoperable.
Nevertheless, it can be challenging for potential developers and users to realize these benefits: tools are often cumbersome to install; different operating systems may have particular issues, or may not be supported at all; and many tools have steep learning curves.
To overcome some of these barriers, we present an open, community-based virtual machine (VM). This VM can be run on any operating system, and guarantees reproducibility of results across platforms. It contains a suite of independent OSS weather radar tools (BALTRAD, Py-ART, wradlib, RSL, and Radx), and a scientific Python stack. Furthermore, it features a suite of recipes that work out of the box and provide guidance on how to use the different OSS tools alone and together. The code to build the VM from source is hosted on GitHub, which allows the VM to grow with its community.
We argue that the VM presents another step toward Open (Weather Radar) Science. It can be used as a quick way to get started, for teaching, or for benchmarking and combining different tools. It can foster the idea of reproducible research in scientific publishing. Being scalable and extendable, it might even allow for real-time data processing.
We expect the VM to catalyze progress toward interoperability, and to lower the barrier for new users and developers, thus extending the weather radar community and user base.
Weather radar analysis has become increasingly sophisticated over the past 50 years, and efforts to keep software up to date have generally lagged behind the needs of the users. We argue that progress has been impeded by the fact that software has not been developed and shared as a community.
Recently, the situation has been changing. In this paper, the developers of a number of open-source software (OSS) projects highlight the potential of OSS to advance radar-related research. We argue that the community-based development of OSS holds the potential to reduce duplication of efforts and to create transparency in implemented algorithms while improving the quality and scope of the software. We also conclude that there is sufficiently mature technology to support collaboration across different software projects. This could allow for consolidation toward a set of interoperable software platforms, each designed to accommodate very specific user requirements.
Cosmic-ray neutron sensing (CRNS) has become an effective method to measure soil moisture at a horizontal scale of hundreds of metres and a depth of decimetres. Recent studies proposed operating CRNS in a network with overlapping footprints in order to cover root-zone water dynamics at the small catchment scale and, at the same time, to represent spatial heterogeneity. In a joint field campaign from September to November 2020 (JFC-2020), five German research institutions deployed 15 CRNS sensors in the 0.4 km2 Wüstebach catchment (Eifel mountains, Germany). The catchment is dominantly forested (but includes a substantial fraction of open vegetation) and features a topographically distinct catchment boundary. In addition to the dense CRNS coverage, the campaign featured a unique combination of additional instruments and techniques: hydro-gravimetry (to detect water storage dynamics also below the root zone); ground-based and, for the first time, airborne CRNS roving; an extensive wireless soil sensor network, supplemented by manual measurements; and six weighable lysimeters. Together with comprehensive data from the long-term local research infrastructure, the published data set (available at https://doi.org/10.23728/b2share.756ca0485800474e9dc7f5949c63b872; Heistermann et al., 2022) will be a valuable asset in various research contexts: to advance the retrieval of landscape water storage from CRNS, wireless soil sensor networks, or hydrogravimetry; to identify scale-specific combinations of sensors and methods to represent soil moisture variability; to improve the understanding and simulation of land–atmosphere exchange as well as hydrological and hydrogeological processes at the hillslope and the catchment scale; and to support the retrieval of soil water content from airborne and spaceborne remote sensing platforms.
In 2009, a group of prominent Earth scientists introduced the "planetary boundaries" (PB) framework: they suggested nine global control variables, and defined corresponding "thresholds which, if crossed, could generate unacceptable environmental change". The concept builds on systems theory, and views Earth as a complex adaptive system in which anthropogenic disturbances may trigger non-linear, abrupt, and irreversible changes at the global scale, and "push the Earth system outside the stable environmental state of the Holocene". While the idea has been remarkably successful in both science and policy circles, it has also raised fundamental concerns, as the majority of suggested processes and their corresponding planetary boundaries do not operate at the global scale, and thus apparently lack the potential to trigger abrupt planetary changes.
This paper picks up the debate with specific regard to the planetary boundary on "global freshwater use". While the bio-physical impacts of excessive water consumption are typically confined to the river basin scale, the PB proponents argue that water-induced environmental disasters could build up to planetary-scale feedbacks and system failures. So far, however, no evidence has been presented to corroborate that hypothesis. Furthermore, no coherent approach has been presented to what extent a planetary threshold value could reflect the risk of regional environmental disaster. To be sure, the PB framework was revised in 2015, extending the planetary freshwater boundary with a set of basin-level boundaries inferred from environmental water flow assumptions. Yet, no new evidence was presented, either with respect to the ability of those basin-level boundaries to reflect the risk of regional regime shifts or with respect to a potential mechanism linking river basins to the planetary scale.
So while the idea of a planetary boundary on freshwater use appears intriguing, the line of arguments presented so far remains speculative and implicatory. As long as Earth system science does not present compelling evidence, the exercise of assigning actual numbers to such a boundary is arbitrary, premature, and misleading. Taken as a basis for water-related policy and management decisions, though, the idea transforms from misleading to dangerous, as it implies that we can globally offset water-related environmental impacts. A planetary boundary on freshwater use should thus be disapproved and actively refuted by the hydrological and water resources community.
Cosmic-ray neutron sensing (CRNS) is a non-invasive tool for measuring hydrogen pools such as soil moisture, snow or vegetation. The intrinsic integration over a radial hectare-scale footprint is a clear advantage for averaging out small-scale heterogeneity, but on the other hand the data may become hard to interpret in complex terrain with patchy land use.
This study presents a directional shielding approach that blocks neutrons arriving from certain directions while counting those entering the detector from others, and explores its potential to gain a sharper horizontal view of the surrounding soil moisture distribution.
Using the Monte Carlo code URANOS (Ultra Rapid Neutron-Only Simulation), we modelled the effect of additional polyethylene shields on the horizontal field of view and assessed its impact on the epithermal count rate, propagated uncertainties and aggregation time.
The results demonstrate that directional CRNS measurements are strongly dominated by isotropic neutron transport, which dilutes the signal of the targeted direction especially from the far field. For typical count rates of customary CRNS stations, directional shielding of half-spaces could not lead to acceptable precision at a daily time resolution. However, the mere statistical distinction of two rates should be feasible.
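The precision argument rests on Poisson counting statistics: the relative uncertainty of a neutron count scales with 1/sqrt(N), so shielding, which cuts the count rate, directly lengthens the aggregation time needed for a given precision. A minimal sketch, with all rates and thresholds hypothetical:

```python
import math

def relative_uncertainty(rate_cph, hours):
    """Relative (1-sigma) uncertainty of a Poisson-distributed count
    after aggregating for `hours` at `rate_cph` counts per hour."""
    n = rate_cph * hours
    return 1.0 / math.sqrt(n)

def rates_distinguishable(rate1_cph, rate2_cph, hours, z=1.96):
    """Rough two-sample test: can two count rates be told apart at
    ~95% confidence after `hours` of aggregation? The variance of the
    count difference is the sum of the two Poisson variances."""
    n1, n2 = rate1_cph * hours, rate2_cph * hours
    sigma_diff = math.sqrt(n1 + n2)
    return abs(n1 - n2) > z * sigma_diff
```

For example, halving the count rate by shielding a half-space increases the aggregation time needed for the same precision by a factor of two, while merely deciding that two shielded sectors differ requires far fewer counts than estimating each rate precisely.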
Deriving soil moisture content (SMC) at the regional scale under varying spatial and temporal land cover is still a challenge for active and passive remote sensing systems, and one that is often addressed with machine learning methods.
So far, the reference measurements of the data-driven approaches are usually based on point data, which entails a scale gap to the resolution of the remote sensing data. Cosmic Ray Neutron Sensing (CRNS) indirectly provides SMC estimates of a soil volume covering more than 1 ha and vertical depth up to 80 cm and is thus able to narrow this scale gap.
To date, the CRNS-based SMC has only been used as a validation source for remote sensing based SMC products. Its beneficial large sensing volume, especially in depth, has not been exploited yet.
However, the sensing volume of the CRNS, which changes with hydrological conditions, poses challenges for the comparison with remote sensing observations. This study, for the first time, aims to understand the direct linkage of optical (Sentinel 2) and SAR (Sentinel 1) data with CRNS-based SMC.
Thereby, the CRNS-based SMC is obtained by an experimental CRNS cluster that covers the high temporal and spatial SMC variability of an entire pre-alpine subcatchment. Using different Random Forest regressions, we analyze the potentials and limitations of both remote sensing sensors to follow the CRNS-based SMC signal.
Our results show that it is possible to link the CRNS-based SMC signal with SAR and optical remote sensing observations via Random Forest modelling.
We found that Sentinel 2 data is able to separate wet from dry periods with an R2 of 0.68.
It is less affected by the changing soil volume that contributes to the CRNS-based SMC signal and it is able to assign a land cover specific SMC distribution.
However, Sentinel 2 regression models are not accurate (R2 < 0.21) in mapping the CRNS-based SMC for the frequently mowed grassland areas of the study site. Soil type and topographical information are required to accurately follow the CRNS-based SMC signal with Random Forest regression.
Sentinel 1 data, in contrast, is affected by the changing soil volume that contributes to the CRNS-based SMC signal. It shows reasonable model performance (R2 = 0.34) when the CRNS data correspond to surface SMC. The Sentinel 1 retrieval is likewise impacted by the mowing activities at the test site.
When separating the CRNS data set into dry and wet periods, soil properties and topography are the main drivers of SMC estimation. Sentinel 1 or Sentinel 2 data add the existing temporal variability to the regression models. The analysis underlines the need of combining optical and SAR observations (Sentinel 1, Sentinel 2) as well as soil property and topographical information to understand and follow the CRNS-based SMC signal for different hydrological conditions and land cover types.
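The Random Forest setup described above can be sketched with scikit-learn. Everything below is synthetic and hypothetical: the predictors (a Sentinel-1 backscatter, a Sentinel-2 vegetation index, plus static soil and terrain descriptors), the target, and all parameter values are stand-ins to illustrate the workflow, not the study's data or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
# Hypothetical predictors per CRNS footprint and date
X = np.column_stack([
    rng.normal(-12, 3, n),      # S1 VV backscatter (dB)
    rng.uniform(0.2, 0.9, n),   # S2 vegetation index (e.g. NDVI)
    rng.uniform(0.1, 0.5, n),   # clay fraction (soil property)
    rng.uniform(400, 900, n),   # elevation (m, topography)
])
# Synthetic "CRNS-based SMC" target with noise, for illustration only
y = 0.02 * X[:, 0] + 0.2 * X[:, 2] + 0.3 + rng.normal(0, 0.02, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
# Cross-validated R2, analogous to the skill scores reported above
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
model.fit(X, y)
# Feature importances indicate which predictors drive the regression
importance = dict(zip(["VV", "VI", "clay", "elev"],
                      model.feature_importances_))
```

In practice one would split the samples by wet and dry periods and by land cover before fitting, mirroring the stratified analysis in the study.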
We explore the potential of spaceborne radar (SR) observations from the Ku-band precipitation radars onboard the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites as a reference to quantify the ground radar (GR) reflectivity bias. To this end, the 3-D volume-matching algorithm proposed by Schwaller and Morris (2011) is implemented and applied to 5 years (2012–2016) of observations. We further extend the procedure by a framework to take into account the data quality of each ground radar bin. Through these methods, we are able to assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching GR–SR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction as the basis of a quality index. As a result, we can increase the consistency of SR and GR observations, and thus the precision of calibration bias estimates. The remaining scatter between GR and SR reflectivity as well as the variability of bias estimates between overpass events indicate, however, that other error sources are not yet fully addressed. Still, our study provides a framework to introduce any other quality variables that are considered relevant in a specific context. The code that implements our analysis is based on the wradlib open-source software library, and is, together with the data, publicly available to monitor radar calibration or to scrutinize long series of archived radar data back to December 1997, when TRMM became operational.
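The quality-weighted bias estimate described here reduces, in essence, to a weighted mean of matched reflectivity differences. A minimal sketch, using beam blockage fraction as the sole quality variable (as in the example above); the function name and the linear quality mapping are assumptions for illustration:

```python
import numpy as np

def quality_weighted_bias(gr_dbz, sr_dbz, beam_blockage_frac):
    """Estimate the GR calibration bias (dB) as a quality-weighted
    mean of matched GR-SR reflectivity differences. Quality is derived
    here from the beam blockage fraction alone (1 = fully unblocked)."""
    quality = np.clip(1.0 - np.asarray(beam_blockage_frac), 0.0, 1.0)
    diff = np.asarray(gr_dbz) - np.asarray(sr_dbz)
    # fully blocked matches (quality 0) drop out of the average
    return np.sum(quality * diff) / np.sum(quality)
```

Any other quality variable (or a product of several) could be substituted for the blockage-based weight without changing the estimator's form.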
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
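Interpolating irregular per-overpass bias estimates to a continuous daily correction, as investigated above, can be sketched with pandas. The bias values and window length below are hypothetical; the study's actual smoothing parameters are not reproduced here.

```python
import pandas as pd

# Hypothetical calibration bias estimates (dB) at irregular SR overpasses
bias = pd.Series(
    [1.2, 0.9, 1.4, -0.3, -0.5],
    index=pd.to_datetime(["2014-01-03", "2014-01-19", "2014-02-02",
                          "2014-03-15", "2014-03-28"]),
)

# Resample to a daily grid, fill gaps by time-weighted interpolation,
# then apply a centred moving average so a correction is available
# even on days without a prompt SR overpass
daily = bias.resample("D").mean().interpolate("time")
smooth = daily.rolling(window=15, center=True, min_periods=1).mean()
```

A drawback noted in the study applies here too: a moving average smears over abrupt bias changes, e.g. after undocumented radar maintenance.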
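The quality-weighted bias estimate described above can be sketched as follows. This is a minimal illustration of the weighting idea only; the function and variable names are illustrative and not taken from the study or from any radar library:

```python
import numpy as np

def calibration_bias(gr_dbz, sr_dbz, quality):
    """Quality-weighted mean reflectivity difference (GR - SR) in dB.

    Each matched SR/GR sample contributes according to its quality
    index (0..1), so samples degraded by partial beam blockage or
    path-integrated attenuation are down-weighted or excluded.
    """
    gr = np.asarray(gr_dbz, dtype=float)
    sr = np.asarray(sr_dbz, dtype=float)
    q = np.asarray(quality, dtype=float)
    return float(np.sum(q * (gr - sr)) / np.sum(q))

# toy example: two well-observed samples with a +1 dB bias, plus one
# blocked sample (quality 0) whose spurious -5 dB difference is ignored
bias = calibration_bias([31.0, 26.0, 20.0], [30.0, 25.0, 25.0], [1.0, 1.0, 0.0])
```

With uniform quality weights, the estimate reduces to the plain mean difference; the weighting only matters where data quality varies across the matched samples.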
In precipitation nowcasting, it is common to track the motion of precipitation in a sequence of weather radar images and to extrapolate this motion into the future. The total error of such a prediction consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed isolating the extent of location errors, making it difficult to specifically improve nowcast models with regard to location prediction. In this paper, we introduce a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time Δt ahead of the forecast time t corresponds to the Euclidean distance between the observed and the predicted feature locations at t + Δt. Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the German Weather Service. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion from t - 1 to t (LK-Lin1) and from t - 4 to t (LK-Lin4); the other two are based on the Dense Inverse Search (DIS) method, whose motion vectors are used to predict feature locations by linear (DIS-Lin1) and semi-Lagrangian extrapolation (DIS-Rot1). Of these four models, DIS-Lin1 and LK-Lin4 turned out to be the most skillful with regard to the prediction of feature location, while we also found that model skill depends strongly on the sinuosity of the observed tracks.
The dataset of 376,125 detected feature tracks in 2016 is openly available to foster the improvement of location prediction in extrapolation-based nowcasting models.
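The location-error metric defined above (the Euclidean distance between observed and predicted feature locations at each lead time), together with a simple linear extrapolation of corner motion, could be sketched as follows. Function names and array layouts are illustrative, not from the study's code:

```python
import numpy as np

def location_error(observed, predicted):
    """Euclidean distance between observed and predicted feature
    locations per lead time (rows = lead times, columns = x, y)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)

def linear_extrapolate(p_prev, p_now, n_steps):
    """Predict future positions by continuing the last observed
    displacement (the idea behind an LK-Lin1-type model)."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_now = np.asarray(p_now, dtype=float)
    v = p_now - p_prev  # motion per time step
    return np.array([p_now + (k + 1) * v for k in range(n_steps)])

# toy track: feature moved from (0, 0) to (1, 0); predict 2 steps ahead
predicted = linear_extrapolate((0.0, 0.0), (1.0, 0.0), 2)
errors = location_error([[2.0, 0.0], [3.0, 1.0]], predicted)
```

In the toy example, the first observed position lies exactly on the extrapolated path (error 0), while the second has drifted one unit off it (error 1), mimicking how errors grow with lead time for sinuous tracks.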
Two lines of research are combined in this study: first, the development of tools for the temporal disaggregation of precipitation, and second, newer results on the exponential scaling of heavy short-term precipitation with temperature, roughly following the Clausius-Clapeyron (CC) relation. The traditional disaggregation schemes, which carry no explicit temperature dependence, are shown to lack the crucial CC-type scaling. The authors introduce a proof-of-concept adjustment of an existing disaggregation tool, the multiplicative cascade model of Olsson, and show that, in principle, it is possible to include temperature dependence in the disaggregation step, resulting in a fairly realistic temperature dependence of the CC type. They conclude by outlining the main calibration steps necessary to develop a full-fledged CC disaggregation scheme and discuss possible applications.
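The CC-type scaling referred to above can be illustrated with the textbook rate of roughly 7% intensity increase per degree Celsius; the rate and the reference temperature below are generic assumptions for illustration, not values calibrated in the study:

```python
def cc_scaled_intensity(p_ref, t, t_ref=10.0, rate=0.07):
    """Clausius-Clapeyron-type exponential scaling of short-term
    precipitation intensity with temperature: each degree above the
    reference temperature multiplies the intensity by (1 + rate).
    rate=0.07 is the commonly cited ~7 %/°C CC rate (assumed here)."""
    return p_ref * (1.0 + rate) ** (t - t_ref)

# one degree above the reference: intensity rises by 7 %
scaled = cc_scaled_intensity(10.0, 11.0)
```

A temperature-aware disaggregation scheme in this spirit would modulate the cascade weights so that sub-daily intensities reproduce such a dependence, rather than being drawn independently of temperature.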
Our subject is a new catalogue of radar-based heavy rainfall events (CatRaRE) over Germany and how it relates to the concurrent atmospheric circulation. We classify daily ERA5 fields of convective indices according to CatRaRE, using an array of 13 statistical methods, consisting of 4 conventional (“shallow”) and 9 more recent deep machine learning (DL) algorithms; the classifiers are then applied to corresponding fields of simulated present and future atmospheres from the Coordinated Regional Climate Downscaling Experiment (CORDEX) project. The inherent uncertainty of the DL results, which arises from the stochastic nature of their optimization, is addressed by employing an ensemble approach with 20 runs for each network. The shallow random forest method performs best with an equitable threat score (ETS) of around 0.52, followed by the DL networks ALL-CNN and ResNet with an ETS near 0.48. Their success can be understood as a result of conceptual simplicity and parametric parsimony, which evidently best fits the relatively simple classification task. It is found that, on summer days, CatRaRE convective atmospheres over Germany occur with a probability of about 0.5. This probability is projected to increase, regardless of method, in both ERA5-reanalyzed and CORDEX-simulated atmospheres: for the historical period we find a centennial increase of about 0.2, and for the future period one of slightly below 0.1.
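The equitable threat score used to rank the classifiers is a standard verification measure for dichotomous forecasts: the threat score corrected for hits expected by random chance. A minimal sketch of its computation from a 2×2 contingency table (generic formula, not code from the study):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score) from contingency-table counts.

    hits_random is the number of hits a random forecast with the same
    event frequencies would achieve; subtracting it makes the score
    'equitable' (0 = no skill over chance, 1 = perfect)."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# toy contingency table for a day-wise event classification
ets = equitable_threat_score(hits=50, misses=20, false_alarms=30,
                             correct_negatives=100)
```

Because hits_random grows with the base rate of the event, ETS is more robust than the plain threat score when comparing classifiers on climates with different event frequencies, which matters when applying the same classifier to reanalysis and CORDEX simulations.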
High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
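The parametric alternative described above can be sketched as follows: a generalized Pareto distribution, with the location parameter fixed at zero, fitted by the method of L-moments in Hosking's parameterization. This is the generic textbook estimator, not necessarily the study's exact implementation:

```python
import numpy as np

def gpd_quantile(p, kappa, alpha):
    """GPD quantile function (Hosking parameterization, location 0):
    x(p) = (alpha / kappa) * (1 - (1 - p)**kappa)."""
    return (alpha / kappa) * (1.0 - (1.0 - p) ** kappa)

def gpd_lmom_fit(sample):
    """Fit kappa and alpha from the first two sample L-moments.

    Note the contrast with plotting positions: with n observations the
    largest empirical non-exceedance probability (Weibull formula
    p_i = i / (n + 1)) is n / (n + 1), i.e. return periods beyond the
    sample size are not representable empirically, whereas the fitted
    GPD can be evaluated at any p < 1."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    b0 = x.mean()                                   # zeroth PWM
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # first PWM
    l1, l2 = b0, 2.0 * b1 - b0                      # sample L-moments
    kappa = l1 / l2 - 2.0
    alpha = l1 * (1.0 + kappa)
    return kappa, alpha

# sanity check on a near-perfect sample drawn from a known GPD
p_grid = (np.arange(1, 1001) - 0.5) / 1000.0
sample = gpd_quantile(p_grid, 0.1, 1.0)
kappa_hat, alpha_hat = gpd_lmom_fit(sample)
```

L-moment estimates are linear in the ordered data, which makes them far less sensitive to the few largest values than maximum likelihood in small samples; this is one reason the parametric quantiles keep rising with temperature where order-statistics estimates level off.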
In recent years, urban and rural flash floods in Europe and beyond have gained considerable attention because of their sudden occurrence, severe material damage, and even danger to the lives of inhabitants. This contribution addresses the question of whether changing environmental conditions might have altered the occurrence frequencies of such events and their consequences. We analyze the following major fields of environmental change:
- Altered high-intensity rain storm conditions as a consequence of regional warming;
- Possibly altered runoff generation conditions in response to high-intensity rainfall events;
- Possibly altered runoff concentration conditions in response to the usage and management of the landscape, such as agricultural and forest practices or rural roads;
- Effects of engineering measures in the catchment, such as retention basins, check dams, culverts, or river and geomorphological engineering measures.
We take the flash flood in Braunsbach, SW Germany, as an example, where a particularly severe flash flood event occurred at the end of May 2016. This extreme cascading natural event led to immense damage in this particular village. The event is retrospectively analyzed with regard to meteorology, hydrology, geomorphology and damage in order to obtain a quantitative assessment of the processes and their development.
The results show that it was a very rare rainfall event with extreme intensities, which, in combination with catchment properties and altered environmental conditions, led to extreme runoff, extreme debris flow and immense damage. Because of the complex and interacting processes, no single cause of the flood can be identified; only their interplay led to such an event. We have shown that environmental changes are important, but, at least for this case study, even natural weather and hydrologic conditions alone would still have resulted in an extreme flash flood event.
Flood generation in mountainous headwater catchments is governed by rainfall intensities, by the spatial distribution of rainfall, and by the state of the catchment prior to the rainfall, e.g. by the spatial patterns of soil moisture, groundwater conditions and possibly snow. The work presented here explores the limits and potentials of measuring soil moisture with different methods and at different scales, and their potential use for flood simulation. These measurements were obtained in 2007 and 2008 within a comprehensive multi-scale experiment in the Weisseritz headwater catchment in the Ore Mountains, Germany. The following technologies were applied jointly: the thermogravimetric method, frequency domain reflectometry (FDR) sensors, a spatial time domain reflectometry (STDR) cluster, ground-penetrating radar (GPR), airborne polarimetric synthetic aperture radar (polarimetric SAR), and advanced synthetic aperture radar (ASAR) aboard the Envisat satellite. We present exemplary soil moisture measurement results, with spatial scales ranging from the point scale, via the hillslope and field scales, to the catchment scale. Only the spatial TDR cluster was able to record continuous data; the other methods are limited to the dates of overflights (airplane and satellite) or of measurement campaigns on the ground. For possible use in flood simulation, the observation of soil moisture at multiple scales has to be combined with suitable hydrological modelling, here using the hydrological model WaSiM-ETH. Therefore, several simulation experiments were conducted in order to test both the usability of the recorded soil moisture data and the suitability of a distributed hydrological model to make use of this information. The measurement results show that airborne-based and satellite-based systems in particular provide information on the near-surface spatial distribution of soil moisture.
However, there are still a variety of limitations, such as the need for parallel ground measurements (Envisat ASAR), uncertainties in polarimetric decomposition techniques (polarimetric SAR), the very limited information remote sensing methods provide about vegetated surfaces, and the non-availability of continuous measurements. The model experiments showed the importance of soil moisture as an initial condition for physically based flood modelling. However, the observed moisture data reflect the surface or near-surface soil moisture only. Hence, only saturated overland flow might be related to these data; other flood generation processes influenced by catchment wetness in the subsurface, such as subsurface storm flow or quick groundwater drainage, cannot be assessed with these data. One has to acknowledge that, in spite of innovative measuring techniques at all spatial scales, soil moisture data for entire vegetated catchments are still not operationally available today. Therefore, observations of soil moisture should primarily be used to improve the quality of continuous, distributed hydrological catchment models that simulate the spatial distribution of moisture internally. Thus, when and where soil moisture data are available, they should be compared with their simulated equivalents in order to improve the parameter estimates and possibly the structure of the hydrological model.