RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Evidently, RainNet had learned an optimal level of smoothing to produce a nowcast at a 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
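The recursive scheme described in the abstract – feeding 5 min predictions back in as input until the target lead time is reached – can be sketched generically. The `toy_model` below is a hypothetical stand-in for the trained network (not RainNet itself), chosen to mimic the cumulative smoothing effect that the study attributes to recursion:

```python
import numpy as np

def recursive_nowcast(model, radar_field, n_steps):
    """Apply a single-step (+5 min) nowcasting model recursively.

    model: callable mapping a 2-D precipitation field (mm/h) to the
           field 5 min ahead (stand-in for the trained network).
    radar_field: current radar composite (2-D array).
    n_steps: number of 5-min steps (12 steps -> 60 min lead time).
    Returns a list of predicted fields, one per lead time.
    """
    forecasts = []
    state = radar_field
    for _ in range(n_steps):
        state = model(state)   # the prediction becomes the next input
        forecasts.append(state)
    return forecasts

# Toy stand-in model: a 5-point moving average, mimicking the
# spatial smoothing that accumulates under recursive application.
def toy_model(field):
    padded = np.pad(field, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] +
            padded[1:-1, 1:-1]) / 5.0

field = np.zeros((8, 8))
field[4, 4] = 10.0  # isolated intense precipitation cell
preds = recursive_nowcast(toy_model, field, n_steps=12)
```

With each recursion the peak intensity of the isolated cell decays – the "numerical diffusion" analogue that makes heavy-rainfall features fade at longer lead times.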
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally so as to provide as much information as possible, as accurately as possible, while remaining cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis – a complex network being a collection of nodes interconnected by links – to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field that has gained significant momentum over the last few years in areas such as brain networks, social networks, technological networks, and climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure – the weighted degree–betweenness (WDB) measure – to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
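The general idea of ranking nodes by combining local connectivity (degree) with global path importance (betweenness) can be illustrated on a small synthetic network. Note that the exact WDB formula is defined in the paper; the score below is a simplified illustrative combination, not the published measure:

```python
import networkx as nx

def degree_betweenness_rank(G):
    """Rank nodes by a combined degree-betweenness score.

    Simplified stand-in for the WDB measure: it merges weighted
    degree with betweenness centrality, illustrating the general
    idea without reproducing the paper's exact definition.
    """
    deg = dict(G.degree(weight="weight"))
    btw = nx.betweenness_centrality(G, weight="weight")
    score = {v: deg[v] * (1.0 + btw[v]) for v in G}
    return sorted(score, key=score.get, reverse=True)

# Synthetic sample network: two triangles joined by a bridge
# node "b" (all node names are made up for the example).
G = nx.Graph()
G.add_weighted_edges_from([
    ("a1", "a2", 1.0), ("a2", "a3", 1.0), ("a1", "a3", 1.0),
    ("a1", "b", 1.0), ("b", "c1", 1.0),
    ("c1", "c2", 1.0), ("c2", "c3", 1.0), ("c1", "c3", 1.0),
])
ranking = degree_betweenness_rank(G)
```

The top-ranked nodes are the two cluster hubs and the bridge node – exactly the stations whose removal would most degrade network efficiency.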
Many institutions struggle to tap the potential of their large archives of radar reflectivity data: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the use of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference with respect to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is strongly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated whether the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although it was limited by the absence of explicit records of radar maintenance operations.
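The two core computations – a quality-weighted average of matched GR−SR differences, and a moving average over per-overpass bias estimates – can be sketched as follows. All values and variable names are illustrative, not taken from the study:

```python
import numpy as np

def quality_weighted_bias(gr_dbz, sr_dbz, quality):
    """Estimate GR calibration bias (dB) as the quality-weighted
    mean of matched GR - SR reflectivity differences; samples with
    low quality (e.g. beam-blocked or attenuated) get little weight.
    """
    diff = np.asarray(gr_dbz) - np.asarray(sr_dbz)
    w = np.asarray(quality, dtype=float)
    return float(np.sum(w * diff) / np.sum(w))

def moving_average_bias(biases, window=3):
    """Smooth a time series of per-overpass bias estimates with a
    centred moving average - a simple stand-in for the temporal
    interpolation discussed above - so observations between SR
    overpasses can still be corrected."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(biases, dtype=float), kernel, mode="same")

# Matched samples from one hypothetical overpass: the third sample
# is down-weighted (quality 0.1), so it barely shifts the estimate.
gr = [28.0, 30.0, 35.0, 40.0]
sr = [30.0, 32.0, 30.0, 42.0]
q = [1.0, 1.0, 0.1, 1.0]
bias = quality_weighted_bias(gr, sr, q)
smoothed = moving_average_bias([1.0, 1.0, 4.0, 1.0, 1.0])
```

Without the quality weighting, the outlying third sample would pull the bias estimate far from the −2 dB shared by the trustworthy samples.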
Introducing PebbleCounts
(2019)
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting and photo sieving, but these are achievable only at the 1–10 m² scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches to photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domains and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and the filtering of suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to 12 × 1 m² orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has a higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, at best only 70 % of the grains are correctly identified, and typically around 50 %. PebbleCounts operates most effectively at the 1 m² patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 m² areas.
These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize the results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
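The k-means step at the heart of PebbleCounts clusters pixels jointly on position and color. A greatly simplified sketch of that idea is shown below (a minimal Lloyd's-algorithm k-means on a toy image; the actual PebbleCounts pipeline adds grain masking, windowing, and manual selection, none of which are reproduced here):

```python
import numpy as np

def kmeans(feats, k, n_iter=20):
    """Minimal k-means (Lloyd's algorithm), deterministically seeded
    with evenly spaced feature vectors; for illustration only."""
    idx = np.linspace(0, len(feats) - 1, k).astype(int)
    centers = feats[idx].copy()
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels

def spatial_spectral_labels(image, k, spatial_weight=0.1):
    """Cluster pixels on combined spatial (x, y) and spectral (RGB)
    features - the core idea of the k-means step described above.
    image: (H, W, 3) array with values in [0, 1]."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalise coordinates so spatial and spectral scales are comparable;
    # spatial_weight tunes how much position matters relative to color.
    feats = np.column_stack([
        spatial_weight * yy.ravel() / h,
        spatial_weight * xx.ravel() / w,
        image.reshape(-1, 3),
    ])
    return kmeans(feats, k).reshape(h, w)

# Toy image: dark left half, bright right half separates cleanly
# in the spectral dimension.
img = np.zeros((10, 10, 3))
img[:, 5:] = 1.0
labels = spatial_spectral_labels(img, k=2)
```

On real riverbed imagery the spatial features keep clusters compact, so a single well-delineated grain tends to fall into one cluster.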
The interactions between the atmosphere and steep topography in the eastern south–central Andes result in complex relations and inhomogeneous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall, and their spatial patterns – both at the valley and mountain-belt scales – are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients of the NW Argentine Andes, stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF's (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients in dew-point temperature and CAPE along and across the strike of the Andes, both of which control rainfall distributions. Second, through a multivariable regression analysis, we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE. The correlation changes in space along the climatic and topographic gradients and helps to explain the controlling factors of extreme-rainfall generation. Third, for 90th percentile rainfall, we observe a larger contribution (i.e., higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas than on the high-elevation central Andean Plateau. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation areas between low and high elevations, especially in the transition zone between the tropical and subtropical areas.
Fourth, we find that the parameters of the multivariable regression using CAPE and Td explain rainfall with higher statistical significance for the 90th percentile than for lower rainfall percentiles. Based on our results, the spatial pattern of extreme rainfall events during the past ∼16 years can be described by a combination of dew-point temperature and CAPE in the south–central Andes.
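The basic form of such a multivariable regression – rainfall regressed on CAPE and Td – can be sketched with a least-squares fit. The coefficients, units, and synthetic data below are purely illustrative, not the study's fitted values:

```python
import numpy as np

def fit_rainfall_regression(cape, td, rainfall):
    """Least-squares fit of rainfall against CAPE and dew-point
    temperature: rainfall ~ b0 + b1 * CAPE + b2 * Td.
    Returns (intercept, beta_cape, beta_td)."""
    X = np.column_stack([np.ones_like(cape), cape, td])
    coeffs, *_ = np.linalg.lstsq(X, rainfall, rcond=None)
    return coeffs

# Synthetic grid cells with a known linear relation plus noise.
rng = np.random.default_rng(0)
cape = rng.uniform(0, 3000, 200)   # J kg^-1 (illustrative range)
td = rng.uniform(5, 25, 200)       # deg C (illustrative range)
rain = 2.0 + 0.004 * cape + 0.5 * td + rng.normal(0, 0.5, 200)
intercept, b_cape, b_td = fit_rainfall_regression(cape, td, rain)
```

Fitting such a model separately for each grid cell, and comparing the coefficients, is one way to quantify the spatially varying relative importance of CAPE and Td that the study describes.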
The Arctic-Boreal regions experience strong changes in air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Although optical remote sensing offers a variety of techniques to assess and monitor landscape changes, persistent cloud cover considerably decreases the number of usable images. However, combining data from multiple platforms promises to increase the number of images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility of using both Landsat and Sentinel-2 images together in time series analyses to achieve temporally dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel pairs of corresponding bands, downscaled to 60 m, and derived an ordinary least squares regression for every band combination. The acquired coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons already showed an overall good fit between Landsat-8 and Sentinel-2 images. The ordinary least squares regression analyses underline the generally good spectral fit, with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after spectral bandpass adjustment of the Sentinel-2 values to Landsat-8 shows a nearly perfect alignment between the same-day images. The spectral band adjustment thus succeeds very well in adjusting Sentinel-2 spectral values to Landsat-8 in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be combined into temporally dense time series and applied to assess permafrost landscape changes in Eastern Siberia.
Remaining differences between the sensors can be attributed to several factors, including heterogeneous terrain, poor cloud and cloud shadow masking, and mixed pixels.
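The per-band adjustment described here is a simple linear mapping: fit an ordinary least squares line from Sentinel-2 to Landsat-8 reflectance using same-day pixel pairs, then apply it to all Sentinel-2 values. The synthetic coefficients below are made up for illustration and are not the paper's fitted values:

```python
import numpy as np

def fit_bandpass_adjustment(s2_band, l8_band):
    """Fit an OLS line mapping Sentinel-2 reflectance to Landsat-8
    reflectance for one band. Returns (slope, intercept)."""
    slope, intercept = np.polyfit(s2_band, l8_band, deg=1)
    return slope, intercept

def adjust(s2_band, slope, intercept):
    """Apply the fitted bandpass adjustment to Sentinel-2 values."""
    return slope * np.asarray(s2_band) + intercept

# Synthetic same-day pixel pairs for one band: L8 differs from S2
# by a linear transform plus sensor noise.
rng = np.random.default_rng(1)
s2 = rng.uniform(0.0, 0.4, 500)                       # S2 reflectance
l8 = 0.85 * s2 + 0.01 + rng.normal(0, 0.005, 500)     # matched L8 pixels
slope, intercept = fit_bandpass_adjustment(s2, l8)
s2_adj = adjust(s2, slope, intercept)
```

After adjustment the residual difference between `s2_adj` and `l8` reduces to the sensor noise, which is the "nearly perfect alignment" the study reports for its same-day images.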
Hydrometeorological hazards caused losses of approximately 110 billion U.S. dollars worldwide in 2016. Current damage estimations do not consider uncertainties in a comprehensive way, and they are not consistent between spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach to object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated against, company damage from the 2013 flood in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales. Therefore, it can also be used for pre-event analyses and risk assessments. This method takes hydrometeorological damage estimation and risk assessment to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
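A sketch of why such a probabilistic object-based approach is "fully scalable in space": if the damage of each object is represented by Monte Carlo samples, aggregation to any spatial level is just a sum of samples, and the uncertainty propagates automatically. The loss distribution, parameters, and exposure values below are hypothetical placeholders, not the study's fitted model:

```python
import numpy as np

def sample_object_damage(exposure_value, alpha, beta, rng, n=10000):
    """Draw Monte Carlo samples of absolute damage for one object:
    relative loss ~ Beta(alpha, beta), scaled by the object's
    exposure value. The Beta distribution is an illustrative
    stand-in for whatever loss model is fitted to damage data."""
    rel_loss = rng.beta(alpha, beta, size=n)
    return exposure_value * rel_loss

rng = np.random.default_rng(42)
# Three hypothetical companies: (exposure value in EUR, alpha, beta).
objects = [(2e6, 2.0, 8.0), (5e5, 1.0, 9.0), (1e7, 0.5, 19.5)]
samples = [sample_object_damage(v, a, b, rng) for v, a, b in objects]

# Aggregation at any spatial scale is a sample-wise sum, so the
# same machinery yields a full damage distribution for a street,
# a municipality, or a country.
total = np.sum(samples, axis=0)
low, median, high = np.percentile(total, [5, 50, 95])
```

The 5th–95th percentile interval then communicates the estimation uncertainty alongside the point estimate, at whatever aggregation level is requested.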
The Sea of Marmara, in northwestern Turkey, is a transition zone where the dextral North Anatolian Fault zone (NAFZ) propagates westward from the Anatolian Plate to the Aegean Sea Plate. The area is of interest in the context of the seismic hazard of Istanbul, a metropolitan area with about 15 million inhabitants. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the crustal heterogeneities is still missing. To assess if and how crustal heterogeneities are related to the NAFZ segmentation below the Sea of Marmara, we develop new crustal-scale 3-D density models that integrate geological and seismological data and are additionally constrained by 3-D gravity modeling. For the latter, we use two different gravity datasets: global satellite data and local marine gravity observations. Considering the two different datasets and the general non-uniqueness of potential field modeling, we suggest three possible "end-member" solutions that are all consistent with the observed gravity field and illustrate the spectrum of possible solutions. These models indicate that the observed gravitational anomalies originate from significant density heterogeneities within the crust. Two layers of sediments, one syn-kinematic and one pre-kinematic with respect to the formation of the Sea of Marmara, are underlain by a heterogeneous crystalline crust. A felsic upper crystalline crust (average density of 2720 kg m⁻³) and an intermediate to mafic lower crystalline crust (average density of 2890 kg m⁻³) appear to be cross-cut by two large, dome-shaped mafic high-density bodies (densities of 2890 to 3150 kg m⁻³) of considerable thickness above a rather uniform lithospheric mantle (3300 kg m⁻³).
The spatial correlation between two major bends of the main Marmara fault and the locations of the high-density bodies suggests that the distribution of lithological heterogeneities within the crust controls the rheological behavior along the NAFZ and, consequently, may influence fault segmentation and thus the seismic hazard in the region.
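A standard first-order check of why such high-density bodies are resolvable in gravity data is the infinite-slab (Bouguer) approximation, Δg = 2πGΔρh. Using the density contrast between the quoted mafic-body and felsic upper-crust averages, with an illustrative thickness (the actual body geometry is the 3-D model's, not a slab):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_anomaly_mgal(delta_rho, thickness_m):
    """Gravity anomaly of an infinite horizontal slab with density
    contrast delta_rho (kg m^-3) and the given thickness, in mGal
    (1 mGal = 1e-5 m s^-2)."""
    return 2 * np.pi * G * delta_rho * thickness_m / 1e-5

# Contrast between a mafic body (~3150 kg m^-3) and the felsic
# upper crust (~2720 kg m^-3); a 5 km thickness is assumed purely
# for illustration.
dg = bouguer_anomaly_mgal(3150 - 2720, 5000)
```

The result is on the order of 90 mGal, i.e. well above the precision of both satellite and marine gravity observations, which is why such bodies leave a clear signature in the observed field.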
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño–Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically act at multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between SST time series at a range of scales and thereby allows deeper insights into the correlation patterns than traditional methods such as empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of – at a certain timescale – similarly evolving SSTs and to distinguish them from regions with long-range teleconnections to other ocean areas. Our findings reconfirm accepted knowledge about known highly linked SST patterns such as ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections such as the connection between ENSO and the Indian Ocean Dipole.
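The basic construction of such a climate network – nodes are grid cells, links connect cells whose time series co-vary strongly – can be sketched with plain Pearson correlation at a single scale. The study itself uses wavelet multi-scale correlation, i.e. it repeats this construction per timescale; the single-scale version below only illustrates the network-building step:

```python
import numpy as np

def correlation_network(series, threshold=0.6):
    """Build a climate-network adjacency matrix by thresholding
    pairwise correlations between grid-cell time series.

    series: (n_cells, n_times) array of SST anomalies.
    Returns a boolean (n_cells, n_cells) adjacency matrix.
    """
    corr = np.corrcoef(series)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)  # no self-links
    return adj

# Toy SST anomalies: cells 0-2 share one oscillation ("ENSO-like"),
# cells 3-4 share another; noise keeps correlations imperfect.
rng = np.random.default_rng(7)
t = np.arange(300)
mode1 = np.sin(2 * np.pi * t / 48)
mode2 = np.cos(2 * np.pi * t / 30)
cells = np.array([mode1, mode1, mode1, mode2, mode2])
cells = cells + rng.normal(0, 0.3, (5, 300))
adj = correlation_network(cells)
degree = adj.sum(axis=1)  # link count per cell
```

Cells driven by the same mode form a densely linked region, while links between cells in different regions would correspond to the long-range teleconnections of interest; in the multi-scale variant, each timescale yields its own adjacency matrix.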