RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
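The recursive scheme described above is easy to sketch: each 5 min prediction is appended to the input queue and the model is applied again until the target lead time is reached. A minimal numpy sketch, with Eulerian persistence standing in for the trained network (the four-frame input interface is an assumption for illustration):

```python
import numpy as np

def recursive_nowcast(model, frames, n_steps):
    """Reach longer lead times by feeding each 5 min prediction back in
    as model input (the recursive approach described above)."""
    frames = list(frames)
    preds = []
    for _ in range(n_steps):
        nxt = model(np.stack(frames[-4:]))  # model maps recent frames -> next frame
        preds.append(nxt)
        frames.append(nxt)                  # recursion: prediction becomes input
    return preds

# stand-in "model": Eulerian persistence, one of the paper's benchmarks
persistence = lambda stack: stack[-1]
obs = [np.full((4, 4), float(t)) for t in range(4)]          # four toy radar frames
nowcasts = recursive_nowcast(persistence, obs, n_steps=12)   # 12 x 5 min = 60 min
```

Note that any smoothing the one-step model applies is compounded at every recursion step, which is exactly the numerical-diffusion-like artifact discussed above.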
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible, as accurately as possible, while remaining cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis, in which a network is defined as a collection of nodes interconnected by links, to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field that has gained significant momentum over the last few years in areas such as brain networks, social networks, technological networks, and climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure, the weighted degree–betweenness (WDB) measure, to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
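The "decline rate of the network efficiency" used to evaluate node importance can be sketched in plain Python: global efficiency is the mean inverse shortest-path length over all node pairs, and a node is critical if its removal causes a large efficiency drop. The toy network below is illustrative only, not the study's evaluation protocol:

```python
def efficiency(adj):
    """Global network efficiency: mean inverse shortest-path length over
    all ordered node pairs (BFS on an unweighted graph)."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}
        frontier = [s]
        while frontier:                      # breadth-first search from s
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        for t in nodes:
            if t != s:
                pairs += 1
                if t in dist:                # unreachable pairs contribute 0
                    total += 1.0 / dist[t]
    return total / pairs

def removal_impact(adj, node):
    """Efficiency drop when `node` is removed: a large drop marks a critical node."""
    sub = {u: [v for v in vs if v != node] for u, vs in adj.items() if u != node}
    return efficiency(adj) - efficiency(sub)

# star-plus-chain toy network: the hub (node 0) should be most critical
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
impacts = {n: removal_impact(adj, n) for n in adj}
```

Removing a leaf can even raise average efficiency (the long paths through it disappear), which is why node-ranking measures such as WDB combine several structural quantities rather than relying on a single one.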
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
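Interpolating the sparse bias estimates in time, as described above, can be sketched with a centered moving average over the overpass-based estimates. This is a minimal illustration; the window length and the irregular sampling below are assumptions, not values from the study:

```python
import numpy as np

def moving_average_bias(times, biases, query_times, window_days=30.0):
    """Carry sparse SR-overpass calibration-bias estimates (dB) to days
    without an overpass via a centered moving average."""
    times, biases = np.asarray(times, float), np.asarray(biases, float)
    out = []
    for t in np.asarray(query_times, float):
        mask = np.abs(times - t) <= window_days / 2   # estimates inside the window
        out.append(biases[mask].mean() if mask.any() else np.nan)
    return np.array(out)

# bias estimates (dB) at irregular overpass days
days = [0, 7, 20, 33]
bias = [1.2, 1.0, 0.6, 0.8]
est = moving_average_bias(days, bias, query_times=[10, 26])
```

A moving average smooths over overpass-to-overpass scatter but, as noted above, it cannot anticipate step changes caused by undocumented radar maintenance.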
Introducing PebbleCounts
(2019)
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting or photo sieving, but these are achievable only at the 1–10 m² scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches for photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domain and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and filtering suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to twelve 1 m² orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, only at best 70 % of the grains are correct identifications, and typically around 50 %. PebbleCounts operates most effectively at the 1 m² patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 m² areas.
These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
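The core clustering idea, k-means on joint spatial and spectral pixel features, can be sketched in a few lines of numpy. This is a minimal stand-in for PebbleCounts' segmentation step; the spatial weight `w` and the deterministic center initialization are assumptions for illustration:

```python
import numpy as np

def kmeans_spatial_spectral(img, k, w=0.1, n_iter=20):
    """k-means on stacked (x, y, intensity) pixel features, so clusters are
    compact both spatially and spectrally (minimal sketch)."""
    h, wd = img.shape
    ys, xs = np.mgrid[0:h, 0:wd]
    X = np.column_stack([w * xs.ravel(), w * ys.ravel(),
                         img.ravel().astype(float)])
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]  # deterministic init
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels.reshape(h, wd)

# two "grains": a dark left half and a bright right half
img = np.zeros((6, 6))
img[:, 3:] = 255.0
labels = kmeans_spatial_spectral(img, k=2)
```

In PebbleCounts the clustering is followed by interactive selection of well-delineated grains, which is what keeps the final distributions accurate on complex imagery.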
The interactions between atmosphere and steep topography in the eastern south–central Andes result in complex relations with inhomogeneous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall and their spatial patterns (both at the valley and mountain-belt scales) are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients in the NW Argentine Andes stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF's (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients along and across strike of the Andes in dew-point temperature and CAPE that both control rainfall distributions. Second, we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE through a multivariable regression analysis. The correlation changes in space along the climatic and topographic gradients and helps to explain controlling factors for extreme-rainfall generation. Third, we observe a higher contribution (or higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas as compared to the high-elevation central Andean Plateau for 90th percentile rainfall. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation area between low and high elevation, especially in the transition zone between the tropical and subtropical areas, for the 90th percentile rainfall.
Fourth, we find that the parameters of the multivariable regression using CAPE and Td can explain rainfall with higher statistical significance for the 90th percentile compared to lower rainfall percentiles. Based on our results, the spatial pattern of rainfall-extreme events during the past ∼16 years can be described by a combination of dew-point temperature and CAPE in the south–central Andes.
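A multivariable regression of a rainfall percentile on CAPE and Td, of the general form used above, can be fitted per grid cell by ordinary least squares. The linear specification and the synthetic values below are illustrative assumptions, not the study's exact model:

```python
import numpy as np

def fit_rainfall_regression(cape, td, rain):
    """Least-squares fit of rain = b0 + b1*CAPE + b2*Td (one grid cell)."""
    A = np.column_stack([np.ones_like(cape), cape, td])
    coef, *_ = np.linalg.lstsq(A, rain, rcond=None)
    return coef  # [intercept, beta_CAPE, beta_Td]

rng = np.random.default_rng(1)
cape = rng.uniform(0, 3000, 200)            # J kg^-1
td = rng.uniform(5, 25, 200)                # degC
rain = 0.002 * cape + 0.5 * td + rng.normal(0, 0.1, 200)  # synthetic truth
coef = fit_rainfall_regression(cape, td, rain)
```

Comparing the fitted coefficients (and their significance) across grid cells is what reveals the spatially varying relative importance of Td and CAPE described above.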
The Arctic-Boreal regions experience strong changes in air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Although optical remote sensing offers a variety of techniques to assess and monitor landscape changes, persistent cloud cover considerably reduces the number of usable images. Combining data from multiple platforms, however, promises to increase the number of images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility of using Landsat and Sentinel-2 images together in time series analyses to achieve temporally dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel pairs of corresponding bands, downscaled to 60 m, and derived the ordinary least squares regression for every band combination. The acquired coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons already showed an overall good fit between Landsat-8 and Sentinel-2 images. The ordinary least squares regression analyses underline the generally good spectral fit, with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after bandpass adjustment of Sentinel-2 values to Landsat-8 shows a nearly perfect alignment between the same-day images. The spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to Landsat-8 very well in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be combined into temporally dense time series and applied to assess permafrost landscape changes in Eastern Siberia.
Remaining differences between the sensors can be attributed to several factors including heterogeneous terrain, poor cloud and cloud shadow masking, and mixed pixels.
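The per-band ordinary least squares bandpass adjustment works as follows: fit Landsat-8 reflectance as a linear function of Sentinel-2 reflectance over the same-day pixel pairs, then apply the fitted line to all Sentinel-2 values. A sketch on synthetic reflectances (the slope/intercept values below are made up, not the study's):

```python
import numpy as np

def bandpass_adjust(s2_band, l8_band):
    """Fit L8 = intercept + slope * S2 by OLS and return the adjusted
    Sentinel-2 values plus the fitted coefficients (one band)."""
    A = np.column_stack([np.ones_like(s2_band), s2_band])
    (intercept, slope), *_ = np.linalg.lstsq(A, l8_band, rcond=None)
    return intercept + slope * s2_band, intercept, slope

rng = np.random.default_rng(0)
s2 = rng.uniform(0.0, 0.4, 500)                    # S2 reflectance samples
l8 = 0.01 + 0.9 * s2 + rng.normal(0, 0.002, 500)   # simulated L8 counterpart
adj, b0, b1 = bandpass_adjust(s2, l8)
```

After adjustment the residual between the sensors is reduced to the noise level, which is what permits mixing both sensors in one dense time series.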
Hydrometeorological hazards caused losses of approximately 110 billion U.S. dollars worldwide in 2016. Current damage estimations do not consider uncertainties in a comprehensive way, and they are not consistent across spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated against, damage to companies from the 2013 flood in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales. It can therefore also be used for pre-event analysis and risk assessments. This method takes hydrometeorological damage estimation and risk assessments to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
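The spatial scalability of a probabilistic object-based estimate comes from sampling damage per object and summing samples, so uncertainty propagates consistently to any aggregate. A minimal Monte Carlo sketch; the lognormal distribution and the coefficient of variation are assumptions for illustration, not the study's damage model:

```python
import numpy as np

def probabilistic_damage(means, cv=0.5, n_samples=20000, seed=0):
    """Monte Carlo aggregation of object-level damage: draw each object's
    damage from a lognormal with the given mean and coefficient of
    variation, sum per sample, and report 5/50/95 percentiles."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, float)
    sigma2 = np.log(1 + cv ** 2)                  # lognormal shape from cv
    mu = np.log(means) - sigma2 / 2               # so that E[X] = mean
    draws = rng.lognormal(mu, np.sqrt(sigma2), size=(n_samples, len(means)))
    total = draws.sum(axis=1)                     # any spatial aggregate works here
    return np.percentile(total, [5, 50, 95])

q5, q50, q95 = probabilistic_damage([10_000, 25_000, 5_000])
```

Because the same per-object samples can be summed over a street, a city, or a country, the estimate and its uncertainty remain consistent across scales.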
The Sea of Marmara, in northwestern Turkey, is a transition zone where the dextral North Anatolian Fault zone (NAFZ) propagates westward from the Anatolian Plate to the Aegean Sea Plate. The area is of interest in the context of seismic hazard of Istanbul, a metropolitan area with about 15 million inhabitants. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the crustal heterogeneities is still missing. To assess if and how crustal heterogeneities are related to the NAFZ segmentation below the Sea of Marmara, we develop new crustal-scale 3-D density models which integrate geological and seismological data and which are additionally constrained by 3-D gravity modeling. For the latter, we use two different gravity datasets, including global satellite data and local marine gravity observations. Considering the two different datasets and the general non-uniqueness of potential field modeling, we suggest three possible "end-member" solutions that are all consistent with the observed gravity field and illustrate the spectrum of possible solutions. These models indicate that the observed gravitational anomalies originate from significant density heterogeneities within the crust. Two layers of sediments, one syn-kinematic and one pre-kinematic with respect to the formation of the Sea of Marmara, are underlain by a heterogeneous crystalline crust. A felsic upper crystalline crust (average density of 2720 kg m⁻³) and an intermediate to mafic lower crystalline crust (average density of 2890 kg m⁻³) appear to be cross-cut by two large, dome-shaped mafic high-density bodies (density of 2890 to 3150 kg m⁻³) of considerable thickness above a rather uniform lithospheric mantle (3300 kg m⁻³).
The spatial correlation between two major bends of the main Marmara fault and the location of the high-density bodies suggests that the distribution of lithological heterogeneities within the crust controls the rheological behavior along the NAFZ and, consequently, may influence fault segmentation and thus the seismic hazard in the region.
Sea surface temperature (SST) patterns can, as surface climate forcing, affect weather and climate at large distances. One example is the El Niño–Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically occur at multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between SST time series over a range of scales and allows deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of similarly evolving SSTs at a given timescale and distinguish them from regions with long-range teleconnections to other ocean regions. Our findings reconfirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
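The scale dependence of such correlations can be illustrated with a crude stand-in for the wavelet approach: smooth both series at several scales and correlate the smoothed versions. A shared slow oscillation buried in independent noise only emerges at the coarser scales. This moving-average decomposition is an illustrative simplification, not the wavelet multi-scale correlation itself:

```python
import numpy as np

def multiscale_correlation(x, y, scales=(1, 4, 16)):
    """Correlate two series after moving-average smoothing at several
    window lengths (a Haar-like stand-in for wavelet scales)."""
    out = {}
    for s in scales:
        k = np.ones(s) / s
        xs = np.convolve(x, k, mode="valid")
        ys = np.convolve(y, k, mode="valid")
        out[s] = np.corrcoef(xs, ys)[0, 1]
    return out

rng = np.random.default_rng(2)
t = np.arange(512)
common = np.sin(2 * np.pi * t / 64)        # shared slow oscillation
x = common + rng.normal(0, 1, t.size)      # two noisy "SST" series
y = common + rng.normal(0, 1, t.size)
corrs = multiscale_correlation(x, y)
```

In the climate-network construction, links are placed between cells whose scale-wise correlation exceeds a threshold, so the network topology itself becomes scale-dependent.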
Determining the optimal grid resolution for topographic analysis on an airborne lidar dataset
(2019)
Digital elevation models (DEMs) are a gridded representation of the surface of the Earth and typically contain uncertainties due to data collection and processing. Slope and aspect estimates on a DEM contain errors and uncertainties inherited from the representation of a continuous surface as a grid (referred to as truncation error; TE) and from any DEM uncertainty. We analyze in detail the impacts of TE and propagated elevation uncertainty (PEU) on slope and aspect.
Using synthetic data as a control, we define functions to quantify both TE and PEU for arbitrary grids. We then develop a quality metric which captures the combined impact of both TE and PEU on the calculation of topographic metrics. Our quality metric allows us to examine the spatial patterns of error and uncertainty in topographic metrics and to compare calculations on DEMs of different sizes and accuracies.
Using lidar data with a point density of ∼10 pts m⁻² covering Santa Cruz Island (SCI) in southern California, we are able to generate DEMs and uncertainty estimates at several grid resolutions. Slope (aspect) errors on the 1 m dataset are on average 0.3° (0.9°) from TE and 5.5° (14.5°) from PEU. We calculate an optimal DEM resolution of 4 m for our SCI lidar dataset that minimizes the error bounds on topographic metric calculations due to the combined influence of TE and PEU for both slope and aspect calculations over the entire island. Average slope (aspect) errors from the 4 m DEM are 0.25° (0.75°) from TE and 5° (12.5°) from PEU. While the smallest grid resolution possible from the high-density SCI lidar is not necessarily optimal for calculating topographic metrics, high point-density data are essential for measuring DEM uncertainty across a range of resolutions.
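The truncation-error part of this trade-off is easy to demonstrate: on a surface with a known analytic slope, the finite-difference slope estimate degrades as the grid spacing grows. A numpy sketch using central differences on a synthetic sinusoidal surface (the surface and spacings are illustrative, not the SCI data):

```python
import numpy as np

def slope_deg(dem, dx):
    """Slope angle (degrees) from central differences on a gridded DEM."""
    dzdy, dzdx = np.gradient(dem, dx)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

def max_slope_te(dx):
    """Max truncation error of the slope estimate on a sinusoidal surface
    with a known analytic slope (edges trimmed, where np.gradient falls
    back to one-sided differences)."""
    x = np.arange(0.0, 64.0, dx)
    X, _ = np.meshgrid(x, x)
    dem = 10.0 * np.sin(2 * np.pi * X / 32.0)
    true = np.degrees(np.arctan(np.abs(10.0 * 2 * np.pi / 32.0
                                       * np.cos(2 * np.pi * X / 32.0))))
    err = np.abs(slope_deg(dem, dx) - true)
    return err[2:-2, 2:-2].max()

coarse, fine = max_slope_te(4.0), max_slope_te(1.0)  # TE grows with spacing
```

TE alone would always favor the finest grid; it is the opposing growth of propagated elevation uncertainty at fine resolutions that produces an interior optimum such as the 4 m found for SCI.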
The ICDP "PaleoVan" drilling campaign at Lake Van, Turkey, provided a long (> 100 m) record of lacustrine subsurface sedimentary microbial cell abundance. After the ICDP campaign at Potrok Aike, Argentina, this is only the second time deep lacustrine cell counts have been documented. Two sites were cored and revealed a strikingly similar cell distribution despite differences in organic matter content and microbial activity. Although shifted towards higher values, cell counts from Lake Potrok Aike, Argentina, reveal very similar distribution patterns with depth. The lacustrine cell count data are significantly different from published marine records; the most probable cause is differences in sedimentary organic matter composition, with marine sediments containing a higher fraction of labile organic matter. Previous studies showed that microbial activity and abundance increase centimetres to metres around geologic interfaces. The finely laminated Lake Van sediment allowed us to study this phenomenon at the microscale. We sampled at the scale of individual laminae, and in some depth intervals we found large differences in microbial abundance between the different laminae. This small-scale heterogeneity is normally overlooked due to much larger sampling intervals that integrate over several centimetres. However, not all laminated intervals exhibit such large differences in microbial abundance, and some non-laminated horizons show large variability on the millimetre scale as well. The reasons for such contrasting observations remain elusive, but they indicate that the heterogeneity of microbial abundance in subsurface sediments has not been sufficiently taken into account. These findings have implications not just for microbiological studies but for geochemistry as well, as the large differences in microbial abundance clearly show that there are distinct microhabitats that deviate considerably from the surrounding layers.
The semiarid northeast of Brazil is one of the most densely populated dryland regions in the world and recurrently affected by severe droughts. Thus, reliable seasonal forecasts of streamflow and reservoir storage are of high value for water managers. Such forecasts can be generated by applying either hydrological models representing underlying processes or statistical relationships exploiting correlations among meteorological and hydrological variables. This work evaluates and compares the performances of seasonal reservoir storage forecasts derived by a process-based hydrological model and a statistical approach.
Driven by observations, both models achieve similar simulation accuracies. In a hindcast experiment, however, the accuracy of estimating regional reservoir storages was considerably lower using the process-based hydrological model, whereas the resolution and reliability of drought event predictions were similar for both approaches. Further investigations into the deficiencies of the process-based model revealed a significant influence of antecedent wetness conditions and a higher sensitivity of model prediction performance to rainfall forecast quality.
Within the scope of this study, the statistical model proved to be the more straightforward approach for predictions of reservoir level and drought events at regionally and monthly aggregated scales. However, for forecasts at finer scales of space and time or for the investigation of underlying processes, the costly initialisation and application of a process-based model can be worthwhile. Furthermore, the application of innovative data products, such as remote sensing data, and operational model correction methods, like data assimilation, may allow for an enhanced exploitation of the advanced capabilities of process-based hydrological models.
We measure valence-to-core x-ray emission spectra of compressed crystalline GeO₂ up to 56 GPa and of amorphous GeO₂ up to 100 GPa. In a novel approach, we extract the Ge coordination number and mean Ge-O distances from the emission energy and the intensity of the Kβ'' emission line. The spectra of high-pressure polymorphs are calculated using the Bethe-Salpeter equation. Trends observed in the experimental and calculated spectra are found to match only when utilizing an octahedral model. The results reveal persistent octahedral Ge coordination with increasing distortion, similar to the compaction mechanism in the sequence of octahedrally coordinated crystalline GeO₂ high-pressure polymorphs.
Extreme weather events are likely to occur more often under climate change and the resulting effects on ecosystems could lead to a further acceleration of climate change. But not all extreme weather events lead to extreme ecosystem response. Here, we focus on hazardous ecosystem behaviour and identify coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Given the risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981-2010 and then apply the method to the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the SPEI to describe the climate condition. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. In northern Germany, on the contrary, the site is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is identified in the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortage. These first model-based applications indicate the conceptual advantages of the proposed method by focusing on the identification of critical weather conditions for which we observe hazardous ecosystem behaviour in the analysed data set. Application of the method to empirical time series and to future climate would be important next steps to test the approach.
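The vulnerability criterion above reduces to comparing the expected SPEI conditional on hazardous versus non-hazardous ecosystem behaviour. A minimal sketch with toy annual values (the numbers are illustrative; negative SPEI means drier conditions):

```python
import numpy as np

def drought_vulnerability(spei, hazard):
    """Expected SPEI under hazardous vs. non-hazardous ecosystem behaviour.
    A lower expectation in the hazard case marks vulnerability to drought."""
    spei, hazard = np.asarray(spei, float), np.asarray(hazard, bool)
    e_h = spei[hazard].mean()     # E[SPEI | hazard]
    e_nh = spei[~hazard].mean()   # E[SPEI | no hazard]
    return e_h, e_nh, bool(e_h < e_nh)

# toy record: hazard years (negative net biome productivity) tend to be drier
spei   = [-1.2, 0.3, -0.8, 0.9, -1.5, 0.4]
hazard = [True, False, True, False, True, False]
e_h, e_nh, vulnerable = drought_vulnerability(spei, hazard)
```

The Spanish site above corresponds to the `vulnerable = True` case (hazard expectation 0.4 lower), while the northern German site corresponds to the opposite ordering.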
In the Arctic and high mountains, it is common to measure vertical changes of ice sheets and glaciers via digital elevation model (DEM) differencing. This requires the signal of change to outweigh the noise associated with the datasets. Excluding large landslides, land-level change on ice-free terrain is smaller in vertical magnitude and thus requires more accurate DEMs for differencing and identification of change. Previously, this has required meter- to submeter-scale data at small spatial scales. Following careful corrections, we are able to measure land-level changes in gravel-bed channels and steep hillslopes in the south-central Andes using the SRTM-C (collected in 2000) and the TanDEM-X (collected from 2010 to 2015) near-global 12–30 m DEMs. Long-standing errors in the SRTM-C are corrected using the TanDEM-X as a control surface: cosine-fit co-registration removes ∼1/10 pixel (∼3 m) shifts, fast Fourier transform (FFT) analysis and filtering remove SRTM-C short- and long-wavelength stripes, and blocked shifting removes remaining complex biases. The datasets are then differenced, and outlier pixels are identified as a potential signal for the case of gravel-bed channels and hillslopes. We are able to identify signals of incision and aggradation (with magnitudes down to ∼3 m in the best case) in two >100 km river reaches, with increased geomorphic activity downstream of knickpoints. Anthropogenic gravel excavation and piling is prominently measured, with magnitudes exceeding ±5 m (up to >10 m for large piles). These values correspond to conservative average rates of 0.2 to >0.5 m yr⁻¹ for vertical changes in gravel-bed rivers. For hillslopes, since we require stricter cutoffs for noise, we are only able to identify one major landslide in the study area, with a deposit volume of (16 ± 0.15) × 10⁶ m³. Additional signals of change can be garnered from TanDEM-X auxiliary layers; however, these are more difficult to quantify.
The methods presented can be extended to any region of the world with SRTM-C and TanDEM-X coverage where vertical land-level changes are of interest, with the caveat that remaining vertical uncertainties in primarily the SRTM-C limit detection in steep and complex topography.
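The stripe-removal step can be illustrated with a heavily simplified notch filter: estimate the periodic stripe from the FFT of the column-mean profile and subtract it from every row. Real SRTM-C destriping handles several unknown wavelengths and two look directions; here the stripe frequency is assumed known and the reference surface is flat:

```python
import numpy as np

def destripe(dem, stripe_freq_idx):
    """Remove a periodic stripe from a DEM: isolate one FFT coefficient of
    the column-mean profile and subtract the reconstructed stripe."""
    profile = dem.mean(axis=0)
    F = np.fft.rfft(profile - profile.mean())
    notch = np.zeros_like(F)
    notch[stripe_freq_idx] = F[stripe_freq_idx]   # keep only the stripe component
    stripe = np.fft.irfft(notch, n=profile.size)
    return dem - stripe[None, :]

n = 128
x = np.arange(n)
truth = np.full((n, n), 100.0)                    # flat reference surface
stripes = 2.0 * np.sin(2 * np.pi * 8 * x / n)     # 8-cycle stripe, +/- 2 m
clean = destripe(truth + stripes[None, :], stripe_freq_idx=8)
```

Only after shifts, stripes, and blocked biases are removed does the ∼3 m change signal in the difference map rise above the residual noise.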
We explore the potential of spaceborne radar (SR) observations from the Ku-band precipitation radars onboard the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites as a reference to quantify the ground radar (GR) reflectivity bias. To this end, the 3-D volume-matching algorithm proposed by Schwaller and Morris (2011) is implemented and applied to 5 years (2012–2016) of observations. We further extend the procedure by a framework to take into account the data quality of each ground radar bin. Through these methods, we are able to assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching GR–SR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction as the basis of a quality index. As a result, we can increase the consistency of SR and GR observations, and thus the precision of calibration bias estimates. The remaining scatter between GR and SR reflectivity as well as the variability of bias estimates between overpass events indicate, however, that other error sources are not yet fully addressed. Still, our study provides a framework to introduce any other quality variables that are considered relevant in a specific context. The code that implements our analysis is based on the wradlib open-source software library, and is, together with the data, publicly available to monitor radar calibration or to scrutinize long series of archived radar data back to December 1997, when TRMM became operational.
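The quality-weighted averaging at the heart of this procedure is a one-liner: the calibration bias is the weighted mean of GR−SR reflectivity differences over matched volumes, with low-quality matches (e.g. high beam blockage fraction) down-weighted. The numbers below are illustrative toy values:

```python
import numpy as np

def quality_weighted_bias(gr_dbz, sr_dbz, quality):
    """Ground-radar calibration bias (dB) as the quality-weighted mean of
    GR - SR differences over matched volumes; `quality` could be e.g.
    1 - beam blockage fraction."""
    gr, sr, q = (np.asarray(a, float) for a in (gr_dbz, sr_dbz, quality))
    return float(np.sum(q * (gr - sr)) / np.sum(q))

# matched volumes: the third one is heavily beam-blocked (low quality)
gr = [28.0, 30.0, 18.0]
sr = [30.0, 32.0, 31.0]
q  = [1.0, 1.0, 0.1]
bias = quality_weighted_bias(gr, sr, q)
```

The blocked volume would drag an unweighted mean toward a spuriously large bias; the weighting keeps the estimate near the true −2 dB offset of the unblocked volumes.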
Mapping Damage-Affected Areas after Natural Hazard Events Using Sentinel-1 Coherence Time Series
(2018)
The launch of the Sentinel-1A and 1B satellites now offers freely available and widely accessible synthetic aperture radar (SAR) data. Near-global coverage and a rapid repeat time (6–12 days) give Sentinel-1 data the potential to be widely used for monitoring the Earth's surface. Subtle land-cover and land-surface changes can affect the phase and amplitude of the C-band SAR signal, and thus the coherence between two images collected before and after such changes. Analysis of SAR coherence therefore serves as a rapidly deployable and powerful tool to track both seasonal changes and rapid surface disturbances following natural disasters. An advantage of using Sentinel-1 C-band radar data is the ability to easily construct coherence time series for a region of interest at low cost. In this paper, we propose a new method for detecting Potentially Affected Areas (PAAs) following a natural hazard event. Based on the coherence time series, the proposed method (1) determines the natural variability of coherence within each pixel in the region of interest, accounting for factors such as seasonality and the inherent noise of variable surfaces; and (2) compares pixel-by-pixel syn-event coherence to the temporal coherence distributions to determine where statistically significant coherence loss has occurred. The user can determine to what degree the syn-event coherence value (e.g., 1st or 5th percentile of the pre-event distribution) constitutes a PAA and can integrate pertinent regional data, such as population density, to rank and prioritise PAAs. We apply the method to two case studies, Sarpol-e, Iran, following the 2017 Iran–Iraq earthquake, and a landslide-prone region of NW Argentina, to demonstrate how rapid identification and interpretation of potentially affected areas can be performed shortly after a natural hazard event.
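Step (2) of the method reduces to a per-pixel percentile test: flag a pixel when its syn-event coherence falls below a low percentile of its own pre-event coherence history. A minimal numpy sketch with a deterministic toy stack (the 5th-percentile choice is one of the options named above; the values are illustrative):

```python
import numpy as np

def detect_paa(coh_stack, syn_event, percentile=5.0):
    """Flag pixels whose syn-event coherence lies below the given
    percentile of their pre-event coherence distribution (per pixel)."""
    thresh = np.percentile(coh_stack, percentile, axis=0)
    return syn_event < thresh

# 3x3 toy scene, 10 pre-event coherence maps ranging 0.60-0.78 per pixel
pre = np.stack([np.full((3, 3), 0.60 + 0.02 * t) for t in range(10)])
syn = np.full((3, 3), 0.70)
syn[1, 1] = 0.20                     # strong coherence loss at the center pixel
paa = detect_paa(pre, syn)
```

Using each pixel's own history as the reference is what absorbs seasonality and surface-dependent noise before the significance test.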
Observed recent and expected future increases in frequency and intensity of climatic extremes in central Europe may pose critical challenges for domestic tree species. Continuous dendrometer recordings provide a valuable source of information on tree stem radius variations, offering the possibility to study a tree's response to environmental influences at a high temporal resolution. In this study, we analyze stem radius variations (SRV) of three domestic tree species (beech, oak, and pine) from 2012 to 2014. We use the novel statistical approach of event coincidence analysis (ECA) to investigate the simultaneous occurrence of extreme daily weather conditions and extreme SRVs, where extremes are defined with respect to the common values at a given phase of the annual growth period. Besides defining extreme events based on individual meteorological variables, we additionally introduce conditional and joint ECA as new multivariate extensions of the original methodology and apply them to test 105 different combinations of variables regarding their impact on SRV extremes. Our results reveal a strong susceptibility of all three species to the extremes of several meteorological variables. Yet, the inter-species differences regarding their response to the meteorological extremes are comparatively low. The obtained results provide a thorough extension of previous correlation-based studies by focusing specifically on the timing of climatic extremes. We suggest that the employed methodological approach should be further promoted in forest research for investigating tree responses to changing environmental conditions.
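A minimal sketch of the basic ECA idea, the precursor coincidence rate, is given below. This is a simplified illustration (event days as integers, a one-sided window), not the study's implementation; the conditional and joint variants extend this by requiring events in additional series.

```python
import numpy as np

def coincidence_rate(event_days_a, event_days_b, delta_t=1):
    """Precursor coincidence rate: the fraction of events in series A
    (e.g. SRV extremes) that are preceded, within delta_t days and
    inclusive of day 0, by an event in series B (e.g. a heat extreme)."""
    a = np.asarray(event_days_a)
    b = np.asarray(event_days_b)
    hits = sum(np.any((day - b >= 0) & (day - b <= delta_t)) for day in a)
    return hits / len(a)

# SRV extremes on days 10, 20, 30; meteorological extremes on days 9 and 28.
r = coincidence_rate([10, 20, 30], [9, 28], delta_t=2)
```

The significance of an observed rate is then typically judged against the rate expected for independent Poisson processes, which is how ECA separates genuine coupling from chance coincidences.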
Classification of clouds, cirrus, snow, shadows and clear sky areas is a crucial step in the pre-processing of optical remote sensing images and is a valuable input for their atmospheric correction. The Multi-Spectral Imager on board the Sentinel-2 satellites of the Copernicus program offers optimized bands for this task and delivers unprecedented amounts of data regarding spatial sampling, global coverage, spectral coverage, and repetition rate. Efficient algorithms are needed to process, and possibly reprocess, these large data volumes. Techniques based on top-of-atmosphere reflectance spectra for single pixels, without exploitation of external data or spatial context, offer the largest potential for parallel data processing and highly optimized processing throughput. Such algorithms can be seen as a baseline for possible trade-offs in processing performance when the application of more sophisticated methods is discussed. We present several ready-to-use classification algorithms which are all based on a publicly available database of manually classified Sentinel-2A images. These algorithms are based on commonly used and newly developed machine learning techniques which drastically reduce the amount of time needed to update the algorithms when new images are added to the database. Several ready-to-use decision trees are presented which allow about 91% of the spectra within a validation dataset to be labelled correctly. While decision trees are simple to implement and easy to understand, they offer only limited classification skill. Classification skill improves to 98% when the presented algorithm based on the classical Bayesian method is applied. This method has only recently been used for this task and shows excellent performance concerning classification skill and processing performance. A comparison of the presented algorithms with other commonly used techniques such as random forests, stochastic gradient descent, or support vector machines is also given. Random forests and support vector machines, in particular, show classification skill similar to that of the classical Bayesian method.
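The flavour of a per-pixel Bayesian spectral classifier can be illustrated with a tiny Gaussian naive Bayes. This is a generic stand-in, not the study's classical Bayesian algorithm (which is histogram/database-based); class names, band values, and the Gaussian assumption are all illustrative.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal per-pixel Bayesian classifier for TOA reflectance spectra:
    class-conditional densities are modelled as independent Gaussians per
    band, and each spectrum gets the class with maximum posterior."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(x|c) summed over independent bands, plus log prior p(c)
        ll = -0.5 * (((X[:, None, :] - self.mu_) ** 2) / self.var_
                     + np.log(2 * np.pi * self.var_)).sum(axis=2)
        return self.classes_[np.argmax(ll + self.logprior_, axis=1)]

# Toy two-band spectra: 'cloud' is bright in both bands, 'clear' is dark.
X = np.array([[0.80, 0.70], [0.75, 0.72], [0.10, 0.12], [0.12, 0.10]])
y = np.array(["cloud", "cloud", "clear", "clear"])
pred = GaussianNaiveBayes().fit(X, y).predict(np.array([[0.78, 0.71], [0.11, 0.11]]))
```

Because prediction is a pure per-spectrum computation with no spatial context, it parallelises trivially, which is exactly the throughput argument made in the abstract.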
The hydrological budget of a region is determined by the horizontal and vertical water fluxes acting in both inward and outward directions. These integrated water fluxes vary, altering the total water storage and consequently the gravitational force of the region. The time-dependent gravitational field can be observed through the Gravity Recovery and Climate Experiment (GRACE) gravimetric satellite mission, provided that the mass variation is above the sensitivity of GRACE. This study evaluates mass changes in prominent reservoir regions through three independent approaches, namely fluxes, storages, and gravity, by combining remote sensing products, in-situ data, and outputs from the WaterGAP Global Hydrological Model (WGHM) and the Global Land Data Assimilation System (GLDAS). The results show that the dynamics revealed by the GRACE signal can be better explored by a hybrid method, which combines remote sensing-based reservoir volume estimates with hydrological model outputs, than by exclusively model-based storage estimates. For the given arid/semi-arid regions, GLDAS-based storage estimations perform better than WGHM.
The advantages of remote sensing using Unmanned Aerial Vehicles (UAVs) are a high spatial resolution of images, temporal flexibility, and narrow-band spectral data from different wavelength domains. This enables the detection of spatio-temporal dynamics of environmental variables, such as plant-related carbon dynamics in agricultural landscapes. In this paper, we quantify spatial patterns of fresh phytomass and related carbon (C) export using imagery captured by a 12-band multispectral camera mounted on the fixed-wing UAV Carolo P360. The study was performed in 2014 at the experimental area CarboZALF-D in NE Germany. From radiometrically corrected and calibrated images of lucerne (Medicago sativa), the performance of four commonly used vegetation indices (VIs) was tested using band combinations of six near-infrared bands. The highest correlation between ground-based measurements of fresh phytomass of lucerne and VIs was obtained for the Enhanced Vegetation Index (EVI) using near-infrared band b(899). The resulting map was transformed into dry phytomass and finally upscaled to total C export by harvest. The observed spatial variability at field- and plot-scale could be partly attributed to small-scale soil heterogeneity.
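The EVI used above follows the standard MODIS-style formulation. The sketch below assumes that formulation and generic band names; the study's exact band pairing (e.g. which red and blue bands accompany b(899)) is not specified here.

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index, standard formulation:
    EVI = 2.5 * (NIR - Red) / (NIR + 6*Red - 7.5*Blue + 1).
    Inputs are surface reflectances in [0, 1]; `nir` could be, for
    instance, the reflectance in a near-infrared band such as b(899)."""
    nir, red, blue = (np.asarray(b, dtype=float) for b in (nir, red, blue))
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

v = evi(0.5, 0.1, 0.05)  # dense-canopy-like reflectances
```

The blue-band term corrects for aerosol influence and the constant in the denominator reduces soil-background sensitivity, which is why EVI often outperforms simple NDVI over dense canopies such as lucerne.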
Subsurface microbial communities undertake many terminal electron-accepting processes, often simultaneously. Using a tritium-based assay, we measured the potential hydrogen oxidation catalyzed by hydrogenase enzymes in several subsurface sedimentary environments (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico) with different predominant electron-acceptors. Hydrogenases constitute a diverse family of enzymes expressed by microorganisms that utilize molecular hydrogen as a metabolic substrate, product, or intermediate. The assay reveals the potential for utilizing molecular hydrogen and allows qualitative detection of microbial activity irrespective of the predominant electron-accepting process. Because the method only requires samples frozen immediately after recovery, the assay can be used for identifying microbial activity in subsurface ecosystems without the need to preserve live material. We measured potential hydrogen oxidation rates in all samples from multiple depths at several sites that collectively span a wide range of environmental conditions and biogeochemical zones. Potential activity normalized to total cell abundance ranges over five orders of magnitude and varies depending on the predominant terminal electron acceptor. Lowest per-cell potential rates characterize the zone of nitrate reduction and highest per-cell potential rates occur in the methanogenic zone. Possible reasons for this relationship to predominant electron acceptor include (i) increasing importance of fermentation in successively deeper biogeochemical zones and (ii) adaptation of hydrogenases to successively higher concentrations of H₂ in successively deeper zones.
Decreasing groundwater levels in many parts of Germany and decreasing low flows in Central Europe have created a need for adaptation measures to stabilize the water balance and to increase low flows. The objective of our study was to estimate the impact of ditch water level management on stream-aquifer interactions in small lowland catchments of the mid-latitudes. The water balance of a ditch-irrigated area and fluxes between the subsurface and the adjacent stream were modeled for three runoff recession periods using the Hydrus-2D software package. The results showed that the subsurface flow to the stream was closely related to the difference between the water level in the ditch system and the stream. Evapotranspiration during the growing season additionally reduced base flow. It was crucial to stop irrigation during a recession period to decrease water withdrawal from the stream and enhance the base flow by draining the irrigated area. Mean fluxes to the stream were between 0.04 and 0.64 L s⁻¹ for the first 20 days of the low-flow periods. This only slightly increased the flow in the stream, whose mean was 57 L s⁻¹ during the period with the lowest flows. Larger areas would be necessary to effectively increase flows in mesoscale catchments.
Effects of data and model simplification on the results of a wetland water resource management model
(2016)
This paper presents the development of a wetland water balance model for use in a large river basin with many different wetlands. The basic model was primarily developed for a single wetland with a complex water management system involving large amounts of specialized input data and water management details. The aim was to simplify the model structure and to use only commonly available data as input for the model, with the least possible loss of accuracy. Results from different variants of the model and data adaptation were tested against results from a detailed model. This shows that using commonly available data and unifying and simplifying the input data is tolerable up to a certain level. The simplification of the model has greater effects on the evaluated water balance components than the data adaptation. Because this simplification was necessary for large-scale use, we suggest that, for reasons of comparability, simpler models should always be applied with uniform databases for large regions, though these should only be moderately simplified. Further, we recommend using these simplified models only for large-scale comparisons and using more specific, detailed models for investigations on smaller scales.
Lake Towuti is a tectonic basin, surrounded by ultramafic rocks. Lateritic soils form through weathering and deliver abundant iron (oxy)hydroxides but very little sulfate to the lake and its sediment. To characterize the sediment biogeochemistry, we collected cores at three sites with increasing water depth and decreasing bottom water oxygen concentrations. Microbial cell densities were highest at the shallow site, a feature we attribute to the availability of labile organic matter (OM) and the higher abundance of electron acceptors due to oxic bottom water conditions. At the two other sites, OM degradation and reduction processes below the oxycline led to partial electron acceptor depletion. Genetic information preserved in the sediment as extracellular DNA (eDNA) pointed to aerobic and anaerobic heterotrophs related to Nitrospirae, Chloroflexi, and Thermoplasmatales. These taxa apparently played a significant role in the degradation of sinking OM. However, eDNA concentrations rapidly decreased with core depth. Despite very low sulfate concentrations, sulfate-reducing bacteria were present and viable in sediments at all three sites, as confirmed by measurement of potential sulfate reduction rates. Microbial community fingerprinting supported the presence of taxa related to Deltaproteobacteria and Firmicutes with demonstrated capacity for iron and sulfate reduction. Concomitantly, sequences of Ruminococcaceae, Clostridiales, and Methanomicrobiales indicated potential for fermentative hydrogen and methane production. These first insights into ferruginous sediments showed that microbial populations perform successive metabolisms related to sulfur, iron, and methane. In theory, iron reduction could reoxidize reduced sulfur compounds and desorb OM from iron minerals to allow remineralization to methane.
Overall, we found that biogeochemical processes in the sediments can be linked to redox differences in the bottom waters of the three sites, such as oxidant concentrations and the supply of labile OM. At the scale of the lacustrine record, our geomicrobiological study should provide a means to link the extant subsurface biosphere to past environments.
The aim of this study is to investigate the shallow thermal field differences for two differently aged passive continental margins by analyzing regional variations in geothermal gradient and exploring the controlling factors for these variations. Hence, we analyzed two previously published 3-D conductive and lithospheric-scale thermal models of the Southwest African and the Norwegian passive margins. These 3-D models differentiate various sedimentary, crustal, and mantle units and integrate different geophysical data such as seismic observations and the gravity field. We extracted the temperature–depth distributions in 1 km intervals down to 6 km below the upper thermal boundary condition. The geothermal gradient was then calculated for these intervals between the upper thermal boundary condition and the respective depth levels (1, 2, 3, 4, 5, and 6 km below the upper thermal boundary condition). According to our results, the geothermal gradient decreases with increasing depth and shows varying lateral trends and values for these two different margins. We compare the 3-D geological structural models and the geothermal gradient variations for both thermal models and show how radiogenic heat production, sediment insulating effect, and thermal lithosphere–asthenosphere boundary (LAB) depth influence the shallow thermal field pattern. The results indicate an ongoing process of oceanic mantle cooling at the young Norwegian margin compared with the old SW African passive margin, which appears to be thermally equilibrated at present.
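The interval gradient computation described above is simple enough to state directly. The function and example values below are illustrative; the study extracts temperatures from the 3-D thermal models rather than from scalar inputs.

```python
def geothermal_gradient(t_upper_bc, t_at_depth, depth_km):
    """Interval geothermal gradient (K/km) between the upper thermal
    boundary condition and a given depth level below it, as computed
    for the 1-6 km levels in the study."""
    return (t_at_depth - t_upper_bc) / depth_km

# e.g. 5 degC at the upper boundary, 95 degC at 3 km below it -> 30 K/km
g = geothermal_gradient(5.0, 95.0, 3.0)
```

Computing the gradient per interval rather than as a local derivative is what makes the decrease with depth (thinner intervals dominated by insulating sediments yield higher gradients) directly visible in the maps.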
In general, a moderate drying trend has been observed in mid-latitude arid Central Asia since the Mid-Holocene, attributed to the progressively weakening influence of the mid-latitude Westerlies on regional climate. However, as the spatio-temporal pattern of this development and the underlying climatic mechanisms are not yet fully understood, new high-resolution paleoclimate records from this region are needed. Within this study, a sediment core from Lake Son Kol (Central Kyrgyzstan) was investigated using sedimentological, (bio)geochemical, isotopic, and palynological analyses, aiming at reconstructing regional climate development during the last 6000 years. Biogeochemical data, mainly reflecting summer moisture conditions, indicate predominantly wet conditions until 4950 cal. yr BP, succeeded by a pronounced dry interval between 4950 and 3900 cal. yr BP. Subsequently, a return to wet conditions and a moderate drying trend towards the present are observed. This is consistent with other regional paleoclimate records and likely reflects the gradual Late Holocene diminishment of the amount of summer moisture provided by the mid-latitude Westerlies. However, the climate impact of the Westerlies was apparently not restricted to the summer season but was also significant during winter, as indicated by recurrent episodes of enhanced allochthonous input through snowmelt, occurring before 6000 cal. yr BP and at 5100-4350, 3450-2850, and 1900-1500 cal. yr BP. The distinct ∼1500-year periodicity of these episodes of increased winter precipitation in Central Kyrgyzstan resembles cyclicities observed in paleoclimate records around the North Atlantic, likely indicating a hemispheric-scale climatic teleconnection and an impact of North Atlantic Oscillation (NAO) variability in Central Asia.
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
(2017)
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping are laborious and time-consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches at three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79–0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of our selected thresholds and the transferability of the method.
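The normalised-difference indices feeding the OBIA rule set can be sketched as below. The NDWI band combination shown (green vs. NIR, McFeeters-style) is an assumption; the study may use a different pairing, and the example reflectances are invented.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalised difference water index, here in the McFeeters form
    (Green - NIR) / (Green + NIR); other band combinations exist."""
    green, nir = np.asarray(green, dtype=float), np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

# Two pixels: bright snow (low NDVI) vs. vegetated surface (high NDVI).
v = ndvi(np.array([0.9, 0.2]), np.array([0.85, 0.05]))
```

Because such ratios are normalised, per-scene illumination differences largely cancel out, which is what allows the trained thresholds to transfer across large areas, as the abstract notes.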
High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days decade⁻¹ over the 29-year study period (5-25 days in total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA.
Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes than trends in snowmelt end. (2) Areas with long snowmelt periods, such as the Tibetan Plateau, show the strongest compression of the snowmelt season (negative trends). These trends are apparent regardless of the time period over which the regression is performed. (3) While trends averaged over 3 decades indicate generally earlier snowmelt seasons, data from the last 14 years (2002-2016) exhibit positive trends in many regions, such as parts of the Pamir and Kunlun Shan. Due to the short nature of the time series, it is not clear whether this change is a reversal of a long-term trend or simply interannual variability. (4) Some regions with stable or growing glaciers - such as the Karakoram and Kunlun Shan - see slightly later snowmelt seasons and longer snowmelt periods. It is likely that changes in the snowmelt regime of HMA account for some of the observed heterogeneity in glacier response to climate change. While the decadal increases in regional temperature have in general led to earlier and shortened melt seasons, changes in HMA's cryosphere have been spatially and temporally heterogeneous.
High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
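The parametric alternative to plotting-position quantiles can be sketched with a minimal L-moment fit of the GPD in Hosking's parameterisation. This is an illustrative implementation under that convention (location fixed at 0, shape k ≠ 0); the study's exact estimator and threshold handling may differ.

```python
import numpy as np

def gpd_quantile_lmom(sample, prob):
    """Quantile of a generalized Pareto distribution (location 0) fitted
    to `sample` by L-moments, Hosking's convention:
    F(x) = 1 - (1 - k*x/alpha)**(1/k), with k = l1/l2 - 2 and
    alpha = l1*(1 + k). Valid for k != 0."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n  # first probability-weighted moment
    l1, l2 = b0, 2.0 * b1 - b0                     # sample L-moments
    k = l1 / l2 - 2.0                              # GPD shape
    alpha = l1 * (1.0 + k)                         # GPD scale
    return (alpha / k) * (1.0 - (1.0 - prob) ** k)

# Uniform data on [0, 1] correspond to a GPD with k = 1, alpha = 1,
# so fitted quantiles should land close to prob itself.
q50 = gpd_quantile_lmom(np.linspace(0.0, 1.0, 101), 0.5)
q90 = gpd_quantile_lmom(np.linspace(0.0, 1.0, 101), 0.9)
```

Unlike order-statistic estimates, whose largest representable return period is capped by the sample size, this parametric quantile extrapolates beyond the observed maxima, which is why the small-sample underestimation at high temperatures weakens or disappears.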
The characteristics of a landscape pose essential factors for hydrological processes. Therefore, an adequate representation of the landscape of a catchment in hydrological models is vital. However, many such models exist, differing, among other things, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model specific, commercial, or depend on commercial back-end software, and allow only limited or no workflow automation.
Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed to large-scale application via a hierarchical multi-scale approach. The package addresses existing limitations as it is free and open source, easily extendible to other hydrological models, and the workflow can be fully automated. Moreover, it is user-friendly as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation.
In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, parameters determining the sizes of subbasins and hillslopes proved to be more important than the others, including the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.
In 2009, a group of prominent Earth scientists introduced the "planetary boundaries" (PB) framework: they suggested nine global control variables, and defined corresponding "thresholds which, if crossed, could generate unacceptable environmental change". The concept builds on systems theory, and views Earth as a complex adaptive system in which anthropogenic disturbances may trigger non-linear, abrupt, and irreversible changes at the global scale, and "push the Earth system outside the stable environmental state of the Holocene". While the idea has been remarkably successful in both science and policy circles, it has also raised fundamental concerns, as the majority of suggested processes and their corresponding planetary boundaries do not operate at the global scale, and thus apparently lack the potential to trigger abrupt planetary changes.
This paper picks up the debate with specific regard to the planetary boundary on "global freshwater use". While the bio-physical impacts of excessive water consumption are typically confined to the river basin scale, the PB proponents argue that water-induced environmental disasters could build up to planetary-scale feedbacks and system failures. So far, however, no evidence has been presented to corroborate that hypothesis. Furthermore, no coherent approach has been presented to what extent a planetary threshold value could reflect the risk of regional environmental disaster. To be sure, the PB framework was revised in 2015, extending the planetary freshwater boundary with a set of basin-level boundaries inferred from environmental water flow assumptions. Yet, no new evidence was presented, either with respect to the ability of those basin-level boundaries to reflect the risk of regional regime shifts or with respect to a potential mechanism linking river basins to the planetary scale.
So while the idea of a planetary boundary on freshwater use appears intriguing, the line of arguments presented so far remains speculative and implicatory. As long as Earth system science does not present compelling evidence, the exercise of assigning actual numbers to such a boundary is arbitrary, premature, and misleading. Taken as a basis for water-related policy and management decisions, though, the idea transforms from misleading to dangerous, as it implies that we can globally offset water-related environmental impacts. A planetary boundary on freshwater use should thus be disapproved and actively refuted by the hydrological and water resources community.
Water infiltration in soil is not only affected by the inherent heterogeneities of soil, but even more by the interaction with plant roots and their water uptake. Neutron tomography is a unique non-invasive 3D tool to visualize plant root systems together with the soil water distribution in situ. So far, acquisition times in the range of hours have been the major limitation for imaging 3D water dynamics. By implementing an alternative acquisition procedure, we boosted the acquisition speed, capturing an entire tomogram within 10 s. This allows, for the first time, tracking of a water front ascending in a rooted soil column upon infiltration of deuterated water, time-resolved in 3D. Image quality and resolution could be sustained at a level that allowed the root system to be captured in high detail. Good signal-to-noise ratio and contrast were the key to visualize dynamic changes in water content and to localize the root uptake. We demonstrated the ability of ultra-fast tomography to quantitatively image quick changes of water content in the rhizosphere and outlined the value of such imaging data for 3D water uptake modelling. The presented method paves the way for time-resolved studies of various 3D flow and transport phenomena in porous systems.
EnGeoMAP 2.0
(2017)
Algorithms for a rapid analysis of hyperspectral data are becoming more and more important with planned next generation spaceborne hyperspectral missions such as the Environmental Mapping and Analysis Program (EnMAP) and the Japanese Hyperspectral Imager Suite (HISUI), together with an ever growing pool of hyperspectral airborne data. The EnGeoMAP 2.0 algorithm presented here is an automated system for material characterization from imaging spectroscopy data, which builds on the theoretical framework of the Tetracorder and MICA (Material Identification and Characterization Algorithm) of the United States Geological Survey and of EnGeoMAP 1.0 from 2013. EnGeoMAP 2.0 includes automated absorption feature extraction, spatio-spectral gradient calculation, and mineral anomaly detection. The use of EnGeoMAP 2.0 is demonstrated for the mineral deposit sites of Rodalquilar (SE Spain) and Haib River (S Namibia) using HyMAP and simulated EnMAP data. Results from Hyperion data are presented as supplementary information.
Model-Based attribution of high-resolution streamflow trends in two alpine basins of Western Austria
(2017)
Several trend studies have shown that hydrological conditions are changing considerably in the Alpine region. However, the reasons for these changes are only partially understood and trend analyses alone are not able to shed much light. Hydrological modelling is one possible way to identify the trend drivers, i.e., to attribute the detected streamflow trends, given that the model captures all important processes causing the trends. We modelled the hydrological conditions for two alpine catchments in western Austria (a large, mostly lower-altitude catchment with wide valley plains and a nested high-altitude, glaciated headwater catchment) with the distributed, physically-oriented WaSiM-ETH model, which includes a dynamical glacier module. The model was calibrated in a transient mode, i.e., not only on several standard goodness measures and glacier extents, but also in such a way that the simulated streamflow trends fit with the observed ones during the investigation period 1980 to 2007. With this approach, it was possible to separate streamflow components, identify the trends of flow components, and study their relation to trends in atmospheric variables. In addition to trends in annual averages, highly resolved trends for each Julian day were derived, since they proved powerful in an earlier, data-based attribution study. We were able to show that annual and highly resolved trends can be modelled sufficiently well. The results provide a holistic, year-round picture of the drivers of alpine streamflow changes: Higher-altitude catchments are strongly affected by earlier firn melt and snowmelt in spring and increased ice melt throughout the ablation season. Changes in lower-altitude areas are mostly caused by earlier and lower snowmelt volumes. All highly resolved trends in streamflow and its components show an explicit similarity to the local temperature trends. 
Finally, results indicate that evapotranspiration has been increasing in the lower altitudes during the study period.
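The "highly resolved" trends, one per Julian day, amount to fitting a regression across years for each calendar day. The sketch below illustrates that idea with ordinary least squares and synthetic data; the study's trend estimator and the WaSiM-ETH outputs themselves are not reproduced here.

```python
import numpy as np

def julian_day_trends(q, years):
    """Least-squares streamflow trend (slope per year) for each Julian
    day; q holds one daily value per year, shaped (n_years, 365)."""
    q = np.asarray(q, dtype=float)
    years = np.asarray(years, dtype=float)
    return np.array([np.polyfit(years, q[:, d], 1)[0] for d in range(q.shape[1])])

# Synthetic 1980-2007 record with a uniform rise of 0.1 units/yr every day.
years = np.arange(1980, 2008)
q = 5.0 + 0.1 * np.outer(years - 1980, np.ones(365))
trends = julian_day_trends(q, years)
```

Plotting such a trend value against the day of year is what reveals seasonally localised signals, e.g. earlier snowmelt showing up as positive spring trends paired with negative early-summer trends, that annual-average trends smear out.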
Meteorological extreme events have great potential for damaging railway infrastructure and posing risks to the safety of train passengers. In the future, climate change will presumably have serious implications for meteorological hazards in the Alpine region. Hence, attaining insights into future frequencies of meteorological extremes with relevance for railway operation in Austria is required in the context of a comprehensive and sustainable natural hazard management plan of the railway operator. In this study, possible impacts of climate change on the frequencies of so-called critical meteorological conditions (CMCs) between the periods 1961-1990 and 2011-2040 are analyzed. Thresholds for such CMCs have been defined by the railway operator and used in its weather monitoring and early warning system. First, the seasonal climate change signals for air temperature and precipitation in Austria are described on the basis of an ensemble of high-resolution Regional Climate Model (RCM) simulations for Europe. Subsequently, the RCM ensemble was used to investigate changes in the frequency of CMCs. Finally, the sensitivity of results is analyzed with varying threshold values for the CMCs. Results give robust indications for an all-season air temperature rise, but show no clear tendency in average precipitation. The frequency analyses reveal an increase in intense rainfall events and heat waves, whereas heavy snowfall and cold days are likely to decrease. Furthermore, results indicate that frequencies of CMCs are rather sensitive to changes of thresholds. This emphasizes the importance of carefully defining, validating, and, if needed, adapting the thresholds that are used in the weather monitoring and warning system of the railway operator. For this, continuous and standardized documentation of damaging events and near-misses is a prerequisite.
Human development has far-reaching impacts on the surface of the globe. The transformation of natural land cover occurs in different forms, and urban growth is one of the most prominent transformative processes. We analyze global land cover data and extract cities as defined by maximally connected urban clusters. The analysis of the city size distribution for all cities on the globe confirms Zipf’s law. Moreover, by investigating the percolation properties of the clustering of urban areas we assess the closeness to criticality for various countries. At the critical thresholds, the urban land cover of the countries undergoes a transition from separated clusters to a giant component on the country scale. We study the Zipf exponents as a function of the closeness to percolation and find a systematic dependence, which could be the reason for the deviating exponents reported in the literature. Moreover, we investigate the average size of the clusters as a function of the proximity to percolation and find country-specific behavior. By relating the standard deviation and the average of cluster sizes—analogous to Taylor’s law—we suggest an alternative way to identify the percolation transition. We calculate spatial correlations of the urban land cover and find long-range correlations. Finally, by relating the areas of cities with population figures we address the global aspect of the allometry of cities, finding an exponent δ ≈ 0.85, i.e., large cities have lower densities.
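The rank-size regression behind Zipf's law can be sketched in a few lines; this is an illustrative reconstruction on a synthetic power-law sample, not the study's analysis pipeline.

```python
import numpy as np

def zipf_exponent(sizes):
    """Estimate the Zipf (rank-size) exponent by least squares in log-log space."""
    sizes = np.sort(np.asarray(sizes, dtype=float))[::-1]   # descending city sizes
    ranks = np.arange(1, len(sizes) + 1)
    # Zipf's law: size ~ rank**(-alpha)  =>  log(size) = c - alpha * log(rank)
    slope, _ = np.polyfit(np.log(ranks), np.log(sizes), 1)
    return -slope

# synthetic cluster sizes drawn from a power law with tail exponent 1
rng = np.random.default_rng(0)
sizes = 1.0 / (1.0 - rng.random(10_000))    # Pareto(1)-distributed sample
alpha = zipf_exponent(sizes)
print(round(alpha, 2))                      # close to 1 for this sample
```

With real city clusters, the fit is typically restricted to the large-size tail; the full-range fit here works only because the synthetic sample is a pure power law.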
Pluvial floods have caused severe damage to urban areas in recent years. With a projected increase in extreme precipitation as well as an ongoing urbanization, pluvial flood damage is expected to increase in the future. Therefore, further insights, especially on the adverse consequences of pluvial floods and their mitigation, are needed. To gain more knowledge, empirical damage data from three different pluvial flood events in Germany were collected through computer-aided telephone interviews. Pluvial flood awareness as well as flood experience were found to be low before the respective flood events. The level of private precaution increased considerably after all events, but is mainly focused on measures that are easy to implement. Lower inundation depths, smaller potential losses as compared with fluvial floods, as well as the fact that pluvial flooding may occur everywhere, are expected to cause a shift in damage mitigation from precaution to emergency response. However, an effective implementation of emergency measures was constrained by a low dissemination of early warnings in the study areas. Further improvements of early warning systems including dissemination as well as a rise in pluvial flood preparedness are important to reduce future pluvial flood damage.
In light of possible future restrictions on the use of fossil fuels, due to climate change obligations and the continuous depletion of global fossil fuel reserves, the search for alternative renewable energy sources is expected to be an issue of great concern for policy stakeholders. This study assessed the feasibility of bioenergy production under relatively low-intensity, conservative eco-agricultural settings (as opposed to production under high-intensity, fossil-fuel-based industrialized agriculture). Estimates of the net energy gain (NEG) and the energy return on energy invested (EROEI), obtained from a life cycle inventory of the energy inputs and outputs involved, reveal that the energy efficiency of bioenergy produced in low-intensity eco-agricultural systems could be as much as 448.5–488.3 GJ·ha−1 of NEG with an EROEI of 5.4–5.9 for maize ethanol production systems, and as much as 155.0–283.9 GJ·ha−1 of NEG with an EROEI of 14.7–22.4 for maize biogas production systems. This is substantially higher than for industrialized agriculture, with a NEG of 2.8–52.5 GJ·ha−1 and an EROEI of 1.2–1.7 for maize ethanol production systems, and a NEG of 59.3–188.7 GJ·ha−1 and an EROEI of 2.2–10.2 for maize biogas production systems. Bioenergy produced in low-intensity eco-agricultural systems could therefore be an important source of energy with immense net benefits for local and regional end-users, provided a more efficient use of the co-products is ensured.
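With the usual life-cycle definitions (NEG = energy output minus energy input, EROEI = output divided by input), each reported NEG/EROEI pair implies the invested energy via input = NEG / (EROEI − 1). A minimal arithmetic sketch using one of the reported value pairs; the derived input and output figures are implied by the definitions, not taken from the study:

```python
def neg(output_gj, input_gj):
    # Net energy gain in GJ ha^-1: output minus invested energy
    return output_gj - input_gj

def eroei(output_gj, input_gj):
    # Energy return on energy invested (dimensionless)
    return output_gj / input_gj

def implied_input(neg_value, eroei_value):
    # From NEG = out - in and EROEI = out / in:  in = NEG / (EROEI - 1)
    return neg_value / (eroei_value - 1.0)

# upper bound of the reported maize-ethanol range: NEG 488.3 GJ ha^-1, EROEI 5.9
inp = implied_input(488.3, 5.9)          # ~99.7 GJ ha^-1 invested
out = inp + 488.3
print(round(inp, 1), round(eroei(out, inp), 1), round(neg(out, inp), 1))
```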
In this study, an in situ application for identifying neodymium (Nd) enriched surface materials using multitemporal hyperspectral images (HySpex sensor) is presented. Because of the narrow shape and shallow depth of the neodymium absorption feature, a method was developed for enhancing and extracting the necessary information on neodymium from image spectra, even under non-optimal illumination conditions. For this purpose, the two following approaches were developed: (1) reducing noise and analyzing changing illumination conditions by averaging multitemporal image scenes and (2) enhancing the depth of the desired absorption band by deconvolving every image spectrum with a Gaussian curve while the rest of the spectrum remains unchanged (Richardson-Lucy deconvolution). To evaluate these findings, nine field samples from the Fen complex in Norway were analyzed using handheld X-ray fluorescence devices and by conducting detailed laboratory-based geochemical rare earth element determinations. The result is a qualitative outcrop map that highlights zones enriched in neodymium. To reduce the influence of non-optimal illumination, particularly at the studied site, a minimum of seven single acquisitions is required. Sharpening the neodymium absorption band allows for robust mapping, even at the outer zones of enrichment. From the geochemical investigations, we found that iron oxides decrease the applicability of the method. However, iron-related absorption bands can be used as secondary indicators for sulfidic ore zones that are mainly enriched with rare earth elements. In summary, we found that hyperspectral spectroscopy is a noninvasive, fast and cost-saving method for determining neodymium at outcrop surfaces.
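The band-sharpening step can be illustrated with a plain NumPy Richardson-Lucy deconvolution of a synthetic 1-D spectrum. Kernel width, dip depth, and iteration count below are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def richardson_lucy_1d(spectrum, psf, iterations=50):
    """Richardson-Lucy deconvolution of a 1-D spectrum with a known blur kernel."""
    estimate = spectrum.copy()
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")
        relative = spectrum / np.maximum(conv, 1e-12)   # avoid division by zero
        estimate = estimate * np.convolve(relative, psf_mirror, mode="same")
    return estimate

# synthetic reflectance spectrum with a narrow, shallow absorption dip
x = np.linspace(-1.0, 1.0, 201)
psf = np.exp(-0.5 * (x / 0.05) ** 2)
psf /= psf.sum()                                  # normalized Gaussian blur
true = 1.0 - 0.2 * np.exp(-0.5 * (x / 0.02) ** 2)
observed = np.convolve(true, psf, mode="same")    # blurred (shallower) dip
sharpened = richardson_lucy_1d(observed, psf)
# the deconvolved dip at the band center is deeper than the observed one
print(round(observed[100], 3), round(sharpened[100], 3))
```

Only the narrow-band structure is amplified toward its original depth; the smooth continuum is essentially unchanged, which is the effect the abstract describes.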
Remote sensing technology serves as a powerful tool for analyzing geospatial characteristics of flood inundation events at various scales. However, the performance of remote sensing methods depends heavily on the flood characteristics and landscape settings. Difficulties might be encountered in mapping the extent of localized flooding with shallow water on riverine floodplain areas, where patches of herbaceous vegetation are interspersed with open water surfaces. To address the difficulties in mapping inundation on areas with complex water and vegetation compositions, a high spatial resolution dataset has to be used to reduce the problem of mixed pixels. The main objective of our study was to investigate the possibilities of using a single-date WorldView-2 image of very high spatial resolution and supporting data to analyze spatial patterns of localized flooding on a riverine floodplain. We used a decision tree algorithm with various combinations of input variables including spectral bands of the WorldView-2 image, selected spectral indices dedicated to mapping water surfaces and vegetation, and topographic data. The overall accuracies of the twelve flood extent maps derived with the decision tree method, applied to both pixels and image objects, ranged between 77% and 95%. The highest overall mapping accuracy was achieved with a method that utilized all available input data and object-based image analysis. Our study demonstrates the possibility of using single-date WorldView-2 data for analyzing flooding events at high spatial detail despite the absence of spectral bands from the shortwave infrared region that are frequently used in water-related studies. Our study also highlights the importance of topographic data in inundation analyses. The greatest difficulties were met in mapping water surfaces under dense canopy herbaceous vegetation, due to limited water surface exposure and the dominance of vegetation reflectance.
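A decision-tree classification of this kind can be sketched with scikit-learn on synthetic data. The band values, the NDWI-style water index, the elevation feature, and the flooding rule are all hypothetical stand-ins for the WorldView-2 and topographic inputs used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 600
# synthetic stand-ins for the real inputs (all values hypothetical)
green = rng.uniform(0.05, 0.30, n)                # green-band reflectance
nir = rng.uniform(0.02, 0.50, n)                  # near-infrared reflectance
elev = rng.uniform(0.0, 2.0, n)                   # height above river (m)
ndwi = (green - nir) / (green + nir)              # NDWI-style water index
# synthetic truth: flooded where the water index is high and terrain is low
flooded = ((ndwi > 0.0) & (elev < 1.0)).astype(int)

X = np.column_stack([green, nir, ndwi, elev])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, flooded)
print(round(clf.score(X, flooded), 2))            # training accuracy
```

In practice, accuracy would be assessed on independent validation samples rather than on the training set, and the same classifier can be fed either per-pixel or per-object feature vectors, mirroring the pixel- versus object-based comparison in the study.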
Climate or land use?
(2017)
This study intends to contribute to the ongoing discussion on whether land use and land cover changes (LULC) or climate trends have the major influence on the observed increase of flood magnitudes in the Sahel. A simulation-based approach is used for attributing the observed trends to the postulated drivers. For this purpose, the ecohydrological model SWIM (Soil and Water Integrated Model) with a new, dynamic LULC module was set up for the Sahelian part of the Niger River down to Niamey, including the main tributaries Sirba and Goroul. The model was driven with observed, reanalyzed climate and LULC data for the years 1950–2009. In order to quantify the shares of influence, one simulation was carried out with constant land cover as of 1950, and one including LULC. As a quantitative measure, the gradients of the simulated trends were compared to the observed trend. The modeling studies showed that for the Sirba River only the simulation which included LULC was able to reproduce the observed trend. The simulation without LULC showed a positive trend for flood magnitudes, but underestimated the trend significantly. For the Goroul River and the local flood of the Niger River at Niamey, the simulations were only partly able to reproduce the observed trend. In conclusion, the new LULC module enabled some first quantitative insights into the relative influence of LULC and climatic changes. For the Sirba catchment, the results imply that LULC and climatic changes contribute in roughly equal shares to the observed increase in flooding. For the other subcatchments, the results are less clear but show that climatic changes and LULC are both drivers of the flood increase; however, their shares cannot be quantified. Based on these modeling results, we argue for a two-pillar adaptation strategy to reduce current and future flood risk: flood mitigation for reducing the LULC-induced flood increase, and flood adaptation for a general reduction of flood vulnerability.
This study focuses on evaluating the potential of ALOS/PALSAR time-series data to analyze the activation of deep-seated landslides in the foothill zone of the high-mountain Alai range in the southern Tien Shan (Kyrgyzstan). Most previous field-based landslide investigations have revealed that many landslides show indicators of ongoing slow movement in the form of migrating and newly developing cracks. L-band ALOS/PALSAR data for the period between 2007 and 2010 are available for the 484 km² study area. We analyzed these data using the Small Baseline Subset (SBAS) time-series technique to assess the surface deformation related to the activation of landslides. We observed line-of-sight (LOS) deformation rates of up to ±17 mm/year, which, projected along the local steepest slope, correspond to rates of up to −63 mm/year. The obtained rates indicate very slow movement of the deep-seated landslides during the observation period. We also compared these movements with precipitation and earthquake records. The results suggest that the deformation peaks correlate with rainfall in the 3 preceding months and with an earthquake event. Overall, the results of this study indicate the great potential of L-band InSAR time-series analysis for efficient spatiotemporal identification and monitoring of slope activations in this region of high landslide activity in southern Kyrgyzstan.
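The slope projection divides the LOS velocity by the cosine of the angle between the LOS unit vector and the local downslope direction, which is why slope-parallel rates (up to −63 mm/year) exceed the LOS rates (±17 mm/year) in magnitude. A geometric sketch with a hypothetical viewing geometry (vectors in east, north, up components; not the actual ALOS/PALSAR geometry):

```python
import numpy as np

def los_to_slope(v_los, los_unit, slope_unit, min_cos=0.2):
    """Project a line-of-sight (LOS) velocity onto the local downslope direction.

    Assumes motion parallel to the steepest slope and masks near-perpendicular
    geometries where the projection blows up. Generic sketch, not the SBAS chain.
    """
    c = float(np.dot(los_unit, slope_unit))
    if abs(c) < min_cos:
        return float("nan")
    return v_los / c

# hypothetical side-looking geometry and a west-facing 30-degree slope
los = np.array([0.62, -0.10, 0.78])
los /= np.linalg.norm(los)                     # unit LOS vector (E, N, Up)
slope = np.array([-np.cos(np.radians(30.0)), 0.0, -np.sin(np.radians(30.0))])
v_slope = los_to_slope(-17.0, los, slope)      # LOS rate in mm/year (sign conventional)
print(round(v_slope, 1))                       # magnitude exceeds the LOS rate
```

Because |cos| ≤ 1, the projected magnitude is always at least the LOS magnitude; the more oblique the geometry, the larger the amplification.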
In this study, we validate and compare elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30m resolution, the SRTM-C, ASTER GDEM2, stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X were analyzed. The higher-resolution datasets include 12m TanDEM-X, 10m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and 5m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy by comparing standard deviations of the DEM elevations versus 307,509 differential GPS measurements across 4000m of elevation. For the 30m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12m TanDEM-X and 5m ALOS World 3D having < 2m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed low uncertainty across these attributes for SRTM-C (30 m), TanDEM-X (12–30 m), and ALOS World 3D (5–30 m). Single-CoSSC TerraSAR-X/TanDEM-X 10m DEMs and the 30m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30m SRTM-C, 12–30m TanDEM-X, 10m single-CoSSC TerraSAR-X/TanDEM-X, and 5m ALOS World 3D for geomorphic metric comparison in a 66 km² catchment with a distinct river knickpoint. Consistent m/n values were found using chi-plot channel profile analysis, regardless of DEM type and spatial resolution.
Slope, curvature, and drainage area were calculated, and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope length measurements vary little between datasets, curvature displays higher-magnitude measurements with fining resolution. This is especially true for the optical 5m ALOS World 3D DEM, which demonstrated high-frequency noise in 2–8 pixel steps in a Fourier frequency analysis. The improvements in accurate spaceborne radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
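The vertical-accuracy measure used above, the spread of DEM-minus-dGPS elevation differences at control points, reduces to a few lines; the bias and noise magnitudes in this synthetic check are hypothetical.

```python
import numpy as np

def vertical_uncertainty(dem_heights, gps_heights):
    """Mean offset (bias) and standard deviation of DEM-minus-control differences.

    The standard deviation is the measure used to rank DEM quality; generic sketch.
    """
    d = np.asarray(dem_heights, float) - np.asarray(gps_heights, float)
    return d.mean(), d.std(ddof=1)

# synthetic control points: DEM with 0.5 m bias and ~2 m noise (hypothetical)
rng = np.random.default_rng(42)
gps = rng.uniform(1500.0, 5500.0, 1000)          # plateau elevations, metres
dem = gps + 0.5 + rng.normal(0.0, 2.0, 1000)
bias, sd = vertical_uncertainty(dem, gps)
print(round(bias, 2), round(sd, 2))
```

Binning the same differences by terrain slope or aspect, rather than pooling them, reproduces the attribute-wise uncertainty analysis described above.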
We present new experimental data of the low-temperature metastable region of liquid water derived from high-density synthetic fluid inclusions (996–916 kg m−3) in quartz. Microthermometric measurements include: (i) prograde (upon heating) and retrograde (upon cooling) liquid–vapour homogenisation. We used single ultrashort laser pulses to stimulate vapour bubble nucleation in initially monophase liquid inclusions. Water densities were calculated based on prograde homogenisation temperatures using the IAPWS-95 formulation. We found retrograde liquid–vapour homogenisation temperatures in excellent agreement with IAPWS-95. (ii) Retrograde ice nucleation. Raman spectroscopy was used to determine the nucleation of ice in the absence of the vapour bubble. Our ice nucleation data in the doubly metastable region are inconsistent with the low-temperature trend of the spinodal predicted by IAPWS-95, as liquid water with a density of 921 kg m−3 remains in a homogeneous state during cooling down to a temperature of −30.5 °C, where it is transformed into ice whose density corresponds to zero pressure. (iii) Ice melting. Ice melting temperatures of up to 6.8 °C were measured in the absence of the vapour bubble, i.e. in the negative pressure region. (iv) Spontaneous retrograde and, for the first time, prograde vapour bubble nucleation. Prograde bubble nucleation occurred upon heating at temperatures above ice melting. The occurrence of prograde and retrograde vapour bubble nucleation in the same inclusions indicates a maximum of the bubble nucleation curve in the ϱ–T plane at around 40 °C. The new experimental data represent valuable benchmarks to evaluate and further improve theoretical models describing the p–V–T properties of metastable water in the low-temperature region.
Information about the impact of climate change on river discharge is vitally important for planning adaptation measures. The future changes can affect different water-related sectors. The main goal of this study was to investigate potential water resource changes in Ukraine, focusing on three mesoscale river catchments (Teteriv, Upper Western Bug, and Samara) characteristic of different geographical zones. The catchment-scale watershed model SWIM (Soil and Water Integrated Model) was set up, calibrated, and validated for the three catchments under consideration. A set of seven GCM-RCM (General Circulation Model-Regional Climate Model) coupled climate scenarios corresponding to RCPs (Representative Concentration Pathways) 4.5 and 8.5 was used to drive the hydrological catchment model. The climate projections used in the study were grouped into three combinations of low-, intermediate-, and high-end scenarios. Our results indicate shifts in the seasonal distribution of runoff in all three catchments. The spring high flow occurs earlier as a result of temperature increases and earlier snowmelt. A fairly robust trend is an increase in river discharge in the winter season, and most of the scenarios show a potential decrease in river discharge in spring.
Widespread flooding in June 2013 caused damage costs of €6 to 8 billion in Germany, and awoke many memories of the floods in August 2002, which resulted in total damage of €11.6 billion and hence was the most expensive natural hazard event in Germany up to now. The event of 2002 does, however, also mark a reorientation toward an integrated flood risk management system in Germany. Therefore, the flood of 2013 offered the opportunity to review how the measures that politics, administration, and civil society have implemented since 2002 helped to cope with the flood and what still needs to be done to achieve effective and more integrated flood risk management. The review highlights considerable improvements on many levels, in particular (1) an increased consideration of flood hazards in spatial planning and urban development, (2) comprehensive property-level mitigation and preparedness measures, (3) more effective flood warnings and improved coordination of disaster response, and (4) a more targeted maintenance of flood defense systems. In 2013, this led to more effective flood management and to a reduction of damage. Nevertheless, important aspects remain unclear and need to be clarified. This particularly holds for balanced and coordinated strategies for reducing and overcoming the impacts of flooding in large catchments, cross-border and interdisciplinary cooperation, the role of the general public in the different phases of flood risk management, as well as a transparent risk transfer system. Recurring flood events reveal that flood risk management is a continuous task. Hence, risk drivers, such as climate change, land-use changes, economic developments, or demographic change and the resultant risks must be investigated at regular intervals, and risk reduction strategies and processes must be reassessed as well as adapted and implemented in a dialogue with all stakeholders.
In June 2013, widespread flooding and consequent damage and losses occurred in Central Europe, especially in Germany. This paper explores what data are available to investigate the adverse impacts of the event, what kind of information can be retrieved from these data and how well data and information fulfil requirements that were recently proposed for disaster reporting on the European and international levels. In accordance with the European Floods Directive (2007/60/EC), impacts on human health, economic activities (and assets), cultural heritage and the environment are described on the national and sub-national scale. Information from governmental reports is complemented by communications on traffic disruptions and surveys of flood-affected residents and companies.
Overall, the impacts of the flood event in 2013 were manifold. The study reveals that flood-affected residents suffered from a large range of impacts, among which mental health and supply problems were perceived more seriously than financial losses. The most frequent damage type among affected companies was business interruption. This demonstrates that the current scientific focus on direct (financial) damage is insufficient to describe the overall impacts and severity of flood events.
The case further demonstrates that procedures and standards for impact data collection in Germany are largely missing. Present impact data in Germany are fragmentary, heterogeneous, incomplete and difficult to access. In order to fulfil, for example, the monitoring and reporting requirements of the Sendai Framework for Disaster Risk Reduction 2015–2030, adopted in March 2015 in Sendai, Japan, greater efforts in impact data collection are needed.
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards of glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of the modeled peak discharges as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. With 66% of the sampled HPP located on potential GLOF tracks, up to one third of these HPP could experience GLOF discharges well above local design floods, as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
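The Monte Carlo step, propagating uncertain outburst volumes into a spread of peak discharges, can be sketched with a Clague-Mathews-type empirical power law. The lognormal volume distribution and its parameters are hypothetical, and the study's flood-wave propagation and dam-breach modeling are not reproduced.

```python
import numpy as np

def peak_discharge(volume_m3):
    # Clague-Mathews-type relation: Qp (m^3/s) from outburst volume (m^3);
    # used here only as a generic stand-in for the breach parameterization
    return 75.0 * (volume_m3 / 1e6) ** 0.67

# Monte Carlo sample of outburst volumes for one hypothetical lake
rng = np.random.default_rng(7)
volumes = rng.lognormal(mean=np.log(5e6), sigma=0.8, size=10_000)
q = peak_discharge(volumes)
lo, med, hi = np.percentile(q, [5, 50, 95])
# the spread (hi/lo) is read as the predictive uncertainty at the lake outlet
print(round(med), round(hi / lo, 1))
```

Repeating this per lake and routing each sampled flood wave downstream yields the site-specific discharge distributions that are compared against local design floods.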
In a recent BAMS article, it is argued that community-based Open Source Software (OSS) could foster scientific progress in weather radar research, and make weather radar software more affordable, flexible, transparent, sustainable, and interoperable.
Nevertheless, it can be challenging for potential developers and users to realize these benefits: tools are often cumbersome to install; different operating systems may have particular issues, or may not be supported at all; and many tools have steep learning curves.
To overcome some of these barriers, we present an open, community-based virtual machine (VM). This VM can be run on any operating system, and guarantees reproducibility of results across platforms. It contains a suite of independent OSS weather radar tools (BALTRAD, Py-ART, wradlib, RSL, and Radx), and a scientific Python stack. Furthermore, it features a suite of recipes that work out of the box and provide guidance on how to use the different OSS tools alone and together. The code to build the VM from source is hosted on GitHub, which allows the VM to grow with its community.
We argue that the VM presents another step toward Open (Weather Radar) Science. It can be used as a quick way to get started, for teaching, or for benchmarking and combining different tools. It can foster the idea of reproducible research in scientific publishing. Being scalable and extendable, it might even allow for real-time data processing.
We expect the VM to catalyze progress toward interoperability, and to lower the barrier for new users and developers, thus extending the weather radar community and user base.
The results of streamflow trend studies are often characterized by mostly insignificant trends and inexplicable spatial patterns. In our study region, Western Austria, this applies especially to trends of annually averaged runoff. However, analysing the altitudinal aspect, we found a trend gradient from higher-altitude to lower-altitude stations, i.e. a pattern of mostly positive annual trends at higher stations and negative ones at lower stations. At mid-altitudes, the trends are mostly insignificant. Here we hypothesize that the streamflow trends are caused by the following two main processes: on the one hand, melting glaciers produce excess runoff at higher-altitude watersheds. On the other hand, rising temperatures potentially alter hydrological conditions in terms of less snowfall, higher infiltration, enhanced evapotranspiration, etc., which in turn results in decreasing streamflow trends at lower-altitude watersheds. At mid-altitudes, these patterns are masked because the resulting positive and negative trends balance each other. To support these hypotheses, we attempted to attribute the detected trends to specific causes. For this purpose, we analysed trends of filtered daily streamflow data, as the causes of these changes might be restricted to a smaller temporal scale than the annual one. This allowed for the explicit determination of the exact days of year (DOYs) when certain streamflow trends emerge; these were then linked with the corresponding DOYs of trends and characteristic dates in other observed variables, e.g. the average DOY when temperature crosses the freezing point in spring. Based on these analyses, an empirical statistical model was derived that was able to simulate daily streamflow trends sufficiently well. Analyses of subdaily streamflow changes provided additional insights.
Finally, the present study supports many modelling approaches in the literature which found that the main drivers of alpine streamflow changes are increased glacial melt, earlier snowmelt and lower snow accumulation in wintertime.
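The highly resolved, per-DOY trend idea, one robust trend estimate for each Julian day across the record, can be sketched as follows. The synthetic record, the Theil-Sen estimator, and the snowmelt-shift rate are illustrative assumptions; the filtering applied to the daily series in the study is omitted.

```python
import numpy as np

def theil_sen(y):
    """Theil-Sen slope of a yearly series (units per year); robust trend estimate."""
    y = np.asarray(y, float)
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n) for j in range(i + 1, n)]
    return float(np.median(slopes))

def doy_trends(q):
    """One trend per day of year from a discharge array shaped (years, 365)."""
    return np.array([theil_sen(q[:, d]) for d in range(q.shape[1])])

# synthetic 28-year record: a snowmelt peak near DOY 150 shifting 0.5 d/yr earlier,
# giving positive trends on the spring flank and negative ones on the summer flank
years = np.arange(28)
doys = np.arange(365)
centers = 150.0 - 0.5 * years[:, None]
q = 5.0 + 10.0 * np.exp(-0.5 * ((doys[None, :] - centers) / 20.0) ** 2)
trends = doy_trends(q)
print(round(trends[120], 2), round(trends[170], 2))  # positive, then negative
```

Plotting `trends` against `doys` yields exactly the kind of seasonally resolved trend curve whose sign changes can be matched to characteristic dates such as the spring freezing-point crossing.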
Climate change is likely to impact the seasonality and generation processes of floods in the Nordic countries, which has direct implications for flood risk assessment, design flood estimation, and hydropower production management. Using a multi-model/multi-parameter approach to simulate daily discharge for a reference (1961–1990) and a future (2071–2099) period, we analysed the projected changes in flood seasonality and generation processes in six catchments with mixed snowmelt/rainfall regimes under the current climate in Norway. The multi-model/multi-parameter ensemble consists of (i) eight combinations of global and regional climate models, (ii) two methods for adjusting the climate model output to the catchment scale, and (iii) one conceptual hydrological model with 25 calibrated parameter sets. Results indicate that autumn/winter events become more frequent in all catchments considered, which leads to an intensification of the current autumn/winter flood regime for the coastal catchments, a reduction of the dominance of spring/summer flood regimes in a high-mountain catchment, and a possible systematic shift in the current flood regimes from spring/summer to autumn/winter in the two catchments located in northern and south-eastern Norway. The changes in flood regimes result from increasing event magnitudes or frequencies, or a combination of both, during autumn and winter. Changes towards more dominant autumn/winter events correspond to an increasing relevance of rainfall as a flood-generating process (FGP), which is most pronounced in those catchments with the largest shifts in flood seasonality. Here, rainfall replaces snowmelt as the dominant FGP, primarily due to increasing temperature. We further analysed the contributions of the ensemble components to the overall uncertainty in the projected changes and found that the climate projections and the methods for downscaling or bias correction tend to be the largest contributors.
The relative role of hydrological parameter uncertainty, however, is highest for those catchments showing the largest changes in flood seasonality, which confirms the lack of robustness in hydrological model parameterization for simulations under transient hydrometeorological conditions.