RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
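A minimal sketch of that recursive scheme, assuming a model object with a Keras-style predict() method and four consecutive input frames (both are assumptions for illustration; the published model code defines the exact interface):

```python
import numpy as np

def recursive_nowcast(model, frames, n_steps=12):
    """Extend a 5 min nowcast to 60 min (12 steps) by feeding each
    prediction back into the model as input."""
    history = list(frames)                   # most recent radar scans
    nowcast = []
    for _ in range(n_steps):
        x = np.stack(history[-4:])           # last four frames (assumed)
        y = model.predict(x[np.newaxis])[0]  # predicted field at +5 min
        nowcast.append(y)
        history.append(y)                    # recursion step
    return np.stack(nowcast)                 # shape (n_steps, H, W)
```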
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
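The smoothing diagnosis can be reproduced with a radially averaged power spectrum; the helper below is a generic sketch (square pixels and a roughly square domain assumed), not code from the paper:

```python
import numpy as np

def radial_power_spectrum(field, res_km=1.0):
    """Radially averaged power spectral density of a 2-D rain field;
    comparing nowcast vs. observation reveals lost small-scale power."""
    ny, nx = field.shape
    psd = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)  # integer radius
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    radial = sums / np.maximum(counts, 1)               # mean power per radius
    k = np.arange(radial.size)                          # cycles per domain
    wavelength_km = np.where(k > 0, nx * res_km / k, np.inf)
    return wavelength_km, radial
```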
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible, as accurately as possible, while remaining cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis – a complex network being a collection of nodes interconnected by links – to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field that has gained significant momentum over the last few years in areas such as brain networks, social networks, technological networks, and climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure – the weighted degree–betweenness (WDB) measure – to evaluate the importance of nodes in a network. It is compared to previously proposed measures on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
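The exact WDB formula is developed in the paper; as a rough illustration of node ranking in a station network, the sketch below combines weighted degree and betweenness centrality with networkx. The simple product used here is a stand-in, not the published measure:

```python
import networkx as nx

def degree_betweenness_rank(G, weight="weight"):
    """Illustrative node-importance score combining weighted degree
    and betweenness centrality (a stand-in for the WDB measure)."""
    deg = dict(G.degree(weight=weight))
    btw = nx.betweenness_centrality(G, weight=weight)
    return {n: deg[n] * btw[n] for n in G}

# toy stand-in for a gauge network; highest scores mark critical nodes
G = nx.karate_club_graph()
ranking = sorted(degree_betweenness_rank(G).items(),
                 key=lambda kv: kv[1], reverse=True)
```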
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
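The temporal interpolation step mentioned above can be sketched as a centred moving average over nearby overpass-based bias estimates; the window length below is an illustrative choice, not a value from the study:

```python
import numpy as np

def interpolate_bias(times, bias, t_query, window_days=30.0):
    """Moving-average interpolation of calibration bias between SR
    overpasses, so GR data can be corrected on days without a prompt
    overpass. times/bias: overpass days and bias estimates (dB)."""
    times, bias, t_query = map(np.asarray, (times, bias, t_query))
    out = np.full(t_query.shape, np.nan)
    for i, t in enumerate(t_query):
        sel = np.abs(times - t) <= window_days / 2   # overpasses in window
        if sel.any():
            out[i] = bias[sel].mean()
    return out
```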
Introducing PebbleCounts
(2019)
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting or photo sieving, but these are achievable only at the 1–10 m² scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches for photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domain and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and filtering of suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to 12 × 1 m² orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, only 70 % of the grains at best are correct identifications, and typically around 50 %. PebbleCounts operates most effectively at the 1 m² patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 m² areas. These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
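The clustering step can be illustrated with scikit-learn: each pixel is described by down-weighted spatial coordinates plus its colour, so clusters stay spatially compact while following spectral breaks between grains. This is a simplified sketch of the PebbleCounts idea; windowing, masking, and the interactive grain selection are omitted, and the cluster count and weighting are arbitrary choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_grain_segmentation(rgb, n_clusters=6, xy_weight=0.5):
    """Cluster an image patch jointly in the spatial (x, y) and
    spectral (R, G, B) domains and return a label image."""
    h, w, _ = rgb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        xy_weight * xx.ravel() / w,      # normalised, down-weighted x
        xy_weight * yy.ravel() / h,      # normalised, down-weighted y
        rgb.reshape(-1, 3) / 255.0,      # spectral features
    ])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(h, w)
```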
The interactions between the atmosphere and steep topography in the eastern south–central Andes result in complex relations and inhomogeneous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall and their spatial patterns – both at the valley and mountain-belt scales – are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients in the NW Argentine Andes, stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF's (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients along and across strike of the Andes in dew-point temperature and CAPE that both control rainfall distributions. Second, through a multivariable regression analysis we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE. The correlation changes in space along the climatic and topographic gradients and helps to explain the controlling factors of extreme-rainfall generation. Third, for 90th percentile rainfall, we observe a larger contribution (or higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas as compared to the high-elevation central Andean Plateau. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation area between low and high elevations, especially in the transition zone between the tropical and subtropical areas. Fourth, we find that the parameters of the multivariable regression using CAPE and Td explain rainfall with higher statistical significance for the 90th percentile than for lower rainfall percentiles. Based on our results, the spatial pattern of extreme rainfall events during the past ∼16 years in the south–central Andes can be described by a combination of dew-point temperature and CAPE.
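Per grid cell, the multivariable regression reduces to an ordinary least squares fit of percentile rainfall on CAPE and Td. A minimal numpy version follows; the paper's exact variable transforms and significance testing are not reproduced here:

```python
import numpy as np

def fit_rainfall_regression(cape, td, rain_p90):
    """OLS fit of 90th-percentile rainfall on CAPE and dew-point
    temperature for one grid cell; returns coefficients and R^2."""
    X = np.column_stack([np.ones_like(cape), cape, td])
    coef, *_ = np.linalg.lstsq(X, rain_p90, rcond=None)
    pred = X @ coef
    ss_res = np.sum((rain_p90 - pred) ** 2)
    ss_tot = np.sum((rain_p90 - rain_p90.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot   # (intercept, b_CAPE, b_Td), R^2
```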
The Arctic-Boreal regions experience strong changes in air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Although optical remote sensing offers a variety of techniques to assess and monitor landscape changes, persistent cloud cover considerably reduces the number of usable images. Combining data from multiple platforms, however, promises to increase the number of usable images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility of using Landsat and Sentinel-2 images together in time series analyses to achieve temporally dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel pairs, downscaled to 60 m, of corresponding bands and derived an ordinary least squares regression for every band combination. The acquired coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons showed an overall good fit between Landsat-8 and Sentinel-2 images even before adjustment. The ordinary least squares regression analyses underline the generally good spectral fit, with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after bandpass adjustment of Sentinel-2 values to Landsat-8 shows a nearly perfect alignment between the same-day images. The spectral band adjustment thus succeeds very well in adjusting Sentinel-2 spectral values to Landsat-8 in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be combined into temporally dense time series and applied to assess permafrost landscape changes in Eastern Siberia. Remaining differences between the sensors can be attributed to several factors, including heterogeneous terrain, poor cloud and cloud shadow masking, and mixed pixels.
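The per-band adjustment amounts to fitting gain and offset on the same-day 60 m pixel pairs and applying them to the Sentinel-2 band; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def bandpass_adjustment(s2_band, l8_band):
    """Derive OLS gain/offset mapping Sentinel-2 reflectance to the
    Landsat-8 spectral response from same-day 60 m pixel pairs."""
    mask = np.isfinite(s2_band) & np.isfinite(l8_band)
    slope, intercept = np.polyfit(s2_band[mask], l8_band[mask], deg=1)
    return slope, intercept

# adjust a Sentinel-2 band so it can be merged with a Landsat series:
# slope, intercept = bandpass_adjustment(s2_red.ravel(), l8_red.ravel())
# s2_red_adjusted = slope * s2_red + intercept
```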
Hydrometeorological hazards caused losses of approximately 110 billion US dollars in 2016 worldwide. Current damage estimations do not consider uncertainties in a comprehensive way, and they are not consistent between spatial scales: aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated against, company damage data from the 2013 flood in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales; it can therefore also be used for pre-event analyses and risk assessments. This method takes hydrometeorological damage estimation and risk assessment to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
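A toy sketch of the idea: instead of one deterministic stage-damage curve, each object's damage ratio is drawn from a distribution, so aggregated estimates carry an uncertainty band at any spatial scale. The depth relation and beta parameterisation below are purely illustrative, not the model from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def object_damage_distribution(asset_values, water_depths, n=10_000):
    """Monte Carlo damage estimation at the object level: each
    object's damage ratio is sampled from a depth-dependent beta
    distribution and losses are aggregated across objects."""
    totals = np.zeros(n)
    for value, depth in zip(asset_values, water_depths):
        m = np.clip(0.15 * depth, 1e-6, 1 - 1e-6)  # toy mean damage ratio
        a = 2.0
        b = a * (1 - m) / m                        # beta mean equals m
        totals += value * rng.beta(a, b, size=n)
    return totals                                  # full loss distribution

# losses = object_damage_distribution([2e6, 5e5], [1.2, 0.4])
# print(np.percentile(losses, [5, 50, 95]))        # uncertainty band
```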
The Sea of Marmara, in northwestern Turkey, is a transition zone where the dextral North Anatolian Fault zone (NAFZ) propagates westward from the Anatolian Plate to the Aegean Sea Plate. The area is of interest in the context of the seismic hazard to Istanbul, a metropolitan area with about 15 million inhabitants. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the crustal heterogeneities is still missing. To assess if and how crustal heterogeneities are related to the NAFZ segmentation below the Sea of Marmara, we develop new crustal-scale 3-D density models that integrate geological and seismological data and that are additionally constrained by 3-D gravity modeling. For the latter, we use two different gravity datasets: global satellite data and local marine gravity observations. Considering the two different datasets and the general non-uniqueness in potential field modeling, we suggest three possible “end-member” solutions that are all consistent with the observed gravity field and illustrate the spectrum of possible solutions. These models indicate that the observed gravitational anomalies originate from significant density heterogeneities within the crust. Two layers of sediments, one syn-kinematic and one pre-kinematic with respect to the formation of the Sea of Marmara, are underlain by a heterogeneous crystalline crust. A felsic upper crystalline crust (average density of 2720 kg m⁻³) and an intermediate to mafic lower crystalline crust (average density of 2890 kg m⁻³) appear to be cross-cut by two large, dome-shaped mafic high-density bodies (density of 2890 to 3150 kg m⁻³) of considerable thickness above a rather uniform lithospheric mantle (3300 kg m⁻³). The spatial correlation between two major bends of the main Marmara fault and the location of the high-density bodies suggests that the distribution of lithological heterogeneities within the crust controls the rheological behavior along the NAFZ and, consequently, may influence fault segmentation and thus seismic hazard assessment in the region.
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño–Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically occur at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between the SST time series at a range of scales and thereby allows deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of – at a certain timescale – similarly evolving SSTs and distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also suggest new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
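A compact sketch of the construction, using PyWavelets for the discrete wavelet decomposition. The study's estimator and significance testing are more elaborate; the wavelet, scale, and the 0.8 correlation threshold here are arbitrary illustrative choices:

```python
import numpy as np
import pywt

def wavelet_scale_network(sst, scale=3, wavelet="db2", thresh=0.8):
    """Build a correlation network between SST grid cells at one
    wavelet scale. sst: array of shape (n_cells, n_time)."""
    details = np.array([
        pywt.wavedec(ts, wavelet, level=scale)[1]   # level-'scale' details
        for ts in sst
    ])
    corr = np.corrcoef(details)                     # scale-specific correlation
    adjacency = np.abs(corr) > thresh               # threshold -> network links
    np.fill_diagonal(adjacency, False)
    return adjacency
```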
Determining the optimal grid resolution for topographic analysis on an airborne lidar dataset
(2019)
Digital elevation models (DEMs) are a gridded representation of the surface of the Earth and typically contain uncertainties due to data collection and processing. Slope and aspect estimates on a DEM contain errors and uncertainties inherited from the representation of a continuous surface as a grid (referred to as truncation error; TE) and from any DEM uncertainty. We analyze in detail the impacts of TE and propagated elevation uncertainty (PEU) on slope and aspect.
Using synthetic data as a control, we define functions to quantify both TE and PEU for arbitrary grids. We then develop a quality metric which captures the combined impact of both TE and PEU on the calculation of topographic metrics. Our quality metric allows us to examine the spatial patterns of error and uncertainty in topographic metrics and to compare calculations on DEMs of different sizes and accuracies.
Using lidar data with a point density of ∼10 pts m⁻² covering Santa Cruz Island (SCI) in southern California, we are able to generate DEMs and uncertainty estimates at several grid resolutions. Slope (aspect) errors on the 1 m dataset are on average 0.3° (0.9°) from TE and 5.5° (14.5°) from PEU. We calculate an optimal DEM resolution for our SCI lidar dataset of 4 m that minimizes the error bounds on topographic metric calculations due to the combined influence of TE and PEU for both slope and aspect calculations over the entire island. Average slope (aspect) errors from the 4 m DEM are 0.25° (0.75°) from TE and 5° (12.5°) from PEU. While the smallest grid resolution possible from the high-density SCI lidar is not necessarily optimal for calculating topographic metrics, high point-density data are essential for measuring DEM uncertainty across a range of resolutions.
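The separation of the two error sources can be illustrated in code: slope computed from the gridded DEM via finite differences (which carries the truncation error), and PEU estimated by Monte Carlo perturbation with the per-pixel elevation uncertainty. This is a sketch only; the paper derives its quality metric analytically:

```python
import numpy as np

def slope_with_uncertainty(dem, sigma_z, cell=1.0, n=100, seed=0):
    """Slope (degrees) plus its propagated elevation uncertainty,
    estimated by perturbing the DEM n times with noise of standard
    deviation sigma_z (scalar or per-pixel array)."""
    rng = np.random.default_rng(seed)

    def slope_deg(z):
        gy, gx = np.gradient(z, cell)               # finite differences
        return np.degrees(np.arctan(np.hypot(gx, gy)))

    base = slope_deg(dem)
    perturbed = np.stack([
        slope_deg(dem + rng.normal(0.0, sigma_z, dem.shape))
        for _ in range(n)
    ])
    return base, perturbed.std(axis=0)              # slope, 1-sigma PEU
```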
The ICDP "PaleoVan" drilling campaign at Lake Van, Turkey, provided a long (> 100 m) record of lacustrine subsurface sedimentary microbial cell abundance. After the ICDP campaign at Potrok Aike, Argentina, this is only the second time that deep lacustrine cell counts have been documented. Two sites were cored and revealed a strikingly similar cell distribution despite differences in organic matter content and microbial activity. Although shifted towards higher values, cell counts from Lake Potrok Aike reveal very similar distribution patterns with depth. The lacustrine cell count data are significantly different from published marine records; the most probable cause is a difference in sedimentary organic matter composition, with marine sediments containing a higher fraction of labile organic matter. Previous studies showed that microbial activity and abundance increase centimetres to metres around geologic interfaces. The finely laminated Lake Van sediment allowed us to study this phenomenon at the microscale. We sampled at the scale of individual laminae and, in some depth intervals, found large differences in microbial abundance between the different laminae. This small-scale heterogeneity is normally overlooked due to much larger sampling intervals that integrate over several centimetres. However, not all laminated intervals exhibit such large differences in microbial abundance, and some non-laminated horizons show large variability on the millimetre scale as well. The reasons for such contrasting observations remain elusive, but they indicate that the heterogeneity of microbial abundance in subsurface sediments has not been sufficiently taken into account. These findings have implications not just for microbiological studies but for geochemistry as well, as the large differences in microbial abundance clearly show that there are distinct microhabitats that deviate considerably from the surrounding layers.
The semiarid northeast of Brazil is one of the most densely populated dryland regions in the world and recurrently affected by severe droughts. Thus, reliable seasonal forecasts of streamflow and reservoir storage are of high value for water managers. Such forecasts can be generated by applying either hydrological models representing underlying processes or statistical relationships exploiting correlations among meteorological and hydrological variables. This work evaluates and compares the performances of seasonal reservoir storage forecasts derived by a process-based hydrological model and a statistical approach.
Driven by observations, both models achieve similar simulation accuracies. In a hindcast experiment, however, the accuracy of estimating regional reservoir storages was considerably lower for the process-based hydrological model, whereas the resolution and reliability of drought event predictions were similar for both approaches. Further investigations into the deficiencies of the process-based model revealed a significant influence of antecedent wetness conditions and a higher sensitivity of model prediction performance to rainfall forecast quality.
Within the scope of this study, the statistical model proved to be the more straightforward approach for predictions of reservoir level and drought events at regionally and monthly aggregated scales. However, for forecasts at finer scales of space and time or for the investigation of underlying processes, the costly initialisation and application of a process-based model can be worthwhile. Furthermore, the application of innovative data products, such as remote sensing data, and operational model correction methods, like data assimilation, may allow for an enhanced exploitation of the advanced capabilities of process-based hydrological models.
We measure valence-to-core x-ray emission spectra of compressed crystalline GeO₂ up to 56 GPa and of amorphous GeO₂ up to 100 GPa. In a novel approach, we extract the Ge coordination number and mean Ge-O distances from the emission energy and the intensity of the Kβ'' emission line. The spectra of high-pressure polymorphs are calculated using the Bethe-Salpeter equation. Trends observed in the experimental and calculated spectra are found to match only when utilizing an octahedral model. The results reveal persistent octahedral Ge coordination with increasing distortion, similar to the compaction mechanism in the sequence of octahedrally coordinated crystalline GeO₂ high-pressure polymorphs.
Extreme weather events are likely to occur more often under climate change and the resulting effects on ecosystems could lead to a further acceleration of climate change. But not all extreme weather events lead to extreme ecosystem response. Here, we focus on hazardous ecosystem behaviour and identify coinciding weather conditions. We use a simple probabilistic risk assessment based on time series of ecosystem behaviour and climate conditions. Given the risk assessment terminology, vulnerability and risk for the previously defined hazard are estimated on the basis of observed hazardous ecosystem behaviour.
We apply this approach to extreme responses of terrestrial ecosystems to drought, defining the hazard as a negative net biome productivity over a 12-month period. We show an application for two selected sites using data for 1981–2010 and then apply the method at the pan-European scale for the same period, based on numerical modelling results (LPJmL for ecosystem behaviour; ERA-Interim data for climate).
Our site-specific results demonstrate the applicability of the proposed method, using the standardised precipitation–evapotranspiration index (SPEI) to describe the climate condition. The site in Spain provides an example of vulnerability to drought because the expected value of the SPEI is 0.4 lower for hazardous than for non-hazardous ecosystem behaviour. In northern Germany, on the contrary, the site is not vulnerable to drought because the SPEI expectation values imply wetter conditions in the hazard case than in the non-hazard case.
At the pan-European scale, ecosystem vulnerability to drought is calculated for the Mediterranean and temperate regions, whereas Scandinavian ecosystems are vulnerable under conditions without water shortages. These first model-based applications indicate the conceptual advantages of the proposed method by focusing on the identification of critical weather conditions for which we observe hazardous ecosystem behaviour in the analysed data set. Application of the method to empirical time series and to future climate would be important next steps to test the approach.
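In code, the vulnerability diagnosis boils down to comparing conditional expectations of the climate index; a minimal sketch with the hazard defined as in the study (negative 12-month net biome productivity):

```python
import numpy as np

def drought_vulnerability(spei, nbp_12m):
    """E[SPEI | no hazard] - E[SPEI | hazard], where hazard means a
    negative net biome productivity over a 12-month period. A positive
    value (drier conditions during hazards) indicates vulnerability to
    drought, e.g. the 0.4 found at the Spanish site."""
    hazard = nbp_12m < 0.0
    return spei[~hazard].mean() - spei[hazard].mean()
```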
In the Arctic and high mountains it is common to measure vertical changes of ice sheets and glaciers via digital elevation model (DEM) differencing. This requires the signal of change to outweigh the noise associated with the datasets. Excluding large landslides, land-level change on the ice-free Earth is smaller in vertical magnitude and thus requires more accurate DEMs for differencing and identification of change. Previously, this has required meter- to submeter-scale data at small spatial scales. Following careful corrections, we are able to measure land-level changes in gravel-bed channels and steep hillslopes in the south-central Andes using the SRTM-C (collected in 2000) and the TanDEM-X (collected from 2010 to 2015) near-global 12–30 m DEMs. Long-standing errors in the SRTM-C are corrected using the TanDEM-X as a control surface and applying cosine-fit co-registration to remove ∼1/10 pixel (∼3 m) shifts, fast Fourier transform (FFT) analysis and filtering to remove SRTM-C short- and long-wavelength stripes, and blocked shifting to remove remaining complex biases. The datasets are then differenced and outlier pixels are identified as a potential signal for the case of gravel-bed channels and hillslopes. We are able to identify signals of incision and aggradation (with magnitudes down to ∼3 m in the best case) in two > 100 km river reaches, with increased geomorphic activity downstream of knickpoints. Anthropogenic gravel excavation and piling is prominently measured, with magnitudes exceeding ±5 m (up to > 10 m for large piles). These values correspond to conservative average rates of 0.2 to > 0.5 m yr⁻¹ for vertical changes in gravel-bed rivers. For hillslopes, since we require stricter cutoffs for noise, we are only able to identify one major landslide in the study area, with a deposit volume of 16 ± 0.15 × 10⁶ m³. Additional signals of change can be garnered from TanDEM-X auxiliary layers; however, these are more difficult to quantify. The methods presented can be extended to any region of the world with SRTM-C and TanDEM-X coverage where vertical land-level changes are of interest, with the caveat that remaining vertical uncertainties, primarily in the SRTM-C, limit detection in steep and complex topography.
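The stripe removal can be sketched as a notch filter in the 2-D spectrum of the SRTM-C minus TanDEM-X difference grid. The masking geometry below is illustrative only (one stripe orientation, arbitrary band widths); the study locates the stripe frequencies explicitly before filtering:

```python
import numpy as np

def estimate_stripes(diff, half_width=2, keep=10):
    """Estimate periodic stripes from a difference grid (SRTM-C minus
    TanDEM-X control surface) by isolating energy in a narrow spectral
    band while preserving the low-frequency core. Subtract the returned
    surface from the SRTM-C to correct it."""
    F = np.fft.fftshift(np.fft.fft2(diff))
    ny, nx = diff.shape
    cy, cx = ny // 2, nx // 2
    band = np.zeros_like(F, dtype=bool)
    band[cy - half_width:cy + half_width + 1, :] = True   # stripe band
    band[:, cx - keep:cx + keep + 1] = False              # keep real signal
    return np.fft.ifft2(np.fft.ifftshift(np.where(band, F, 0))).real
```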
We explore the potential of spaceborne radar (SR) observations from the Ku-band precipitation radars onboard the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites as a reference to quantify the ground radar (GR) reflectivity bias. To this end, the 3-D volume-matching algorithm proposed by Schwaller and Morris (2011) is implemented and applied to 5 years (2012–2016) of observations. We further extend the procedure by a framework to take into account the data quality of each ground radar bin. Through these methods, we are able to assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching GR–SR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction as the basis of a quality index. As a result, we can increase the consistency of SR and GR observations, and thus the precision of calibration bias estimates. The remaining scatter between GR and SR reflectivity as well as the variability of bias estimates between overpass events indicate, however, that other error sources are not yet fully addressed. Still, our study provides a framework to introduce any other quality variables that are considered relevant in a specific context. The code that implements our analysis is based on the wradlib open-source software library, and is, together with the data, publicly available to monitor radar calibration or to scrutinize long series of archived radar data back to December 1997, when TRMM became operational.
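The final averaging step is simple once the volume matching (implemented on top of the wradlib library) has produced matched reflectivities and a quality index per sample; a sketch of that step:

```python
import numpy as np

def quality_weighted_bias(gr_dbz, sr_dbz, quality):
    """GR calibration bias (dB) as the quality-weighted average of
    matched GR-SR reflectivity differences; here the quality index
    derives from the beam blockage fraction."""
    diff = np.asarray(gr_dbz) - np.asarray(sr_dbz)
    w = np.asarray(quality)
    return np.sum(w * diff) / np.sum(w)

# bias = quality_weighted_bias(gr, sr, q)
# gr_corrected = gr - bias      # apply to monitor or correct the GR
```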
Mapping Damage-Affected Areas after Natural Hazard Events Using Sentinel-1 Coherence Time Series
(2018)
The emergence of the Sentinel-1A and 1B satellites now offers freely available and widely accessible Synthetic Aperture Radar (SAR) data. Near-global coverage and a rapid repeat time (6–12 days) give Sentinel-1 data the potential to be widely used for monitoring the Earth's surface. Subtle land-cover and land surface changes can affect the phase and amplitude of the C-band SAR signal, and thus the coherence between two images collected before and after such changes. Analysis of SAR coherence therefore serves as a rapidly deployable and powerful tool to track both seasonal changes and rapid surface disturbances following natural disasters. An advantage of using Sentinel-1 C-band radar data is the ability to easily construct time series of coherence for a region of interest at low cost. In this paper, we propose a new method for the detection of Potentially Affected Areas (PAAs) following a natural hazard event. Based on the coherence time series, the proposed method (1) determines the natural variability of coherence within each pixel in the region of interest, accounting for factors such as seasonality and the inherent noise of variable surfaces; and (2) compares pixel-by-pixel syn-event coherence to the temporal coherence distribution to identify where statistically significant coherence loss has occurred. The user can choose which syn-event coherence value (e.g., the 1st or 5th percentile of the pre-event distribution) constitutes a PAA and can integrate pertinent regional data, such as population density, to rank and prioritise PAAs. We apply the method to two case studies, Sarpol-e, Iran, following the 2017 Iran–Iraq earthquake, and a landslide-prone region of NW Argentina, to demonstrate how rapid identification and interpretation of potentially affected areas can be performed shortly after a natural hazard event.
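The per-pixel test reduces to comparing syn-event coherence against a percentile of each pixel's own pre-event coherence history; a minimal numpy sketch (the percentile is the user's choice, as described above):

```python
import numpy as np

def detect_paa(pre_event_stack, syn_event, percentile=5.0):
    """Flag Potentially Affected Areas: a pixel counts as a PAA when
    its syn-event coherence falls below the chosen percentile of its
    own pre-event coherence distribution, so seasonality and surface
    noise set the threshold per pixel.
    pre_event_stack: (n_pairs, H, W); syn_event: (H, W)."""
    threshold = np.nanpercentile(pre_event_stack, percentile, axis=0)
    return syn_event < threshold      # boolean PAA mask
```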
Observed recent and expected future increases in the frequency and intensity of climatic extremes in central Europe may pose critical challenges for domestic tree species. Continuous dendrometer recordings provide a valuable source of information on tree stem radius variations, offering the possibility to study a tree's response to environmental influences at high temporal resolution. In this study, we analyze stem radius variations (SRV) of three domestic tree species (beech, oak, and pine) from 2012 to 2014. We use the novel statistical approach of event coincidence analysis (ECA) to investigate the simultaneous occurrence of extreme daily weather conditions and extreme SRVs, where extremes are defined with respect to the common values at a given phase of the annual growth period. Besides defining extreme events based on individual meteorological variables, we additionally introduce conditional and joint ECA as new multivariate extensions of the original methodology and apply them to test 105 different combinations of variables regarding their impact on SRV extremes. Our results reveal a strong susceptibility of all three species to the extremes of several meteorological variables, yet the inter-species differences in the response to these extremes are comparatively small. The obtained results provide a thorough extension of previous correlation-based studies by focusing on the timing of climatic extremes. We suggest that the employed methodological approach should be further promoted in forest research for investigating tree responses to changing environmental conditions.
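The core of ECA is counting near-simultaneous event pairs. Below is a minimal sketch of the precursor coincidence rate for daily boolean event series; significance testing against surrogate series, as well as the conditional and joint extensions, are omitted:

```python
import numpy as np

def precursor_coincidence_rate(weather_events, srv_events, delta=1):
    """Fraction of extreme stem-radius (SRV) events preceded by an
    extreme weather event within a tolerance window of `delta` days.
    Both inputs are boolean daily series of equal length."""
    w = np.flatnonzero(weather_events)   # days with weather extremes
    s = np.flatnonzero(srv_events)       # days with SRV extremes
    if s.size == 0:
        return np.nan
    hits = sum(np.any((s_i - w >= 0) & (s_i - w <= delta)) for s_i in s)
    return hits / s.size
```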
Classification of clouds, cirrus, snow, shadows, and clear-sky areas is a crucial step in the pre-processing of optical remote sensing images and a valuable input for their atmospheric correction. The Multi-Spectral Imager on board the Sentinel-2 satellites of the Copernicus program offers optimized bands for this task and delivers unprecedented amounts of data in terms of spatial sampling, global coverage, spectral coverage, and repetition rate. Efficient algorithms are needed to process, or possibly reprocess, these large amounts of data. Techniques based on top-of-atmosphere reflectance spectra of single pixels, without exploitation of external data or spatial context, offer the largest potential for parallel data processing and highly optimized processing throughput. Such algorithms can be seen as a baseline for possible trade-offs in processing performance when the application of more sophisticated methods is discussed. We present several ready-to-use classification algorithms which are all based on a publicly available database of manually classified Sentinel-2A images. These algorithms use commonly applied and newly developed machine learning techniques which drastically reduce the time needed to update the algorithms when new images are added to the database. Several ready-to-use decision trees are presented which correctly label about 91 % of the spectra within a validation dataset. While decision trees are simple to implement and easy to understand, they offer only limited classification skill. This improves to 98 % when the presented algorithm based on the classical Bayesian method is applied. This method has only recently been used for this task and shows excellent performance concerning classification skill and processing performance. A comparison of the presented algorithms with other commonly used techniques such as random forests, stochastic gradient descent, or support vector machines is also given. Especially random forests and support vector machines show similar classification skill as the classical Bayesian method.
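Because each pixel is classified from its spectrum alone, inference parallelises trivially. The sketch below uses Gaussian naive Bayes from scikit-learn as an accessible stand-in; the study's "classical Bayesian method" estimates class-conditional densities from the hand-labelled Sentinel-2 database directly, which this does not reproduce:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_pixel_classifier(X, y):
    """Single-pixel spectrum classifier.
    X: top-of-atmosphere reflectance spectra, shape (n_pixels, n_bands)
    y: manual labels, e.g. {cloud, cirrus, snow, shadow, clear sky}"""
    return GaussianNB().fit(X, y)

# per-pixel inference needs no spatial context or external data:
# clf = train_pixel_classifier(X_train, y_train)
# labels = clf.predict(scene.reshape(-1, n_bands)).reshape(h, w)
```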