Publikationsfonds der Universität Potsdam
The Sea of Marmara, in northwestern Turkey, is a transition zone where the dextral North Anatolian Fault zone (NAFZ) propagates westward from the Anatolian Plate to the Aegean Sea Plate. The area is of interest in the context of the seismic hazard for Istanbul, a metropolitan area with about 15 million inhabitants. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the crustal heterogeneities is still missing. To assess if and how crustal heterogeneities are related to the NAFZ segmentation below the Sea of Marmara, we develop new crustal-scale 3-D density models that integrate geological and seismological data and are additionally constrained by 3-D gravity modeling. For the latter, we use two different gravity datasets: global satellite data and local marine gravity observations. Considering the two different datasets and the general non-uniqueness of potential field modeling, we suggest three possible “end-member” solutions that are all consistent with the observed gravity field and illustrate the spectrum of possible solutions. These models indicate that the observed gravitational anomalies originate from significant density heterogeneities within the crust. Two layers of sediments, one syn-kinematic and one pre-kinematic with respect to the formation of the Sea of Marmara, are underlain by a heterogeneous crystalline crust. A felsic upper crystalline crust (average density of 2720 kg m⁻³) and an intermediate to mafic lower crystalline crust (average density of 2890 kg m⁻³) appear to be cross-cut by two large, dome-shaped mafic high-density bodies (density of 2890 to 3150 kg m⁻³) of considerable thickness above a rather uniform lithospheric mantle (3300 kg m⁻³).
The spatial correlation between two major bends of the main Marmara fault and the location of the high-density bodies suggests that the distribution of lithological heterogeneities within the crust controls the rheological behavior along the NAFZ and, consequently, may influence fault segmentation and thus seismic hazard in the region.
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays from L-band radar to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promising results for measuring vertically integrated water vapour at high temporal resolution. Previous work also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine Continuous Satellite Monitoring Network (Red Argentina de Monitoreo Satelital Continuo, RAMSAC) from 1999 to 2013. CAPE is derived from version 2.0 of the ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-Interim reanalysis, and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution functions (CCDFs). Second, we separately derive the relations of rainfall to CAPE and to GNSS-IWV. Based on our distribution-fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean value of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables, in the presence of each other, to extreme rainfall during the austral summer season.
We find that rainfall can be characterised with higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile, based on a goodness-of-fit criterion for quantile regression. We observe different contributions of CAPE and GNSS-IWV to rainfall at each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6 h time steps. We observe an increase before the rainfall event and at the time of peak rainfall, both for GNSS-integrated water vapour and for CAPE. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (99th and 95th percentiles) compared to the 90th percentile at a 6 h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that both GNSS-IWV and CAPE had comparable magnitudes, and we argue that both climatic variables should be considered when investigating their effect on rainfall extremes.
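The reported exponential relation between rainfall and GNSS-IWV can be recovered with a simple log-linear least-squares fit. The sketch below uses entirely synthetic, hypothetical numbers (the parameters a_true and b_true are invented for illustration) and is not the authors' RAMSAC/TRMM pipeline:

```python
import numpy as np

# Synthetic example: rainfall assumed to follow an exponential relation
# to integrated water vapour (IWV), R = exp(a + b * IWV), as the abstract
# reports. All values here are hypothetical.
rng = np.random.default_rng(42)
iwv = rng.uniform(10, 60, 200)            # mm, a plausible IWV range
a_true, b_true = -2.0, 0.08               # invented parameters
rainfall = np.exp(a_true + b_true * iwv)  # noise-free for the sketch

# Log-linear least squares recovers the exponential parameters.
b_est, a_est = np.polyfit(iwv, np.log(rainfall), 1)
print(a_est, b_est)
```

Taking the logarithm turns the exponential model into a straight line, so an ordinary least-squares fit suffices; a quantile-regression variant (as used in the study) would replace the squared-error criterion with a pinball loss.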
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and the East Asian winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between the EASM and EAWM, as well as the mechanisms driving their variability during the last 10,000 years, by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional, proxy-independent signal for both monsoonal systems. The respective signals were subsequently analysed using a linear regression model. We find that the phase relationship between the EASM and EAWM is not constant in time and depends significantly on changes in orbital configuration. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, the El Niño–Southern Oscillation, and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climate archives towards moisture-source changes and/or seasonality.
High Mountain Asia (HMA) depends on both the amount and timing of snow and glacier meltwater. Previous model studies and coarse-resolution (0.25° × 0.25°, ∼25 km × 25 km) passive microwave assessments of trends in the volume and timing of snowfall, snowmelt, and glacier melt in HMA have identified key spatial and seasonal heterogeneities in the response of snow to changes in regional climate. Here we use recently developed, continuous, internally consistent, and high-resolution passive microwave data (3.125 km × 3.125 km, 1987–2016) from the special sensor microwave imager instrument family to refine and extend previous estimates of changes in the snow regime of HMA. We find an overall decline in snow volume across HMA; however, there exist spatially contiguous regions of increasing snow volume, particularly during the winter season in the Pamir, Karakoram, Hindu Kush, and Kunlun Shan. Detailed analysis of changes in snow-volume trends through time reveals a large step change from negative trends during the period 1987–1997 to much more positive trends across large regions of HMA during the periods 1997–2007 and 2007–2016. We also find that changes in high-percentile monthly snow-water volume exhibit steeper trends than changes in low-percentile snow-water volume, which suggests a reduction in the frequency of high snow-water volumes in much of HMA. Regions with positive snow-water storage trends generally correspond to regions of positive glacier mass balance.
Information about the impact of climate change on river discharge is vitally important for planning adaptation measures, as future changes can affect different water-related sectors. The main goal of this study was to investigate potential water-resource changes in Ukraine, focusing on three mesoscale river catchments (Teteriv, Upper Western Bug, and Samara) characteristic of different geographical zones. The catchment-scale watershed model Soil and Water Integrated Model (SWIM) was set up, calibrated, and validated for the three catchments under consideration. A set of seven coupled GCM-RCM (General Circulation Model-Regional Climate Model) climate scenarios corresponding to RCPs (Representative Concentration Pathways) 4.5 and 8.5 was used to drive the hydrological catchment model. The climate projections used in the study were considered as three combinations of low-, intermediate-, and high-end scenarios. Our results indicate shifts in the seasonal distribution of runoff in all three catchments. The spring high flow occurs earlier as a result of temperature increases and earlier snowmelt. A fairly robust trend is an increase in river discharge in the winter season, and most of the scenarios show a potential decrease in river discharge in spring.
Quantitative geomorphic research depends on accurate topographic data, often collected via remote sensing. Lidar and photogrammetric methods like structure from motion provide the highest-quality data for generating digital elevation models (DEMs). Unfortunately, these data are restricted to relatively small areas and may be expensive or time-consuming to collect. Global and near-global DEMs with 1 arcsec (∼30 m) ground sampling from spaceborne radar and optical sensors offer an alternative gridded, continuous surface at the cost of resolution and accuracy. Accuracy is typically defined with respect to external datasets, often, but not always, in the form of point or profile measurements from sources like differential Global Navigation Satellite System (GNSS) surveys, spaceborne lidar (e.g., ICESat), and other geodetic measurements. Vertical point or profile accuracy metrics can miss the pixel-to-pixel variability (sometimes called DEM noise) that is unrelated to the true topographic signal and instead reflects sensor-, orbital-, and/or processing-related artifacts. This is most concerning when selecting a DEM for geomorphic analysis, as this variability can affect derivatives of elevation (e.g., slope and curvature) and impact flow routing. We use (near) global DEMs at 1 arcsec resolution (SRTM, ASTER, ALOS, TanDEM-X, and the recently released Copernicus) and develop new internal accuracy metrics to assess inter-pixel variability without reference data. Our study area is in the arid, steep Central Andes and is nearly vegetation-free, creating ideal conditions for remote sensing of the bare-earth surface. We use a novel hillshade-filtering approach to detrend long-wavelength topographic signals and accentuate short-wavelength variability.
Fourier transformations of the spatial signal to the frequency domain allow us to quantify: 1) artifacts in the un-projected 1 arcsec DEMs at wavelengths greater than the Nyquist (twice the nominal resolution, so > 2 arcsec); and 2) the relative variance of adjacent pixels in DEMs resampled to 30 m resolution (UTM projected). We translate these results into their impact on hillslope and channel slope calculations, and we assess the quality of the five DEMs. We find that the Copernicus DEM, which is based on a carefully edited commercial version of the TanDEM-X, provides the highest-quality landscape representation and should become the preferred DEM for topographic analysis in areas without sufficient coverage of higher-quality local DEMs.
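The detrend-then-Fourier-transform idea can be illustrated in one dimension. The sketch below is an assumed, simplified setup (a boxcar smoother stands in for the hillshade filtering, and the DEM is synthetic), not the authors' exact workflow: a smooth terrain signal carries a short-wavelength artifact, and after detrending, the spectrum isolates the artifact's wavelength.

```python
import numpy as np

# Synthetic 1-D "DEM": long-wavelength terrain plus a short-wavelength
# striping artifact of the kind the internal accuracy metrics target.
n, dx = 512, 30.0                                 # 512 samples at 30 m
x = np.arange(n) * dx
topo = 100.0 * np.sin(2 * np.pi * x / 8000.0)     # long-wavelength terrain
artifact = 0.5 * np.sin(2 * np.pi * x / 120.0)    # 120 m striping artifact
dem = topo + artifact

# Crude detrend: subtract a heavily smoothed copy (a stand-in for the
# hillshade-filtering step described in the text).
kernel = np.ones(21) / 21.0
resid = dem - np.convolve(dem, kernel, mode="same")

# Windowed FFT of the residual; search only short wavelengths (< 1 km).
spec = np.abs(np.fft.rfft(resid * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=dx)
short = freqs > 1.0 / 1000.0
peak_wavelength = 1.0 / freqs[short][np.argmax(spec[short])]
print(peak_wavelength)
```

The recovered peak sits at the artifact's 120 m wavelength even though the artifact is invisible against the 100 m amplitude terrain in the raw elevations, which is the essential point of working in the frequency domain.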
Water infiltration in soil is affected not only by the inherent heterogeneities of soil, but even more by the interaction with plant roots and their water uptake. Neutron tomography is a unique non-invasive tool to visualize plant root systems together with the soil water distribution in situ and in 3D. So far, acquisition times in the range of hours have been the major limitation for imaging 3D water dynamics. By implementing an alternative acquisition procedure, we boosted the speed of acquisition, capturing an entire tomogram within 10 s. This allows, for the first time, tracking of a water front ascending in a rooted soil column upon infiltration of deuterated water, time-resolved and in 3D. Image quality and resolution could be sustained at a level that allows capturing the root system in high detail. A good signal-to-noise ratio and contrast were the key to visualizing dynamic changes in water content and localizing root uptake. We demonstrate the ability of ultra-fast tomography to quantitatively image rapid changes of water content in the rhizosphere and outline the value of such imaging data for 3D water-uptake modelling. The presented method paves the way for time-resolved studies of various 3D flow and transport phenomena in porous systems.
The Arctic-Boreal regions experience strong changes in air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Although optical remote sensing offers a variety of techniques to assess and monitor landscape changes, persistent cloud cover considerably decreases the number of usable images. Combining data from multiple platforms, however, promises to increase the number of images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility of using Landsat and Sentinel-2 images together in time-series analyses to achieve temporally dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel pairs of corresponding bands, downscaled to 60 m, and derived the ordinary least squares regression for every band combination. The resulting coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons already showed an overall good fit between Landsat-8 and Sentinel-2 images. The ordinary least squares regression analyses underline the generally good spectral fit, with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after bandpass adjustment of Sentinel-2 values to Landsat-8 shows a nearly perfect alignment between the same-day images. The spectral band adjustment thus succeeds very well in adjusting Sentinel-2 spectral values to Landsat-8 in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be combined into temporally dense time series and applied to assess permafrost landscape changes in Eastern Siberia.
Remaining differences between the sensors can be attributed to several factors including heterogeneous terrain, poor cloud and cloud shadow masking, and mixed pixels.
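The per-band adjustment step reduces to an ordinary least squares regression between same-day pixel pairs. The following sketch uses invented reflectance values and invented slope/intercept parameters, not the published coefficients, to show the mechanics:

```python
import numpy as np

# Synthetic same-day pixel pairs for one band (all numbers hypothetical).
rng = np.random.default_rng(0)
s2 = rng.uniform(0.05, 0.4, 1000)                    # Sentinel-2 reflectance
l8 = 0.03 + 0.85 * s2 + rng.normal(0, 0.005, 1000)   # Landsat-8 reflectance

# Ordinary least squares: regress Landsat-8 on Sentinel-2, then apply the
# coefficients as a bandpass adjustment to the Sentinel-2 values.
slope, intercept = np.polyfit(s2, l8, 1)
s2_adjusted = intercept + slope * s2

rmse_before = np.sqrt(np.mean((l8 - s2) ** 2))
rmse_after = np.sqrt(np.mean((l8 - s2_adjusted) ** 2))
print(slope, intercept, rmse_before, rmse_after)
```

After adjustment, the residual mismatch between the sensors drops to the noise level, mirroring the "nearly perfect alignment" the abstract describes; in practice one such regression is fitted per band combination.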
Determining the optimal grid resolution for topographic analysis on an airborne lidar dataset
(2019)
Digital elevation models (DEMs) are a gridded representation of the surface of the Earth and typically contain uncertainties due to data collection and processing. Slope and aspect estimates on a DEM contain errors and uncertainties inherited from the representation of a continuous surface as a grid (referred to as truncation error; TE) and from any DEM uncertainty. We analyze in detail the impacts of TE and propagated elevation uncertainty (PEU) on slope and aspect.
Using synthetic data as a control, we define functions to quantify both TE and PEU for arbitrary grids. We then develop a quality metric which captures the combined impact of both TE and PEU on the calculation of topographic metrics. Our quality metric allows us to examine the spatial patterns of error and uncertainty in topographic metrics and to compare calculations on DEMs of different sizes and accuracies.
Using lidar data with a point density of ∼10 pts m⁻² covering Santa Cruz Island (SCI) in southern California, we are able to generate DEMs and uncertainty estimates at several grid resolutions. Slope (aspect) errors on the 1 m dataset are on average 0.3° (0.9°) from TE and 5.5° (14.5°) from PEU. We calculate an optimal DEM resolution of 4 m for our SCI lidar dataset that minimizes the error bounds on topographic metric calculations due to the combined influence of TE and PEU for both slope and aspect calculations over the entire island. Average slope (aspect) errors from the 4 m DEM are 0.25° (0.75°) from TE and 5° (12.5°) from PEU. While the smallest grid resolution possible from the high-density SCI lidar is not necessarily optimal for calculating topographic metrics, high point-density data are essential for measuring DEM uncertainty across a range of resolutions.
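Truncation error of the kind quantified above can be demonstrated on a synthetic surface whose slope is known analytically. This is a minimal sketch under assumed conditions (a smooth sinusoidal test surface, centred finite differences via np.gradient), not the authors' quality metric:

```python
import numpy as np

def slope_error(dx):
    """Mean absolute slope error (degrees) of a centred-difference slope
    estimate on a smooth synthetic surface, relative to the exact slope."""
    x = np.arange(0, 500 + dx, dx)
    X, Y = np.meshgrid(x, x)
    Z = 50.0 * np.sin(X / 100.0) * np.cos(Y / 100.0)   # test surface
    dzdy, dzdx = np.gradient(Z, dx)                    # centred differences
    slope_num = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Analytic partial derivatives of Z give the true slope.
    dzdx_t = 0.5 * np.cos(X / 100.0) * np.cos(Y / 100.0)
    dzdy_t = -0.5 * np.sin(X / 100.0) * np.sin(Y / 100.0)
    slope_true = np.degrees(np.arctan(np.hypot(dzdx_t, dzdy_t)))
    inner = (slice(1, -1), slice(1, -1))               # drop one-sided edges
    return np.mean(np.abs(slope_num[inner] - slope_true[inner]))

err_coarse = slope_error(30.0)   # 30 m grid
err_fine = slope_error(5.0)      # 5 m grid
print(err_coarse, err_fine)
```

The truncation error shrinks roughly quadratically with grid spacing for a centred difference, which is why TE alone favours fine grids; it is the propagated elevation uncertainty, growing as the grid refines toward the point spacing, that pushes the optimum back to a coarser resolution such as the 4 m found for SCI.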
The Arctic is greatly affected by climate change. Increasing air temperatures drive permafrost thaw and an increase in coastal erosion and river discharge. This results in a greater input of sediment and organic matter into nearshore waters, impacting ecosystems by reducing light transmission through the water column and altering biogeochemistry. This can in turn affect the subsistence economy of local people as well as the climate, through the transformation of suspended organic matter into greenhouse gases. Even though the impacts of increased suspended sediment concentrations and turbidity in the Arctic nearshore zone are well studied, the mechanisms underpinning this increase are largely unknown. Wave energy and tides drive the level of turbidity in the temperate and tropical parts of the world, and this is generally assumed to also be the case in the Arctic. However, the tidal range is considerably lower in the Arctic, and processes related to the occurrence of permafrost have the potential to contribute greatly to nearshore turbidity. In this study, we use high-resolution satellite imagery alongside in situ and ERA5 reanalysis data of ocean and climate variables in order to identify the drivers of nearshore turbidity, along with its seasonality, in the nearshore waters of Herschel Island Qikiqtaruk in the western Canadian Arctic. Nearshore turbidity correlates well with wind direction, wind speed, significant wave height, and wave period. Nearshore turbidity is more strongly correlated with wind speed measured on the Beaufort Shelf than with in situ measurements at Herschel Island Qikiqtaruk, showing that nearshore turbidity, albeit of limited spatial extent, is influenced by large-scale weather and ocean phenomena.
We show that, in contrast to the temperate and tropical ocean, freshly eroded material, rather than resuspension, is the predominant driver of nearshore turbidity in the Arctic, a consequence of the vulnerability of permafrost coasts to thermo-erosion.
High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverses for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases at higher temperatures, the rainfall intensities dictated by CC scaling are less likely to be recorded than at moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting-position formulas. These have in common that their largest representable return period is given by the sample size; in small samples, high quantiles are accordingly underestimated. This small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
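The GPD/L-moment approach can be sketched in a few lines. The snippet below uses invented shape and scale parameters (not the station data) and the standard Hosking L-moment estimators for the two-parameter GPD; it is an illustration of the method, not the study's code:

```python
import numpy as np
from scipy import stats

# Simulate exceedances from a GPD with assumed, illustrative parameters.
rng = np.random.default_rng(1)
xi_true, sigma_true = 0.1, 10.0          # shape and scale (hypothetical)
x = stats.genpareto.rvs(xi_true, scale=sigma_true, size=10000,
                        random_state=rng)

# Sample L-moments from probability-weighted moments (Hosking).
xs = np.sort(x)
n = xs.size
b0 = xs.mean()
b1 = np.sum(np.arange(n) / (n - 1) * xs) / n
l1, l2 = b0, 2 * b1 - b0

# L-moment estimators for the 2-parameter GPD (kappa = -xi convention):
# l1 = sigma/(1+kappa), l2 = sigma/((1+kappa)(2+kappa)).
kappa = l1 / l2 - 2.0
xi_est = -kappa
sigma_est = l1 * (1.0 + kappa)

# High quantile (0.99) from the fitted distribution.
q99 = stats.genpareto.ppf(0.99, xi_est, scale=sigma_est)
print(xi_est, sigma_est, q99)
```

Unlike a plotting-position estimate, this parametric quantile is not capped by the largest observation in a small sample, which is the point the abstract makes about the apparent CC-scaling breakdown.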
We explore the potential of spaceborne radar (SR) observations from the Ku-band precipitation radars onboard the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) satellites as a reference to quantify the ground radar (GR) reflectivity bias. To this end, the 3-D volume-matching algorithm proposed by Schwaller and Morris (2011) is implemented and applied to 5 years (2012–2016) of observations. We further extend the procedure by a framework to take into account the data quality of each ground radar bin. Through these methods, we are able to assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching GR–SR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction as the basis of a quality index. As a result, we can increase the consistency of SR and GR observations, and thus the precision of calibration bias estimates. The remaining scatter between GR and SR reflectivity as well as the variability of bias estimates between overpass events indicate, however, that other error sources are not yet fully addressed. Still, our study provides a framework to introduce any other quality variables that are considered relevant in a specific context. The code that implements our analysis is based on the wradlib open-source software library, and is, together with the data, publicly available to monitor radar calibration or to scrutinize long series of archived radar data back to December 1997, when TRMM became operational.
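The quality-weighted averaging step is conceptually simple: each matched SR–GR volume contributes its reflectivity difference weighted by a quality index. The toy numbers below are invented, and the beam-blockage-based weight is a stand-in following the idea in the text, not the wradlib implementation:

```python
import numpy as np

# Synthetic matched volumes (all values hypothetical, in dBZ).
gr_dbz = np.array([28.0, 31.5, 25.0, 30.2, 27.8])     # ground radar
sr_dbz = np.array([29.0, 32.0, 29.5, 31.0, 28.5])     # spaceborne radar
beam_blockage = np.array([0.0, 0.1, 0.8, 0.05, 0.0])  # fraction blocked

quality = 1.0 - beam_blockage      # simple quality index in [0, 1]
diff = gr_dbz - sr_dbz             # per-volume reflectivity difference

bias_plain = diff.mean()                           # unweighted estimate
bias_weighted = np.average(diff, weights=quality)  # quality-weighted
print(bias_plain, bias_weighted)
```

The heavily blocked third volume drags the plain mean toward a spuriously large negative bias; downweighting it by quality yields a calibration bias estimate closer to the well-observed volumes, which is exactly the gain in precision the framework targets.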
Sea level rise and coastal erosion have inundated large areas of Arctic permafrost. Submergence by warm and saline waters increases the rate of inundated permafrost thaw compared to sub-aerial thawing on land. Studying the contact between the unfrozen and frozen sediments below the seabed, also known as the ice-bearing permafrost table (IBPT), provides valuable information to understand the evolution of sub-aquatic permafrost, which is key to improving and understanding coastal erosion prediction models and potential greenhouse gas emissions. In this study, we use data from 2D electrical resistivity tomography (ERT) collected in the nearshore coastal zone of two Arctic regions that differ in their environmental conditions (e.g., seawater depth and resistivity) to image and study the subsea permafrost. The inversion of 2D ERT data sets is commonly performed using deterministic approaches that favor smoothed solutions, which are typically interpreted using a user-specified resistivity threshold to identify the IBPT position. In contrast, to target the IBPT position directly during inversion, we use a layer-based model parameterization and a global optimization approach to invert our ERT data. This approach results in ensembles of layered 2D model solutions, which we use to identify the IBPT and estimate the resistivity of the unfrozen and frozen sediments, including estimates of uncertainties. Additionally, we globally invert 1D synthetic resistivity data and perform sensitivity analyses to study, in a simpler way, the correlations and influences of our model parameters. The set of methods provided in this study may help to further exploit ERT data collected in such permafrost environments as well as for the design of future field experiments.
The Permo-Triassic period marks the time interval between Hercynian (Variscan) orogenic events in the Tien Shan and the North Pamir, and the Cimmerian accretion of the Gondwana-derived Central and South Pamir to the southern margin of the Paleo-Asian continent. A well-preserved Permo-Triassic volcano-sedimentary sequence from the Chinese North Pamir yields important information on the geodynamic evolution of Asia’s pre-Cimmerian southern margin. The oldest volcanic rocks from that section are dated to the late Guadalupian epoch by a rhyolite and a dacitic dike that gave zircon U-Pb ages of ∼260 Ma. Permian volcanism was largely pyroclastic and mafic to intermediate. Upsection, a massive ignimbritic crystal tuff in the Chinese Qimgan valley was dated to 244.1 ± 1.1 Ma, a similar unit in the nearby Gez valley to 245 ± 11 Ma, and an associated rhyolite to 233.4 ± 1.1 Ma. Deposition of the locally ∼200 m thick crystal tuff unit follows an unconformity and marks the onset of intense, mainly mafic to intermediate, calc-alkaline magmatic activity. Triassic volcanic activity in the North Pamir was coeval with the major phase of Cimmerian intrusive activity in the Karakul-Mazar arc-accretionary complex to the south, caused by northward subduction of the Paleo-Tethys. It also coincided with the emplacement of basanitic and carbonatitic dikes and a thermal event in the South Tien Shan, to the north of our study area. Evidence for arc-related magmatic activity in a back-arc position provides strong arguments for back-arc extension or transtension and basin formation. This puts the Qimgan succession in line with a more than 1000 km long realm of extensional Triassic back-arc basins known from the North Pamir in the Kyrgyz Altyn Darya valley (Myntekin formation), the North Pamir of Tajikistan and Afghanistan, and the Afghan Hindukush (Doab formation) and further west from the Paropamisus and Kopet Dag (Aghdarband, NE Iran).
Social media are an integral part of pupils' everyday lives and are, at the same time, increasingly important in business, politics, and science. Using Twitter as an example, this contribution shows that social media can also be used in the classroom to answer geographical questions. Twitter data are particularly well suited for this because of their georeferencing and other interesting content. The contribution gives an overview of the use of Twitter for social-science and human-geography research questions and reflects on the use of Twitter in the classroom. For teaching practice, examples on the topics of lignite mining, flood events, and spatial perceptions are presented, along with guidance on the evaluation, application, and critical reflection of Twitter analyses.
In 2009, a group of prominent Earth scientists introduced the "planetary boundaries" (PB) framework: they suggested nine global control variables, and defined corresponding "thresholds which, if crossed, could generate unacceptable environmental change". The concept builds on systems theory, and views Earth as a complex adaptive system in which anthropogenic disturbances may trigger non-linear, abrupt, and irreversible changes at the global scale, and "push the Earth system outside the stable environmental state of the Holocene". While the idea has been remarkably successful in both science and policy circles, it has also raised fundamental concerns, as the majority of suggested processes and their corresponding planetary boundaries do not operate at the global scale, and thus apparently lack the potential to trigger abrupt planetary changes.
This paper picks up the debate with specific regard to the planetary boundary on "global freshwater use". While the bio-physical impacts of excessive water consumption are typically confined to the river-basin scale, the PB proponents argue that water-induced environmental disasters could build up to planetary-scale feedbacks and system failures. So far, however, no evidence has been presented to corroborate that hypothesis. Furthermore, no coherent approach has been presented for determining to what extent a planetary threshold value could reflect the risk of regional environmental disasters. To be sure, the PB framework was revised in 2015, extending the planetary freshwater boundary with a set of basin-level boundaries inferred from environmental water-flow assumptions. Yet no new evidence was presented, either with respect to the ability of those basin-level boundaries to reflect the risk of regional regime shifts or with respect to a potential mechanism linking river basins to the planetary scale.
So while the idea of a planetary boundary on freshwater use appears intriguing, the line of argument presented so far remains speculative. As long as Earth system science does not present compelling evidence, the exercise of assigning actual numbers to such a boundary is arbitrary, premature, and misleading. Taken as a basis for water-related policy and management decisions, though, the idea transforms from misleading to dangerous, as it implies that we can globally offset water-related environmental impacts. A planetary boundary on freshwater use should thus be rejected and actively refuted by the hydrological and water resources community.
River-valley morphology preserves information on tectonic and climatic conditions that shape landscapes. Observations suggest that river discharge and valley-wall lithology are the main controls on valley width. Yet, current models based on these observations fail to explain the full range of cross-sectional valley shapes in nature, suggesting hitherto unquantified controls on valley width. In particular, current models cannot explain the existence of paired terrace sequences that form under cyclic climate forcing. Paired river terraces are staircases of abandoned floodplains on both valley sides, and hence preserve past valley widths. Their formation requires alternating phases of predominantly river incision and predominantly lateral planation, plus progressive valley narrowing. While cyclic Quaternary climate changes can explain shifts between incision and lateral erosion, the driving mechanism of valley narrowing is unknown. Here, we extract valley geometries from climatically formed, alluvial river-terrace sequences and show that across our dataset, the total cumulative terrace height (here: total valley height) explains 90%–99% of the variance in valley width at the terrace sites. This finding suggests that valley height, or a parameter that scales linearly with valley height, controls valley width in addition to river discharge and lithology. To explain this valley-width-height relationship, we reformulate existing valley-width models and suggest that, when adjusting to new boundary conditions, alluvial valleys evolve to a width at which sediment removal from valley walls matches lateral sediment supply from hillslope erosion. Such a hillslope-channel coupling is not captured in current valley-evolution models. Our model can explain the existence of paired terrace sequences under cyclic climate forcing and relates valley width to measurable field parameters. 
Therefore, it facilitates the reconstruction of past climatic and tectonic conditions from valley topography.
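The central empirical claim, that total valley height explains 90%-99% of the variance in valley width, is an ordinary coefficient of determination from a linear regression. The sketch below uses entirely synthetic heights and widths (not the terrace dataset) to show the computation:

```python
import numpy as np

# Hypothetical terrace-site data: valley width assumed roughly linear in
# total valley height, with small scatter (all numbers invented).
height = np.array([12.0, 18.0, 25.0, 31.0, 40.0, 55.0])   # m
width = 40.0 + 9.5 * height + np.array([5.0, -8.0, 4.0, -3.0, 6.0, -4.0])

# Linear fit and coefficient of determination R^2.
slope, intercept = np.polyfit(height, width, 1)
pred = intercept + slope * height
ss_res = np.sum((width - pred) ** 2)
ss_tot = np.sum((width - width.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(r2)
```

An R² in the high nineties, as here, means the linear height term absorbs nearly all of the width variation, which is what motivates adding valley height (or a quantity scaling with it) to discharge and lithology as a control in the reformulated valley-width model.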
Introducing PebbleCounts
(2019)
Grain-size distributions are a key geomorphic metric of gravel-bed rivers. Traditional measurement methods include manual counting or photo sieving, but these are achievable only at the 1–10 ㎡ scale. With the advent of drones and increasingly high-resolution cameras, we can now generate orthoimagery over hectares at millimeter to centimeter resolution. These scales, along with the complexity of high-mountain rivers, necessitate different approaches for photo sieving. As opposed to other image segmentation methods that use a watershed approach, our open-source algorithm, PebbleCounts, relies on k-means clustering in the spatial and spectral domain and rapid manual selection of well-delineated grains. This improves grain-size estimates for complex riverbed imagery, without post-processing. We also develop a fully automated method, PebbleCountsAuto, that relies on edge detection and filtering suspect grains, without the k-means clustering or manual selection steps. The algorithms are tested in controlled indoor conditions on three arrays of pebbles and then applied to 12 × 1 ㎡ orthomosaic clips of high-energy mountain rivers collected with a camera-on-mast setup (akin to a low-flying drone). A 20-pixel b-axis length lower truncation is necessary for attaining accurate grain-size distributions. For the k-means PebbleCounts approach, average percentile bias and precision are 0.03 and 0.09 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, and 0.07 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. The automatic approach has higher bias and precision of 0.13 and 0.15 ψ, respectively, for ∼1.16 mm pixel⁻¹ images, but similar values of −0.06 and 0.05 ψ for one 0.32 mm pixel⁻¹ image. For the automatic approach, only at best 70 % of the grains are correct identifications, and typically around 50 %. PebbleCounts operates most effectively at the 1 ㎡ patch scale, where it can be applied in ∼5–10 min on many patches to acquire accurate grain-size data over 10–100 ㎡ areas. 
These data can be used to validate PebbleCountsAuto, which may be applied at the scale of entire survey sites (10²–10⁴ m²). We synthesize results and recommend best practices for image collection, orthomosaic generation, and grain-size measurement using both algorithms.
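The core segmentation idea — clustering pixels jointly by position and tone so that each compact, spectrally uniform region becomes a candidate grain — can be sketched with a minimal k-means implementation. This is an illustration of the principle only, not the actual PebbleCounts code; the feature scaling, the number of clusters, and the toy image are assumptions.

```python
import numpy as np

def kmeans(features, k, n_iter=20, seed=0):
    """Minimal k-means: returns an integer cluster label per feature row."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update centers (keep the old center if a cluster empties)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def segment_patch(image, k=4):
    """Cluster pixels jointly in the spatial and spectral domain,
    conceptually like PebbleCounts: features = (row, col, intensity)."""
    rows, cols = np.indices(image.shape)
    feats = np.column_stack([
        rows.ravel() / image.shape[0],   # normalised spatial coordinates
        cols.ravel() / image.shape[1],
        image.ravel() / image.max(),     # normalised spectral value
    ])
    return kmeans(feats, k).reshape(image.shape)

# toy grayscale patch: two bright "grains" on a dark bed
img = np.zeros((20, 20))
img[3:8, 3:8] = 1.0
img[12:18, 12:18] = 1.0
labels = segment_patch(img, k=3)
```

In the interactive tool, well-delineated clusters would then be confirmed or rejected manually before measuring grain axes.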
The characteristics of a landscape are essential factors for hydrological processes. An adequate representation of a catchment's landscape in hydrological models is therefore vital. However, many such models exist, differing, among other aspects, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model-specific, commercial, or dependent on commercial back-end software, and allow only limited workflow automation, or none at all.
Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation aimed at large-scale application via a hierarchical multi-scale approach. The package addresses the existing limitations, as it is free and open source, easily extensible to other hydrological models, and its workflow can be fully automated. Moreover, it is user-friendly, as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. At the same time, fully automatic operation allows for extensive analysis of aspects related to landscape discretisation.
In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the choice of discretisation parameters. However, the parameters determining the sizes of subbasins and hillslopes were more influential than the others, which include the number of representative hillslopes, the number of attributes used by the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.
Mapping Damage-Affected Areas after Natural Hazard Events Using Sentinel-1 Coherence Time Series
(2018)
The emergence of the Sentinel-1A and 1B satellites now offers freely available and widely accessible Synthetic Aperture Radar (SAR) data. Near-global coverage and rapid repeat time (6–12 days) gives Sentinel-1 data the potential to be widely used for monitoring the Earth’s surface. Subtle land-cover and land surface changes can affect the phase and amplitude of the C-band SAR signal, and thus the coherence between two images collected before and after such changes. Analysis of SAR coherence therefore serves as a rapidly deployable and powerful tool to track both seasonal changes and rapid surface disturbances following natural disasters. An advantage of using Sentinel-1 C-band radar data is the ability to easily construct time series of coherence for a region of interest at low cost. In this paper, we propose a new method for Potentially Affected Area (PAA) detection following a natural hazard event. Based on the coherence time series, the proposed method (1) determines the natural variability of coherence within each pixel in the region of interest, accounting for factors such as seasonality and the inherent noise of variable surfaces; and (2) compares pixel-by-pixel syn-event coherence to temporal coherence distributions to determine where statistically significant coherence loss has occurred. The user can determine to what degree the syn-event coherence value (e.g., 1st, 5th percentile of pre-event distribution) constitutes a PAA, and integrate pertinent regional data, such as population density, to rank and prioritise PAAs. We apply the method to two case studies, Sarpol-e, Iran following the 2017 Iran-Iraq earthquake, and a landslide-prone region of NW Argentina, to demonstrate how rapid identification and interpretation of potentially affected areas can be performed shortly following a natural hazard event.
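The two-step detection described above can be sketched in a few lines: a per-pixel threshold is taken from the pre-event coherence time series, and the syn-event coherence is compared against it. The percentile choice, function name, and toy data below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def detect_paa(pre_event, syn_event, percentile=5.0):
    """Flag pixels whose syn-event coherence falls below a chosen
    percentile of that pixel's own pre-event coherence distribution.

    pre_event : (n_times, ny, nx) stack of pre-event coherence images
    syn_event : (ny, nx) coherence image spanning the event
    """
    # per-pixel threshold capturing the natural coherence variability
    threshold = np.percentile(pre_event, percentile, axis=0)
    return syn_event < threshold

# toy example: three stable pixels vs. one with event-related coherence loss
rng = np.random.default_rng(1)
pre = 0.7 + 0.05 * rng.standard_normal((30, 2, 2))   # 30 pre-event pairs
syn = np.array([[0.68, 0.70],
                [0.20, 0.69]])                       # lower-left pixel lost coherence
paa = detect_paa(pre, syn, percentile=5.0)
```

Flagged pixels could then be ranked by severity and combined with regional data such as population density, as the abstract suggests.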
In the Arctic and high mountains it is common to measure vertical changes of ice sheets and glaciers via digital elevation model (DEM) differencing. This requires the signal of change to outweigh the noise associated with the datasets. Excluding large landslides, on the ice-free Earth the land-level change is smaller in vertical magnitude and thus requires more accurate DEMs for differencing and identification of change. Previously, this has required meter- to submeter-scale data at small spatial scales. Following careful corrections, we are able to measure land-level changes in gravel-bed channels and steep hillslopes in the south-central Andes using the SRTM-C (collected in 2000) and the TanDEM-X (collected from 2010 to 2015) near-global 12–30 m DEMs. Long-standing errors in the SRTM-C are corrected using the TanDEM-X as a control surface and applying cosine-fit co-registration to remove ∼1/10 pixel (∼3 m) shifts, fast Fourier transform (FFT) analysis and filtering to remove SRTM-C short- and long-wavelength stripes, and blocked shifting to remove remaining complex biases. The datasets are then differenced and outlier pixels are identified as a potential signal for the case of gravel-bed channels and hillslopes. We are able to identify signals of incision and aggradation (with magnitudes down to ∼3 m in the best case) in two >100 km river reaches, with increased geomorphic activity downstream of knickpoints. Anthropogenic gravel excavation and piling is prominently measured, with magnitudes exceeding ±5 m (up to >10 m for large piles). These values correspond to conservative average rates of 0.2 to >0.5 m yr⁻¹ for vertical changes in gravel-bed rivers. For hillslopes, since we require stricter cutoffs for noise, we are only able to identify one major landslide in the study area, with a deposit volume of 16 ± 0.15 × 10⁶ m³. Additional signals of change can be garnered from TanDEM-X auxiliary layers; however, these are more difficult to quantify.
The methods presented can be extended to any region of the world with SRTM-C and TanDEM-X coverage where vertical land-level changes are of interest, with the caveat that remaining vertical uncertainties in primarily the SRTM-C limit detection in steep and complex topography.
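Once the DEMs are co-registered and destriped, the final differencing step reduces to masking the difference map against a level of detection derived from the combined vertical uncertainties. The sketch below illustrates that step only; the uncertainty values, confidence level, and function name are placeholders, not the study's calibrated numbers.

```python
import numpy as np

def dod_signal(dem_old, dem_new, sigma_old, sigma_new, k=1.645):
    """DEM of difference with a simple level-of-detection (LoD) mask.

    Propagate the independent vertical uncertainties of the two DEMs
    and keep only elevation changes exceeding k * combined sigma
    (k = 1.645 ~ 90 % confidence), as is common in DEM differencing.
    """
    dh = dem_new - dem_old
    lod = k * np.hypot(sigma_old, sigma_new)   # combined 1-sigma error
    return np.where(np.abs(dh) > lod, dh, np.nan), lod

# toy 2x2 example: one aggradation and one incision signal, two noise pixels
old = np.array([[100.0, 100.0], [100.0, 100.0]])
new = np.array([[100.5, 104.0], [ 95.0, 100.2]])
signal, lod = dod_signal(old, new, sigma_old=1.5, sigma_new=1.0)
```

Pixels below the LoD are set to NaN, so only changes that outweigh the propagated noise survive as candidate incision or aggradation signals.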
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) with its land and vegetation height data product (ATL08), and Global Ecosystem Dynamics Investigation (GEDI) with its terrain elevation and height metrics data product (GEDI Level 2A) missions have great potential to globally map ground and canopy heights. Canopy height is a key factor in estimating above-ground biomass and its seasonal changes; these satellite missions can also improve estimated above-ground carbon stocks. This study presents a novel Sparse Vegetation Detection Algorithm (SVDA) which uses ICESat-2 (ATL03, geolocated photons) data to map tree and vegetation heights in a sparsely vegetated savanna ecosystem. The SVDA consists of three main steps: First, noise photons are filtered using the signal confidence flag from ATL03 data and local point statistics. Second, we classify ground photons based on photon height percentiles. Third, tree and grass photons are classified based on the number of neighbors. We validated tree heights with field measurements (n = 55), finding a root-mean-square error (RMSE) of 1.82 m using SVDA, GEDI Level 2A (Geolocated Elevation and Height Metrics product): 1.33 m, and ATL08: 5.59 m. Our results indicate that the SVDA is effective in identifying canopy photons in savanna ecosystems, where ATL08 performs poorly. We further identify seasonal vegetation height changes with an emphasis on vegetation below 3 m; widespread height changes in this class from two wet-dry cycles show maximum seasonal changes of 1 m, possibly related to seasonal grass-height differences. Our study shows the difficulties of vegetation measurements in savanna ecosystems but provides the first estimates of seasonal biomass changes.
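The three SVDA steps — noise filtering by confidence, ground classification by height percentile, and vegetation classification by neighbour counts — can be illustrated in miniature. The confidence cutoff, percentile, bin size, and neighbour radius below are assumptions of this sketch, not the published parameter values.

```python
import numpy as np

def classify_photons(along_track, height, confidence, bin_size=10.0,
                     ground_pct=20, veg_min_neighbors=3, radius=2.0):
    """Illustrative three-step classification in the spirit of SVDA.

    1. keep only high-confidence (signal) photons,
    2. take the lowest height percentile per along-track bin as ground,
    3. label remaining photons with enough nearby photons as canopy;
       sparse leftovers stay in the low-vegetation/grass class.
    """
    signal = confidence >= 3                         # step 1: noise filter
    x, z = along_track[signal], height[signal]

    labels = np.full(x.size, "grass", dtype=object)
    bins = np.floor(x / bin_size)
    for b in np.unique(bins):
        m = bins == b
        ground_level = np.percentile(z[m], ground_pct)   # step 2
        labels[m & (z <= ground_level)] = "ground"

    for i in np.where(labels != "ground")[0]:            # step 3
        neighbors = np.sum((np.abs(x - x[i]) < radius) &
                           (np.abs(z - z[i]) < radius)) - 1
        if neighbors >= veg_min_neighbors:
            labels[i] = "canopy"
    return x, z, labels

# toy transect: ten ground photons at z ~ 0 plus a five-photon "tree" at z ~ 5
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4.0, 4.5, 5.0, 4.2, 4.8], dtype=float)
z = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5.0, 5.2, 5.1, 4.9, 5.3])
conf = np.full(x.size, 4)
xs, zs, labels = classify_photons(x, z, conf)
```

Canopy height per photon would then follow as the height above the interpolated ground surface.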
The intensification of Northern Hemisphere glaciations at the end of the Pliocene epoch marks one of the most substantial climatic shifts of the Cenozoic. Despite global cooling, sea surface temperatures in the high latitude North Atlantic Ocean rose between 2.9–2.7 million years ago. Here we present sedimentary geochemical proxy data from the Gulf of Cadiz to reconstruct the variability of Mediterranean Outflow Water, an important heat source to the North Atlantic. We find evidence for enhanced production of Mediterranean Outflow from the mid-Pliocene to the late Pliocene which we infer could have driven a sub-surface heat channel into the high-latitude North Atlantic. We then use Earth System Models to constrain the impact of enhanced Mediterranean Outflow production on the northward heat transport in the North Atlantic. In accord with the proxy data, the numerical model results support the formation of a sub-surface channel that pumped heat from the subtropics into the high latitude North Atlantic. We further suggest that this mechanism could have delayed ice sheet growth at the end of the Pliocene.
Eight d-metal-containing N-butylpyridinium ionic liquids (ILs) with the nominal composition (C4Py)2[Ni0.5M0.5Cl4] or (C4Py)2[Zn0.5M0.5Cl4] (M = Cu, Co, Mn, Ni, Zn; C4Py = N-butylpyridinium) were synthesized, characterized, and investigated for their optical properties. Single crystal and powder X-ray analysis shows that the compounds are isostructural to existing examples based on other d-metal ions. Inductively coupled plasma optical emission spectroscopy measurements confirm that the metal/metal ratio is around 50 : 50. UV-Vis spectroscopy shows that the optical absorption can be tuned by selection of the constituent metals. Moreover, the compounds can act as an optical sensor for the detection of gases such as ammonia as demonstrated via a simple prototype setup.
Permafrost is warming in the northern high latitudes, inducing highly dynamic thaw-related permafrost disturbances across the terrestrial Arctic. Monitoring and tracking of permafrost disturbances is important as they impact surrounding landscapes, ecosystems and infrastructure. Remote sensing provides the means to detect, map, and quantify these changes homogeneously across large regions and time scales. Existing Landsat-based algorithms assess different types of disturbances with similar spatiotemporal requirements. However, Landsat-based analyses are restricted in northern high latitudes due to the long repeat interval and frequent clouds, in particular at Arctic coastal sites. We therefore propose to combine Landsat and Sentinel-2 data for enhanced data coverage and present a combined annual mosaic workflow, expanding currently available algorithms, such as LandTrendr, to achieve more reliable time series analysis. We test the workflow exemplarily for twelve sites across the northern high latitudes in Siberia. We assessed the number of images and cloud-free pixels, the spatial mosaic coverage, and the mosaic quality with spectral comparisons. The number of available images increased steadily from 1999 to 2019, but especially from 2016 onward with the addition of Sentinel-2 images. Consequently, we have an increased number of cloud-free pixels even under challenging environmental conditions, which then serve as the input to the mosaicking process. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas (99.9–100 %), while Landsat-only mosaics contained data gaps in the same years, reaching coverage of only 27.2 %, 58.1 %, and 69.7 % for Sobo Sise, East Taymyr, and Kurungnakh in 2017, respectively.
The spectral comparison of Landsat images, Sentinel-2 images, and Landsat+Sentinel-2 mosaics showed high correlation between the input images and mosaic bands (e.g., for Kurungnakh 0.91–0.97 between Landsat and the Landsat+Sentinel-2 mosaic and 0.92–0.98 between Sentinel-2 and the Landsat+Sentinel-2 mosaic) across all twelve study sites, attesting to the good quality of the mosaics. Our results show that coverage of northern coastal areas in particular was substantially improved by the Landsat+Sentinel-2 mosaics. By combining Landsat and Sentinel-2 data, we were able to create reliable high-spatial-resolution input mosaics for time series analyses. Our approach makes it possible, for the first time, to apply temporally continuous time series analysis to northern high-latitude permafrost regions, overcoming substantial data gaps, and to assess permafrost disturbance dynamics on an annual scale across large regions with algorithms such as LandTrendr by deriving the location, timing, and progression of permafrost thaw disturbances.
A new solid-state material, N-butylpyridinium diiodido argentate(I), is synthesized using a simple and effective one-pot approach. In the solid state, the compound exhibits 1D ([AgI₂]⁻)ₙ chains that are stabilized by the N-butylpyridinium cation. The 1D structure is further manifested in the formation of long, needle-like crystals, as revealed by electron microscopy. As the general composition is derived from metal halide-based ionic liquids, the compound has a low melting point of 100–101 °C, as confirmed by differential scanning calorimetry. Most importantly, the compound has a conductivity of 10⁻⁶ S cm⁻¹ at room temperature. At higher temperatures the conductivity increases, reaching 10⁻⁴ S cm⁻¹ at 70 °C. In contrast to AgI, however, the current material has a highly anisotropic 1D arrangement of the ionic domains. This provides direct and tuneable access to fast and anisotropic ionic conduction. The material is thus a significant step beyond current ion conductors and a highly promising prototype for the rational design of highly conductive ionic solid-state conductors for battery or solar cell applications.
Uplift in the broken Andean foreland of the Argentine Santa Bárbara System (SBS) is associated with the contractional reactivation of basement anisotropies, similar to those reported from the thick-skinned Cretaceous-Eocene Laramide province of North America. Fault scarps, deformed Quaternary deposits and landforms, disrupted drainage patterns, and medium-sized earthquakes within the SBS suggest that movement along these structures may be a recurring phenomenon, with yet to be defined repeat intervals and rupture lengths. In contrast to the Subandes thrust belt farther north, where eastward-migrating deformation has generated a well-defined thrust front, the SBS records spatiotemporally disparate deformation along structures that are only known to the first order. We present herein the results of geomorphic desktop analyses, structural field observations, and 2D electrical resistivity tomography and seismic-refraction tomography surveys and an interpretation of seismic reflection profiles across suspected fault scarps in the sedimentary basins adjacent to the Candelaria Range (CR) basement uplift, in the south-central part of the SBS. Our analysis in the CR piedmont areas reveals consistency between the results of near-surface electrical resistivity and seismic-refraction tomography surveys, the locations of prominent fault scarps, and structural geometries at greater depth imaged by seismic reflection data. We suggest that this deformation is driven by deep-seated blind thrusting beneath the CR and associated regional warping, while shortening involving Mesozoic and Cenozoic sedimentary strata in the adjacent basins was accommodated by layer-parallel folding and flexural-slip faults that cut through Quaternary landforms and deposits at the surface.
Sea surface temperature (SST) patterns can – as surface climate forcing – affect weather and climate at large distances. One example is the El Niño–Southern Oscillation (ENSO), which causes climate anomalies around the globe via teleconnections. Although several studies have identified and characterized these teleconnections, our understanding of climate processes remains incomplete, since interactions and feedbacks typically occur at unique or multiple temporal and spatial scales. This study characterizes the interactions between the cells of a global SST data set at different temporal and spatial scales using climate networks. These networks are constructed using wavelet multi-scale correlation, which investigates the correlation between the SST time series at a range of scales, allowing deeper insights into the correlation patterns than traditional methods like empirical orthogonal functions or classical correlation analysis. This allows us to identify and visualise regions of similarly evolving SSTs at a given timescale and distinguish them from those with long-range teleconnections to other ocean regions. Our findings re-confirm accepted knowledge about known highly linked SST patterns like ENSO and the Pacific Decadal Oscillation, but also offer new insights into the characteristics and origins of long-range teleconnections, such as the connection between ENSO and the Indian Ocean Dipole.
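The scale-dependent correlation idea can be illustrated with a Haar-style decomposition as a simple stand-in for the full wavelet multi-scale correlation (an intentional simplification; the study would use a proper wavelet transform). Two series that share a slow oscillation but carry independent fast noise correlate strongly only at coarse scales.

```python
import numpy as np

def haar_detail(x, scale):
    """Haar-style detail at a dyadic scale: differences of adjacent
    block means of length `scale` (a crude stand-in for a wavelet level)."""
    n = (len(x) // (2 * scale)) * 2 * scale
    means = x[:n].reshape(-1, scale).mean(axis=1)
    return means[1::2] - means[0::2]

def scale_correlation(x, y, scales=(1, 2, 4, 8)):
    """Correlation between two series, evaluated scale by scale."""
    return {s: float(np.corrcoef(haar_detail(x, s), haar_detail(y, s))[0, 1])
            for s in scales}

# two synthetic "SST" series: shared slow oscillation, independent fast noise
rng = np.random.default_rng(0)
t = np.arange(512)
slow = np.sin(2 * np.pi * t / 64)
x = slow + 0.5 * rng.standard_normal(512)
y = slow + 0.5 * rng.standard_normal(512)
corr = scale_correlation(x, y)
```

In a climate network, a link between two grid cells would be placed only where such a scale-wise correlation exceeds a threshold, so each scale yields its own network.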
Records from ocean bottom seismometers (OBSs) are heavily contaminated by noise, which is much stronger than that recorded at most land stations, especially on the horizontal components. As a consequence, the high energy of oceanic noise at frequencies below 1 Hz considerably complicates the analysis of teleseismic earthquake signals recorded by OBSs.
Previous studies suggested different approaches to remove low-frequency noises from OBS recordings but mainly focused on the vertical component. The records of horizontal components, which are crucial for the application of many methods in passive seismological analysis of body and surface waves, could not be much improved in the teleseismic frequency band. Here we introduce a noise reduction method, which is derived from the harmonic–percussive separation algorithms used in Zali et al. (2021), in order to separate long-lasting narrowband signals from broadband transients in the OBS signal. This leads to significant noise reduction of OBS records on both the vertical and horizontal components and increases the earthquake signal-to-noise ratio (SNR) without distortion of the broadband earthquake waveforms. This is demonstrated through tests with synthetic data. Both SNR and cross-correlation coefficients showed significant improvements for different realistic noise realizations. The application of denoised signals in surface wave analysis and receiver functions is discussed through tests with synthetic and real data.
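The separation of long-lasting narrowband signals from broadband transients follows the median-filtering idea behind harmonic–percussive separation: energy that is smooth along the time axis is "harmonic" (tonal noise), while energy smooth along the frequency axis is "percussive" (transient, earthquake-like). The sketch below applies this to a toy magnitude spectrogram; the window size, soft-mask formulation, and data are assumptions, and the published method is considerably more elaborate.

```python
import numpy as np

def median_filter_1d(a, size, axis):
    """Running median along one axis via a sliding-window view."""
    pad = size // 2
    padded = np.pad(a, [(pad, pad) if ax == axis else (0, 0)
                        for ax in range(a.ndim)], mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, size, axis=axis)
    return np.median(win, axis=-1)

def hp_masks(spec, size=5):
    """Soft masks separating time-smooth ('harmonic') from
    frequency-smooth ('percussive') spectrogram energy.

    spec : (n_freqs, n_times) magnitude spectrogram
    """
    harm = median_filter_1d(spec, size, axis=1)   # smooth along time
    perc = median_filter_1d(spec, size, axis=0)   # smooth along frequency
    total = harm + perc + 1e-12
    return harm / total, perc / total

# toy spectrogram: one persistent tonal row plus one broadband transient column
spec = np.zeros((32, 64))
spec[10, :] = 1.0     # narrowband, long-lasting (noise-like hum)
spec[:, 40] = 1.0     # broadband, short-lived (earthquake-like transient)
mask_h, mask_p = hp_masks(spec)
```

Multiplying the complex spectrogram by the percussive mask and inverting would retain the transient (earthquake) part while suppressing the tonal noise.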
Hydrometric networks play a vital role in providing information for decision-making in water resource management. They should be set up optimally to provide as much information as possible that is as accurate as possible and, at the same time, be cost-effective. Although the design of hydrometric networks is a well-identified problem in hydrometeorology and has received considerable attention, there is still scope for further advancement. In this study, we use complex network analysis, defined as a collection of nodes interconnected by links, to propose a new measure that identifies critical nodes of station networks. The approach can support the design and redesign of hydrometric station networks. The science of complex networks is a relatively young field and has gained significant momentum over the last few years in different areas such as brain networks, social networks, technological networks, or climate networks. The identification of influential nodes in complex networks is an important field of research. We propose a new node-ranking measure – the weighted degree–betweenness (WDB) measure – to evaluate the importance of nodes in a network. It is compared to previously proposed measures used on synthetic sample networks and then applied to a real-world rain gauge network comprising 1229 stations across Germany to demonstrate its applicability. The proposed measure is evaluated using the decline rate of the network efficiency and the kriging error. The results suggest that WDB effectively quantifies the importance of rain gauges, although the benefits of the method need to be investigated in more detail.
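One of the evaluation criteria named above, the decline rate of network efficiency under node removal, can be computed directly. The sketch below uses a tiny unweighted toy graph (the study's rain gauge networks are weighted and far larger), with global efficiency defined as the mean inverse shortest-path length over node pairs.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS hop distances from src in an unweighted graph (dict of sets)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def efficiency(adj):
    """Global efficiency: mean inverse shortest-path length over node pairs
    (unreachable pairs contribute zero)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        dist = shortest_paths(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

def efficiency_decline(adj, node):
    """Relative drop in global efficiency when one station is removed."""
    e0 = efficiency(adj)
    reduced = {u: vs - {node} for u, vs in adj.items() if u != node}
    return (e0 - efficiency(reduced)) / e0

# toy network: two clusters bridged by station "c"
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
       "d": {"c", "e"}, "e": {"d"}}
drop_bridge = efficiency_decline(adj, "c")
drop_leaf   = efficiency_decline(adj, "e")
```

Removing the bridging station fragments the network and causes a large efficiency drop, whereas removing a peripheral station can even raise the average — exactly the kind of contrast a node-importance measure such as WDB is meant to capture.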
Rotational motions play a key role in measuring seismic wavefield properties. Using newly developed portable rotational instruments, it is now possible to directly measure rotational motions in a broad frequency range. Here, we investigated the instrumental self-noise and data quality in a huddle test in Fürstenfeldbruck, Germany, in August 2019. We compare the data from six rotational and three translational sensors. We studied the recorded signals using correlation, coherence analysis, and probabilistic power spectral densities. We sorted the coherent noise into five groups with respect to the similarities in frequency content and shape of the signals. These coherent noises were most likely caused by electrical devices, the dehumidifier system in the building, humans, and natural sources such as wind. We calculated self-noise levels through probabilistic power spectral densities and by applying the Sleeman method, a three-sensor method. Our results from both methods indicate that self-noise levels are stable between 0.5 and 40 Hz. Furthermore, we recorded the 29 August 2019 ML 3.4 Dettingen earthquake. The calculated source directions are found to be realistic for all sensors in comparison to the real back azimuth. We conclude that the five tested blueSeis-3A rotational sensors, when compared with respect to coherent noise, self-noise, and source direction, provide reliable and consistent results. Hence, field experiments with single rotational sensors can be undertaken.
We measure valence-to-core x-ray emission spectra of compressed crystalline GeO₂ up to 56 GPa and of amorphous GeO₂ up to 100 GPa. In a novel approach, we extract the Ge coordination number and mean Ge-O distances from the emission energy and the intensity of the Kβ'' emission line. The spectra of high-pressure polymorphs are calculated using the Bethe-Salpeter equation. Trends observed in the experimental and calculated spectra are found to match only when utilizing an octahedral model. The results reveal persistent octahedral Ge coordination with increasing distortion, similar to the compaction mechanism in the sequence of octahedrally coordinated crystalline GeO₂ high-pressure polymorphs.
RainNet v1.0
(2020)
In this study, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. Its design was inspired by the U-Net and SegNet families of deep learning models, which were originally designed for binary segmentation tasks. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km × 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In order to achieve a lead time of 1 h, a recursive approach was implemented by using RainNet predictions at 5 min lead times as model inputs for longer lead times. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the rainymotion library and had previously been shown to outperform DWD's operational nowcasting model for the same set of verification events.
RainNet significantly outperforms the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and the critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm h⁻¹. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm h⁻¹). The limited ability of RainNet to predict heavy rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below. Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance in terms of a binary segmentation task. Furthermore, we suggest additional input data that could help to better identify situations with imminent precipitation dynamics. The model code, pretrained weights, and training data are provided in open repositories as an input for such future studies.
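The recursive scheme — feeding each 5 min prediction back in as the newest input frame until a 60 min lead time (12 steps) is reached — can be sketched generically. The stand-in `shift_model` (pure one-pixel advection) and the four-frame input window are assumptions of this sketch; RainNet itself is a deep CNN.

```python
import numpy as np

def nowcast_recursive(model, frames, lead_steps=12):
    """Extend a one-step (5 min) nowcast model to longer lead times by
    feeding each prediction back in as the newest input frame
    (12 x 5 min = 60 min, as in the RainNet verification)."""
    history = list(frames)                     # most recent observed frames
    out = []
    for _ in range(lead_steps):
        nxt = model(np.stack(history[-4:]))    # assume a four-frame input
        out.append(nxt)
        history.append(nxt)                    # recursion: prediction -> input
    return out

# stand-in one-step model: advect the latest field by one pixel (a toy, not RainNet)
def shift_model(stack):
    return np.roll(stack[-1], shift=1, axis=1)

field = np.zeros((8, 8))
field[4, 1] = 1.0                              # a single rain cell
preds = nowcast_recursive(shift_model, [field] * 4, lead_steps=12)
```

With a learned model in place of `shift_model`, each recursion step re-applies the model's smoothing, which is exactly the numerical-diffusion-like artifact the abstract describes.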
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
(2017)
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping are laborious and time-consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows applying this method across large areas. We trained the method by analysing the properties of snow avalanches at three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of >0.9 and a Cohen's kappa of 0.79–0.85. Testing the method for a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by >80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations into other regions are desirable to verify the robustness of the selected thresholds and the transferability of the method.
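The spectral inputs to the OBIA classification can be computed per pixel as follows. The band-index definitions follow common conventions, and the 3 × 3 window for the local NDWI standard deviation (SDNDWI) is an assumption of this sketch, not necessarily the study's configuration.

```python
import numpy as np

def avalanche_features(nir, red, green):
    """Per-pixel features used to separate avalanche debris from other
    surfaces: brightness, NDVI, NDWI, and the local NDWI standard
    deviation (SDNDWI)."""
    ndvi = (nir - red) / (nir + red + 1e-12)       # vegetation index
    ndwi = (green - nir) / (green + nir + 1e-12)   # water/snow index
    brightness = (nir + red + green) / 3.0
    # local standard deviation of NDWI in a 3x3 neighbourhood
    pad = np.pad(ndwi, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    sdndwi = win.reshape(*ndwi.shape, 9).std(axis=-1)
    return brightness, ndvi, ndwi, sdndwi

# toy 2x2 scene: three bright snow pixels and one vegetated pixel
nir   = np.array([[0.9, 0.6], [0.9, 0.9]])
red   = np.array([[0.9, 0.2], [0.9, 0.9]])
green = np.array([[0.9, 0.3], [0.9, 0.9]])
b, ndvi, ndwi, sd = avalanche_features(nir, red, green)
```

In the OBIA workflow these normalised layers would feed the segmentation and rule-based classification rather than being thresholded pixel by pixel.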
Many widely used observational data sets are comprised of several overlapping instrument records. While data inter-calibration techniques often yield continuous and reliable data for trend analysis, less attention is generally paid to maintaining higher-order statistics such as variance and autocorrelation. A growing body of work uses these metrics to quantify the stability or resilience of a system under study and potentially to anticipate an approaching critical transition in the system. Exploring the degree to which changes in resilience indicators such as the variance or autocorrelation can be attributed to non-stationary characteristics of the measurement process – rather than actual changes in the dynamical properties of the system – is important in this context. In this work we use both synthetic and empirical data to explore how changes in the noise structure of a data set are propagated into the commonly used resilience metrics lag-one autocorrelation and variance. We focus on examples from remotely sensed vegetation indicators such as vegetation optical depth and the normalized difference vegetation index from different satellite sources. We find that time series resulting from mixing signals from sensors with varied uncertainties and covering overlapping time spans can lead to biases in inferred resilience changes. These biases are typically more pronounced when resilience metrics are aggregated (for example, by land-cover type or region), whereas estimates for individual time series remain reliable at reasonable sensor signal-to-noise ratios. Our work provides guidelines for the treatment and aggregation of multi-instrument data in studies of critical transitions and resilience.
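The bias mechanism can be reproduced in a few lines: an AR(1) process with fixed dynamics, observed by two instruments with different noise levels, shows a variance jump and a lag-one autocorrelation drop that reflect the measurement process rather than any change in the system's resilience. The AR coefficient and noise levels below are arbitrary illustration values.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-one autocorrelation of a 1-D series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(42)
n = 2000
signal = np.zeros(n)                 # AR(1) "vegetation" signal, fixed dynamics
for t in range(1, n):
    signal[t] = 0.8 * signal[t - 1] + rng.standard_normal()

# same underlying system, observed by two instruments with different noise
noise_sd = np.where(np.arange(n) < n // 2, 0.5, 2.0)   # noisier later sensor
observed = signal + noise_sd * rng.standard_normal(n)

var_early, var_late = observed[:n // 2].var(), observed[n // 2:].var()
ac_early = lag1_autocorr(observed[:n // 2])
ac_late = lag1_autocorr(observed[n // 2:])
```

A naive resilience analysis would read the late segment as a destabilising system (higher variance, lower autocorrelation appears as *faster* recovery here because white noise dilutes the memory), even though the AR(1) dynamics never changed.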
Models for the prediction of monetary losses from floods mainly blend data deemed to represent a single flood type and region. Moreover, these approaches largely ignore indicators of preparedness and how predictors may vary between regions and events, challenging the transferability of flood loss models. We use a flood loss database of 1812 German flood-affected households to explore how Bayesian multilevel models can estimate normalised flood damage stratified by event, region, or flood process type. Multilevel models acknowledge natural groups in the data and allow each group to learn from others. We obtain posterior estimates that differ between flood types, with credibly varying influences of water depth, contamination, duration, implementation of property-level precautionary measures, insurance, and previous flood experience; these influences overlap across most events or regions, however. We infer that the underlying damaging processes of distinct flood types deserve further attention. Each reported flood loss and affected region involved mixed flood types, likely explaining the uncertainty in the coefficients. Our results emphasise the need to consider flood types as an important step towards applying flood loss models elsewhere. We argue that failing to do so may unduly generalise the model and systematically bias loss estimations from empirical data.
Widespread flooding in June 2013 caused damage costs of €6–8 billion in Germany and revived memories of the floods of August 2002, which resulted in total damage of €11.6 billion and was thus the most expensive natural hazard event in Germany to date. The 2002 event does, however, also mark a reorientation toward an integrated flood risk management system in Germany. The flood of 2013 therefore offered the opportunity to review how the measures that politics, administration, and civil society have implemented since 2002 helped to cope with the flood, and what still needs to be done to achieve effective and more integrated flood risk management. The review highlights considerable improvements on many levels, in particular (1) an increased consideration of flood hazards in spatial planning and urban development, (2) comprehensive property-level mitigation and preparedness measures, (3) more effective flood warnings and improved coordination of disaster response, and (4) a more targeted maintenance of flood defense systems. In 2013, this led to more effective flood management and to a reduction of damage. Nevertheless, important aspects remain unclear and need to be clarified. This particularly holds for balanced and coordinated strategies for reducing and overcoming the impacts of flooding in large catchments, cross-border and interdisciplinary cooperation, the role of the general public in the different phases of flood risk management, as well as a transparent risk transfer system. Recurring flood events reveal that flood risk management is a continuous task. Hence, risk drivers, such as climate change, land-use changes, economic developments, or demographic change and the resultant risks must be investigated at regular intervals, and risk reduction strategies and processes must be reassessed as well as adapted and implemented in a dialogue with all stakeholders.
Hydrometeorological hazards caused losses of approximately 110 billion U.S. dollars in 2016 worldwide. Current damage estimations do not consider the uncertainties in a comprehensive way, and they are not consistent between spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated with, company damage data from the flood of 2013 in Germany. Damage estimates are more accurate than those of damage models using land use data, and the estimation works reliably at all spatial scales. Therefore, it can also be used for pre-event analysis and risk assessments. This method takes hydrometeorological damage estimation and risk assessments to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
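The core of such a probabilistic, object-based estimate can be sketched as a Monte Carlo simulation: each exposed object carries a damage distribution rather than a point value, and sample-wise summation propagates uncertainty to any spatial aggregation level. The depth-damage relation, asset values, and water depths below are hypothetical.

```python
# Sketch of probabilistic object-based damage estimation: draw damage
# samples per object, then sum sample-wise so the full distribution of
# total damage (and hence its uncertainty) survives aggregation.
# The linear depth-damage relation and all values are hypothetical.
import random

random.seed(42)

N = 1000  # Monte Carlo samples per object

def sample_object_damage(asset_value, water_depth_m, n=N):
    """Draw n damage samples for one object (hypothetical depth-damage model)."""
    samples = []
    for _ in range(n):
        # uncertain damage ratio: mean grows with water depth, plus noise,
        # clipped to the physically meaningful range [0, 1]
        ratio = min(1.0, max(0.0, random.gauss(0.2 * water_depth_m, 0.1)))
        samples.append(ratio * asset_value)
    return samples

# Three hypothetical companies: (asset value in EUR, water depth in m)
objects = [(500_000, 0.5), (1_200_000, 1.5), (250_000, 2.0)]
per_object = [sample_object_damage(v, d) for v, d in objects]

# Aggregate by summing sample-wise -> distribution of total damage
totals = sorted(sum(s[i] for s in per_object) for i in range(N))
median = totals[N // 2]
p90 = totals[int(N * 0.9)]
```

Because aggregation happens on the samples, the same machinery yields a damage distribution for a single object, a municipality, or the whole country.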
The semiarid northeast of Brazil is one of the most densely populated dryland regions in the world and recurrently affected by severe droughts. Thus, reliable seasonal forecasts of streamflow and reservoir storage are of high value for water managers. Such forecasts can be generated by applying either hydrological models representing underlying processes or statistical relationships exploiting correlations among meteorological and hydrological variables. This work evaluates and compares the performances of seasonal reservoir storage forecasts derived by a process-based hydrological model and a statistical approach.
Driven by observations, both models achieve similar simulation accuracies. In a hindcast experiment, however, the accuracy of estimating regional reservoir storages was considerably lower for the process-based hydrological model, whereas the resolution and reliability of drought event predictions were similar for both approaches. Further investigation into the deficiencies of the process-based model revealed a significant influence of antecedent wetness conditions and a higher sensitivity of model prediction performance to rainfall forecast quality.
Within the scope of this study, the statistical model proved to be the more straightforward approach for predicting reservoir levels and drought events at regionally and monthly aggregated scales. However, for forecasts at finer spatial and temporal scales, or for the investigation of underlying processes, the costly initialisation and application of a process-based model can be worthwhile. Furthermore, the application of innovative data products, such as remote sensing data, and operational model correction methods, like data assimilation, may allow for an enhanced exploitation of the advanced capabilities of process-based hydrological models.
Cosmic-ray neutron sensing (CRNS) has become an effective method to measure soil moisture at a horizontal scale of hundreds of metres and a depth of decimetres. Recent studies proposed operating CRNS in a network with overlapping footprints in order to cover root-zone water dynamics at the small catchment scale and, at the same time, to represent spatial heterogeneity. In a joint field campaign from September to November 2020 (JFC-2020), five German research institutions deployed 15 CRNS sensors in the 0.4 km² Wüstebach catchment (Eifel mountains, Germany). The catchment is dominantly forested (but includes a substantial fraction of open vegetation) and features a topographically distinct catchment boundary. In addition to the dense CRNS coverage, the campaign featured a unique combination of additional instruments and techniques: hydro-gravimetry (to detect water storage dynamics also below the root zone); ground-based and, for the first time, airborne CRNS roving; an extensive wireless soil sensor network, supplemented by manual measurements; and six weighable lysimeters. Together with comprehensive data from the long-term local research infrastructure, the published data set (available at https://doi.org/10.23728/b2share.756ca0485800474e9dc7f5949c63b872; Heistermann et al., 2022) will be a valuable asset in various research contexts: to advance the retrieval of landscape water storage from CRNS, wireless soil sensor networks, or hydrogravimetry; to identify scale-specific combinations of sensors and methods to represent soil moisture variability; to improve the understanding and simulation of land–atmosphere exchange as well as hydrological and hydrogeological processes at the hillslope and the catchment scale; and to support the retrieval of soil water content from airborne and spaceborne remote sensing platforms.
High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days decade⁻¹ over the 29-year study period (5-25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA. 
Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes than trends in snowmelt end. (2) Areas with long snowmelt periods, such as the Tibetan Plateau, show the strongest compression of the snowmelt season (negative trends). These trends are apparent regardless of the time period over which the regression is performed. (3) While trends averaged over 3 decades indicate generally earlier snowmelt seasons, data from the last 14 years (2002-2016) exhibit positive trends in many regions, such as parts of the Pamir and Kunlun Shan. Due to the short nature of the time series, it is not clear whether this change is a reversal of a long-term trend or simply interannual variability. (4) Some regions with stable or growing glaciers - such as the Karakoram and Kunlun Shan - see slightly later snowmelt seasons and longer snowmelt periods. It is likely that changes in the snowmelt regime of HMA account for some of the observed heterogeneity in glacier response to climate change. While the decadal increases in regional temperature have in general led to earlier and shortened melt seasons, changes in HMA's cryosphere have been spatially and temporally heterogeneous.
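The melt-window detection described above (from maximum passive microwave signal separation to snow clearance) can be sketched as follows. The synthetic series, threshold, and channel-difference proxy are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of melt-window detection from a passive-microwave time series:
# the melt season is bracketed by the day of maximum channel separation
# (liquid water depresses the high-frequency brightness temperature)
# and the day the separation returns to a snow-free background level.
# Synthetic data; the threshold and channel choice are illustrative.

def melt_window(diff_series, snow_free_level=2.0):
    """diff_series: daily low-minus-high frequency Tb difference (K).
    Returns (melt_onset_day, snow_clearance_day)."""
    onset = max(range(len(diff_series)), key=lambda i: diff_series[i])
    clearance = onset
    for i in range(onset, len(diff_series)):
        if diff_series[i] <= snow_free_level:
            clearance = i
            break
    return onset, clearance

# Synthetic year: channel separation ramps up to a melt peak at day 120,
# then decays to the snow-free background (reached around day 156)
series = [min(i / 6.0, 20.0) if i <= 120 else max(20.0 - (i - 120) * 0.5, 1.0)
          for i in range(365)]
onset, clearance = melt_window(series)
```

On this synthetic year the detected window runs from day 120 to day 156; trend analysis would then compare such onset and clearance dates across years and clustered regions.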
The Arctic is greatly impacted by climate change. The increase in air temperature drives the thawing of permafrost and an increase in coastal erosion and river discharge. This leads to a greater input of sediment and organic matter into coastal waters, which substantially impacts ecosystems by reducing light transmission through the water column and altering the biogeochemistry; it also affects the subsistence economy of local people and feeds back on climate through the transformation of organic matter into greenhouse gases. Yet the quantification of suspended sediment in Arctic coastal and nearshore waters remains unsatisfactory due to the absence of dedicated algorithms to resolve the high loads occurring in the close vicinity of the shoreline. In this study we present the Arctic Nearshore Turbidity Algorithm (ANTA), the first reflectance-turbidity relationship specifically targeted at Arctic nearshore waters, tuned with in-situ measurements from the nearshore waters of Herschel Island Qikiqtaruk in the western Canadian Arctic. A semi-empirical model was calibrated for several sensors relevant to ocean color remote sensing, including MODIS, Sentinel 3 (OLCI), Landsat 8 (OLI), and Sentinel 2 (MSI), as well as the older Landsat sensors TM and ETM+. The ANTA performed better with Landsat 8 than with Sentinel 2 and Sentinel 3. The application of the ANTA to Sentinel 2 imagery matching in-situ turbidity samples taken in Adventfjorden, Svalbard, demonstrates its transferability to nearshore areas beyond Herschel Island Qikiqtaruk.
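Semi-empirical reflectance-turbidity models of this kind commonly take a Nechad-type form, T = A·ρ/(1 − ρ/C), with the coefficient A calibrated against in-situ turbidity. Whether ANTA uses exactly this form is an assumption here, and all numbers below are synthetic.

```python
# Sketch of a Nechad-type semi-empirical turbidity model, a common form
# behind reflectance-turbidity algorithms:
#     T = A * rho / (1 - rho / C)
# where rho is water-leaving reflectance, C is a band-specific constant,
# and A is calibrated against in-situ turbidity. Numbers are synthetic;
# this is not ANTA's published calibration.

def turbidity(rho, A, C):
    """Turbidity from reflectance (saturates as rho approaches C)."""
    return A * rho / (1.0 - rho / C)

def calibrate_A(rho_obs, turb_obs, C):
    """Least-squares fit of A with C held fixed (x_i = rho_i/(1-rho_i/C))."""
    x = [r / (1.0 - r / C) for r in rho_obs]
    return sum(xi * ti for xi, ti in zip(x, turb_obs)) / sum(xi * xi for xi in x)

C = 0.17          # illustrative saturation reflectance for a red band
A_true = 300.0    # illustrative calibration coefficient
rho = [0.01, 0.03, 0.06, 0.10, 0.14]
turb = [turbidity(r, A_true, C) for r in rho]   # perfect synthetic samples
A_fit = calibrate_A(rho, turb, C)
```

The rational form is what lets such models resolve the very high nearshore loads: unlike a linear model, turbidity keeps growing as reflectance approaches its saturation value C.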
Sedimentary ancient DNA has been proposed as a key methodology for reconstructing biodiversity over time. Yet, despite the concentration of Earth’s biodiversity in the tropics, this method has rarely been applied in this region. Moreover, the taphonomy of sedimentary DNA, especially in tropical environments, is poorly understood. This study elucidates challenges and opportunities of sedimentary ancient DNA approaches for reconstructing tropical biodiversity. We present shotgun-sequenced metagenomic profiles and DNA degradation patterns from multiple sediment cores from Mubwindi Swamp, located in Bwindi Impenetrable Forest (Uganda), one of the most diverse forests in Africa. We describe the taxonomic composition of the sediments covering the past 2200 years and compare the sedimentary DNA data with a comprehensive set of environmental and sedimentological parameters to unravel the conditions of DNA degradation. Consistent with the preservation of authentic ancient DNA in tropical swamp sediments, DNA concentration and mean fragment length declined exponentially with age and depth, while terminal deamination increased with age. DNA preservation patterns cannot be explained by any environmental parameter alone, but age seems to be the primary driver of DNA degradation in the swamp. Besides degradation, the presence of living microbial communities in the sediment also affects DNA quantity. Critically, 92.3% of our metagenomic data of a total of 81.8 million unique, merged reads cannot be taxonomically identified due to the absence of genomic references in public databases. Of the remaining 7.7%, most of the data (93.0%) derive from Bacteria and Archaea, whereas only 0–5.8% are from Metazoa and 0–6.9% from Viridiplantae, in part due to unbalanced taxa representation in the reference data. The plant DNA record at ordinal level agrees well with local pollen data but resolves less diversity. 
Our animal DNA record reveals the presence of 41 native taxa (16 orders) including Afrotheria, Carnivora, and Ruminantia at Bwindi during the past 2200 years. Overall, we observe no decline in taxonomic richness with increasing age suggesting that several-thousand-year-old information on past biodiversity can be retrieved from tropical sediments. However, comprehensive genomic surveys of tropical biota need prioritization for sedimentary DNA to be a viable methodology for future tropical biodiversity studies.
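The exponential decline of DNA concentration with age noted above is the kind of relation typically quantified by a log-linear regression to estimate a decay constant. The ages and concentrations below are synthetic, not the Mubwindi data.

```python
# Sketch of fitting an exponential degradation model C(t) = C0 * exp(-k t),
# which is linear in log space: ln C = ln C0 - k * t. A simple ordinary
# least-squares fit on the log-transformed concentrations recovers the
# decay constant k. Ages and concentrations are synthetic.
import math

def fit_decay(ages, conc):
    """Fit ln C = ln C0 - k * age; return (C0, k)."""
    ly = [math.log(c) for c in conc]
    n = len(ages)
    mx = sum(ages) / n
    my = sum(ly) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(ages, ly)) / \
            sum((x - mx) ** 2 for x in ages)
    return math.exp(my - slope * mx), -slope

ages = [0, 500, 1000, 1500, 2200]                   # years BP
conc = [10.0 * math.exp(-0.001 * a) for a in ages]  # ng/g, exact synthetic decay
c0, k = fit_decay(ages, conc)
```

With real core data the scatter around this line, and its residual correlation with environmental parameters, is what distinguishes age-driven degradation from other preservation controls.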
The first step towards assessing hazards in seismically active regions involves mapping capable faults and estimating their recurrence times. While the mapping of active faults is commonly based on distinct geologic and geomorphic features evident at the surface, mapping blind seismogenic faults is complicated by the absence of on-fault diagnostic features. Here we investigated the Pichilemu Fault in coastal Chile, unknown until it generated a Mw 7.0 earthquake in 2010. The lack of evident surface faulting suggests activity along a partly hidden blind fault. We used off-fault deformed marine terraces to estimate a fault-slip rate of 0.52 ± 0.04 m/ka, which, when integrated with satellite geodesy, suggests a 2.12 ± 0.2 ka recurrence time for Mw~7.0 normal-faulting earthquakes. We propose that extension in the Pichilemu region is associated with stress changes during megathrust earthquakes and is accommodated by sporadic slip during upper-plate earthquakes, which has implications for assessing the seismic potential of cryptic faults along convergent margins and elsewhere.
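As a plausibility check, a recurrence time of this kind follows from dividing the slip released per event by the long-term slip rate. The ~1.1 m per-event slip assumed below is a hypothetical value chosen only to be consistent with the abstract's numbers, not a figure from the study.

```python
# Back-of-the-envelope closure of the recurrence estimate: if the fault
# accumulates slip at the terrace-derived long-term rate and releases it
# in Mw ~7 events, then recurrence = per-event slip / slip rate.
slip_rate_m_per_ka = 0.52   # marine-terrace derived slip rate (from the abstract)
slip_per_event_m = 1.1      # hypothetical coseismic slip for a Mw ~7.0 event

recurrence_ka = slip_per_event_m / slip_rate_m_per_ka   # about 2.1 ka
```

A per-event slip near one metre is thus the implicit scale that links the 0.52 m/ka slip rate to the reported 2.12 ka recurrence.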
When dealing with issues that are of high societal relevance, Earth sciences still face a lack of acceptance, which is partly rooted in insufficient communication strategies on the individual and local community level. To increase the efficiency of communication routines, science has to transform its outreach concepts to become more aware of individual needs and demands. The “encoding/decoding” concept as well as critical intercultural communication studies can offer pivotal approaches for this transformation.
Sediment archives in the terrestrial and marine realm are regularly analyzed to infer changes in climatic, tectonic, or anthropogenic boundary conditions of the past. However, contradictory observations have been made regarding whether short-period events are faithfully preserved in stratigraphic archives; for instance, in marine sediments offshore large river systems. On the one hand, short-period events are hypothesized to be non-detectable in the signature of terrestrially derived sediments due to buffering during sediment transport along large river systems. On the other hand, several studies have detected signals of short-period events in marine records offshore large river systems. We propose that this apparent discrepancy is related to the lack of a differentiation between different types of signals and the lack of distinction between river response times and signal propagation times. In this review, we (1) expand the definition of the term ‘signal’ and group signals into sub-categories related to hydraulic grain size characteristics, (2) clarify the different types of ‘times’ and suggest a precise and consistent terminology for future use, and (3) compile and discuss factors influencing the times of signal transfer along sediment routing systems and how those times vary with hydraulic grain size characteristics. Unraveling different types of signals and distinctive time periods related to signal propagation addresses the discrepancies mentioned above and allows a more comprehensive exploration of event preservation in stratigraphy – a prerequisite for reliable environmental reconstructions from terrestrially derived sedimentary records.
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards from glacial lake outburst floods (GLOFs), which can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of the modeled peak discharges as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. With 66% of the sampled HPP on potential GLOF tracks, up to one third of these HPP could experience GLOF discharges well above local design floods as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
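The Monte Carlo logic can be sketched generically: sample uncertain outburst volumes, push each sample through a peak-discharge relation, and read the spread of the resulting peaks as predictive uncertainty. The power-law coefficients and the lognormal volume distribution below are hypothetical, not the study's calibrated model.

```python
# Generic sketch of Monte Carlo GLOF peak-discharge uncertainty: uncertain
# outburst volumes V are sampled and mapped to peak discharge through a
# power-law relation Qp = k * V**a. The coefficients k, a and the volume
# distribution are hypothetical placeholders, not the study's values.
import math
import random

random.seed(7)

def peak_discharge(volume_m3, k=0.0048, a=0.896):
    """Power-law dam-breach peak discharge (hypothetical coefficients)."""
    return k * volume_m3 ** a

# Sample plausible outburst volumes for one lake (lognormal, hypothetical)
volumes = [random.lognormvariate(math.log(5e6), 0.8) for _ in range(5000)]
peaks = sorted(peak_discharge(v) for v in volumes)

q_median = peaks[len(peaks) // 2]
q05, q95 = peaks[249], peaks[4749]      # ~5th and ~95th percentiles
spread = q95 / q05                      # predictive uncertainty as a ratio
```

The `spread` ratio is the kind of quantity that grows for HPP sited close to glacial lakes, where wave attenuation has not yet narrowed the range of possible peaks.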
The interactions between the atmosphere and steep topography in the eastern south–central Andes result in complex relations and inhomogeneous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall and their spatial patterns—both at the valley and mountain-belt scales—are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients in the NW Argentine Andes, stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF’s (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients in dew-point temperature and CAPE along and across the strike of the Andes, both of which control rainfall distributions. Second, we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE through a multivariable regression analysis. The correlation changes in space along the climatic and topographic gradients and helps to explain the controlling factors for extreme-rainfall generation. Third, we observe a greater contribution (higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas, as compared to the high-elevation central Andean Plateau, for 90th-percentile rainfall. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation area between low and high elevations, especially in the transition zone between the tropical and subtropical areas, for the 90th-percentile rainfall. 
Fourth, we find that the parameters of the multivariable regression using CAPE and Td can explain rainfall with higher statistical significance for the 90th percentile compared to lower rainfall percentiles. Based on our results, the spatial pattern of rainfall-extreme events during the past ∼16 years can be described by a combination of dew-point temperature and CAPE in the south–central Andes.
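The multivariable regression step described above can be sketched as an ordinary least-squares fit of percentile rainfall on CAPE and Td. The data below are synthetic, and the linear form is a simplification of the study's analysis, which uses ERA-Interim and TRMM fields.

```python
# Sketch of the multivariable regression: 90th-percentile rainfall is
# modelled as P90 = b0 + b1 * CAPE + b2 * Td and fitted by ordinary
# least squares via the normal equations (3x3 Gaussian elimination).
# Data are synthetic; coefficients below are recovered exactly.

def ols_2pred(x1, x2, y):
    """OLS for y = b0 + b1*x1 + b2*x2; returns [b0, b1, b2]."""
    n = len(y)
    cols = [[1.0] * n, list(x1), list(x2)]        # design matrix columns
    # Normal equations: A = X^T X, v = X^T y
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    v = [sum(c * yi for c, yi in zip(ci, y)) for ci in cols]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            v[r] -= f * v[i]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                           # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# Synthetic grid cells: rainfall responds to both CAPE and Td
cape = [200, 400, 800, 1200, 1600, 2000]         # J/kg
td = [8, 10, 12, 14, 16, 18]                     # deg C
p90 = [5 + 0.004 * c + 0.8 * t for c, t in zip(cape, td)]   # mm/day, exact
b0, b1, b2 = ols_2pred(cape, td, p90)
```

Comparing the fitted b1 and b2 across subregions is, in essence, how the relative importance of CAPE versus Td along the climatic gradient is diagnosed.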
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference with respect to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage.
The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated whether the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
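The quality-weighted bias estimator described above reduces to a weighted mean of matched reflectivity differences. The sketch below uses synthetic matchups; how the quality index is composed from beam blockage and PIA is an assumption here.

```python
# Sketch of the quality-weighted calibration bias: each SR-GR match i
# carries a quality weight w_i (here imagined to reflect beam blockage
# and path-integrated attenuation), and the bias is the weighted mean
#     bias = sum(w_i * (GR_i - SR_i)) / sum(w_i).
# All values are synthetic.

def calibration_bias(gr_dbz, sr_dbz, quality):
    """Quality-weighted mean GR-SR reflectivity difference (dB)."""
    num = sum(w * (g - s) for w, g, s in zip(quality, gr_dbz, sr_dbz))
    den = sum(quality)
    return num / den

# Matched reflectivities (dBZ); the last two GR samples are degraded
# by heavy attenuation / beam blockage and get low quality weights
gr = [28.0, 31.0, 35.0, 22.0, 18.0]
sr = [30.0, 33.0, 37.0, 30.0, 29.0]
w  = [1.0, 1.0, 1.0, 0.1, 0.05]

bias = calibration_bias(gr, sr, w)   # close to the -2 dB of the good matches
```

Down-weighting the degraded matches keeps the estimate near the true miscalibration (-2 dB here) instead of the unweighted mean (-5 dB), which conflates calibration bias with attenuation.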
In this study, we validate and compare elevation accuracy and geomorphic metrics of satellite-derived digital elevation models (DEMs) on the southern Central Andean Plateau. The plateau has an average elevation of 3.7 km and is characterized by diverse topography and relief, a lack of vegetation, and clear skies that create ideal conditions for remote sensing. At 30m resolution, SRTM-C, ASTER GDEM2, stacked ASTER L1A stereopair DEM, ALOS World 3D, and TanDEM-X have been analyzed. The higher-resolution datasets include 12m TanDEM-X, 10m single-CoSSC TerraSAR-X/TanDEM-X DEMs, and 5m ALOS World 3D. These DEMs are state of the art for optical (ASTER and ALOS) and radar (SRTM-C and TanDEM-X) spaceborne sensors. We assessed vertical accuracy as the standard deviation of DEM elevations compared against 307 509 differential GPS measurements spanning 4000m of elevation. For the 30m DEMs, the ASTER datasets had the highest vertical standard deviation at > 6.5 m, whereas the SRTM-C, ALOS World 3D, and TanDEM-X were all < 3.5 m. Higher-resolution DEMs generally had lower uncertainty, with both the 12m TanDEM-X and 5m ALOS World 3D having < 2m vertical standard deviation. Analysis of vertical uncertainty with respect to terrain elevation, slope, and aspect revealed low uncertainty across these attributes for SRTM-C (30 m), TanDEM-X (12–30 m), and ALOS World 3D (5–30 m). Single-CoSSC TerraSAR-X/TanDEM-X 10m DEMs and the 30m ASTER GDEM2 displayed slight aspect biases, which were removed in their stacked counterparts (TanDEM-X and ASTER Stack). Based on low vertical standard deviations and visual inspection alongside optical satellite data, we selected the 30m SRTM-C, 12–30m TanDEM-X, 10m single-CoSSC TerraSAR-X/TanDEM-X, and 5m ALOS World 3D for geomorphic metric comparison in a 66 km² catchment with a distinct river knickpoint. Consistent m/n values were found using chi plot channel profile analysis, regardless of DEM type and spatial resolution. 
Slope, curvature, and drainage area were calculated, and plotting schemes were used to assess basin-wide differences in the hillslope-to-valley transition related to the knickpoint. While slope and hillslope length measurements vary little between datasets, curvature displays higher-magnitude measurements with finer resolution. This is especially true for the optical 5m ALOS World 3D DEM, for which a Fourier frequency analysis revealed high-frequency noise at 2–8 pixel wavelengths. The improvements in accurate spaceborne radar DEMs (e.g., TanDEM-X) for geomorphometry are promising, but airborne or terrestrial data are still necessary for meter-scale analysis.
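The vertical-accuracy metric used for the DEM comparison, the standard deviation of DEM-minus-dGPS elevation differences, can be sketched in a few lines. The control points below are synthetic stand-ins for the 307 509 differential GPS measurements.

```python
# Sketch of the vertical accuracy assessment: compute elevation
# differences between each DEM and differential GPS control points,
# then take their standard deviation as the uncertainty measure.
# The control points and DEM values below are synthetic.
from statistics import stdev

def vertical_std(dem_elev, gps_elev):
    """Standard deviation of DEM-minus-GPS elevation differences (m)."""
    diffs = [d - g for d, g in zip(dem_elev, gps_elev)]
    return stdev(diffs)

# Synthetic control points on a high plateau (elevations in m)
gps   = [3500.0, 3620.5, 3710.2, 3805.9, 3950.3]
dem_a = [3501.2, 3619.0, 3711.5, 3804.1, 3951.8]   # low-noise DEM
dem_b = [3493.0, 3628.0, 3704.0, 3812.0, 3941.0]   # noisier DEM

std_a = vertical_std(dem_a, gps)
std_b = vertical_std(dem_b, gps)
```

Using the standard deviation rather than the mean difference separates random vertical noise (which degrades curvature and slope metrics) from a constant datum offset.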