Beta diversity is a conceptual link between diversity at local and regional scales. Various methodologies for quantifying this and related phenomena have been applied; among them, measures of pairwise (dis)similarity of sites are particularly popular. Undersampling, i.e. not recording all taxa present at a site, is a common situation in ecological data. Bias in many metrics related to beta diversity must therefore be expected, but only a few studies have explicitly investigated the properties of various measures under undersampling conditions. On the basis of an empirical data set, representing near-complete local inventories of the Lepidoptera of an isolated Pacific island, as well as simulated communities with varying properties, we mimicked different levels of undersampling. We used 14 different approaches to quantify beta diversity, among them dataset-wide multiplicative partitioning (i.e. 'true' beta diversity) and pairwise site × site dissimilarities. We compared their values from incomplete samples to true results from the full data. We used these comparisons to quantify undersampling bias, and we calculated correlations of the dissimilarity measures of undersampled data with complete data of sites. Almost all tested metrics showed bias and low correlations under moderate to severe undersampling conditions (as well as deteriorating precision, i.e. large chance effects on results). Measures that used only species incidence were very sensitive to undersampling, while abundance-based metrics with a high dependency on the distribution of the most common taxa were particularly robust. Simulated data showed sensitivity of results to the abundance distribution, confirming that data sets of high evenness and/or the application of metrics that are strongly affected by rare species are particularly sensitive to undersampling.
The class of beta measure to be used should depend on the research question being asked, as different metrics can lead to quite different conclusions even without undersampling effects. For each class of metric, there is a trade-off between robustness to undersampling and sensitivity to rare species. In consequence, using incidence-based metrics carries a particular risk of false conclusions when undersampled data are involved. Developing bias corrections for such metrics would be desirable.
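The metric classes contrasted above can be sketched with toy data. The following is a minimal illustration, not code from the study: Whittaker's multiplicative beta (gamma richness over mean alpha richness), an incidence-based pairwise measure (Jaccard dissimilarity), and an abundance-based one (Bray-Curtis, which is dominated by common taxa and hence, per the study's findings, more robust to undersampling). Function names and data are invented for illustration.

```python
def whittaker_beta(site_species):
    """Multiplicative partitioning: beta = gamma / mean alpha."""
    gamma = len(set().union(*site_species))               # regional richness
    alpha = sum(len(s) for s in site_species) / len(site_species)
    return gamma / alpha

def jaccard_dissimilarity(a, b):
    """Incidence-based pairwise dissimilarity: 1 - |A∩B| / |A∪B|."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

def bray_curtis(x, y):
    """Abundance-based dissimilarity; weighted toward common taxa."""
    keys = set(x) | set(y)
    num = sum(abs(x.get(k, 0) - y.get(k, 0)) for k in keys)
    den = sum(x.get(k, 0) + y.get(k, 0) for k in keys)
    return num / den

# two toy sites sharing two of four species
sites = [{"sp1", "sp2", "sp3"}, {"sp2", "sp3", "sp4"}]
print(whittaker_beta(sites))          # 4 / 3 ≈ 1.333
print(jaccard_dissimilarity(*sites))  # 1 - 2/4 = 0.5
```

Dropping a rare species from one toy site changes the Jaccard value strongly but barely moves Bray-Curtis when abundances are skewed, which is the undersampling sensitivity the abstract describes.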
The efficiency of sediment routing from land to the ocean depends on the position of submarine canyon heads with regard to terrestrial sediment sources. We aim to identify the main controls on whether a submarine canyon head remains connected to terrestrial sediment input during Holocene sea-level rise. Globally, we identified 798 canyon heads that are currently located at the 120 m depth contour (the Last Glacial Maximum shoreline) and 183 canyon heads that are connected to the shore (within a distance of 6 km) during the present-day highstand. Regional hotspots of shore-connected canyons are the Mediterranean active margin and the Pacific coast of Central and South America. We used 34 terrestrial and marine predictor variables to predict shore-connected canyon occurrence using Bayesian regression. Our analysis shows that steep and narrow shelves facilitate canyon-head connectivity to the shore. Moreover, shore-connected canyons occur preferentially along active margins characterized by resistant bedrock and high river-water discharge.
Understanding how Earth-surface processes respond to past climatic perturbations is crucial for making informed predictions about future impacts of climate change on sediment fluxes. Sedimentary records provide the archives for inferring these processes, but their interpretation is compromised by our incomplete understanding of how sediment-routing systems respond to millennial-scale climate cycles. We analyzed seven sediment cores recovered from marine turbidite depositional sites along the Chile continental margin. The sites span a pronounced arid-to-humid gradient with variable relief and related sediment connectivity of terrestrial and marine environments. These sites allowed us to study event-related depositional processes in different climatic and geomorphic settings from the Last Glacial Maximum to the present day. The three sites reveal a steep decline of turbidite deposition during deglaciation. High rates of sea-level rise postdate the decline in turbidite deposition. Comparison with paleoclimate proxies documents that the spatio-temporal sedimentary pattern instead mirrors the deglacial humidity decrease and concomitant warming, with no resolvable lag times. Our results let us infer that declining deglacial humidity decreased fluvial sediment supply. This signal propagated rapidly through the highly connected systems into the marine sink in north-central Chile. In contrast, in south-central Chile, connectivity between the Andean erosional zone and the fluvial transfer zone probably decreased abruptly through sediment trapping in piedmont lakes related to deglaciation, resulting in a sudden decrease of sediment supply to the ocean. Additionally, reduced moisture supply may have contributed to the rapid decline of turbidite deposition. These different causes result in similar depositional patterns in the marine sinks.
We conclude that turbiditic strata may constitute reliable recorders of climate change across a wide range of climatic zones and geomorphic conditions. However, the underlying causes for similar signal manifestations in the sinks may differ, ranging from maintained high system connectivity to abrupt connectivity loss.
Mass wasting is an important process for denuding hillslopes and lowering ridge crests in active mountain belts such as the Himalaya-Karakoram ranges (HKR). Such a high-relief landscape is likely to be at its mechanical threshold, maintained by competing rapid rock uplift, river incision, and pervasive slope failure. We introduce excess topography, Z_E, for quantifying potentially unstable rock-mass volumes inclined at angles greater than a specified threshold angle. We find that Z_E peaks along major fluvial and glacial inner gorges, which is also where the majority of 492 large (>0.1 km²) rock-slope failures occur in the Himalaya's largest cluster of documented Pleistocene to Holocene bedrock landslides. Our data reveal that bedrock landslides in the HKR chiefly detached from near or below the median elevation, whereas glaciers and rock glaciers occupy higher-elevation bands almost exclusively. Less than 10% of the area of the HKR is upslope of glaciers, such that possible censoring of evidence of large bedrock landslides above the permanent snow line barely affects this finding. Bedrock landslides appear to preferentially undermine topographic relief in response to fluvial and glacial incision along inner gorges, unless more frequent and smaller undetected failures, or rigorous (peri-)glacial erosion, compensate for this role at higher elevation. Either way, the distinct patterns of excess topography and large bedrock landsliding in the HKR juxtapose two stacked domains of landslide and (peri-)glacial erosion that may respond to different time scales of perturbation. Our findings call for more detailed analysis of vertical erosional domains and their geomorphic coupling in active mountain belts.
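The idea behind excess topography can be sketched in one dimension: Z_E at a point is the elevation standing above the steepest admissible surface, i.e. a surface that nowhere rises faster than the threshold angle away from any other point. The brute-force O(n²) loop below is only an illustrative simplification of the concept under that assumption; the published method operates on 2-D DEMs with more efficient morphological operators.

```python
import math

def excess_topography_1d(z, dx, threshold_deg):
    """Excess topography Z_E for a 1-D profile: height above the lowest
    envelope surface whose slope nowhere exceeds the threshold angle.
    Brute-force sketch: for each column, take the minimum over all other
    columns of (their elevation + threshold rise over the separation)."""
    t = math.tan(math.radians(threshold_deg))
    n = len(z)
    ze = []
    for i in range(n):
        surface = min(z[j] + t * abs(i - j) * dx for j in range(n))
        ze.append(max(z[i] - surface, 0.0))
    return ze

# a vertical-walled plateau: the plateau top far exceeds a 30-degree envelope,
# while the surrounding flats carry no excess topography at all
profile = [0, 0, 100, 100, 0, 0]
print(excess_topography_1d(profile, dx=10, threshold_deg=30))
```

Gentle terrain below the threshold angle yields Z_E = 0 everywhere, which is why Z_E concentrates along steep inner gorges in the study area.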
The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, among them a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented because of timely evacuation of low-lying areas. We hindcast the flood using the BREACH physically based dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those resulting from various empirical formulas and a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as using raw topography underestimates flow depth and speed and overestimates flood wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model but the 1-dimensional model generates a more conservative result and can be run in a much shorter time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.
Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.
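The contrast between a first-order upwind scheme and a flux-limited TVD scheme can be shown on the simplest advection problem. The sketch below (a generic illustration, not the TTLEM implementation) advects a step profile, standing in for a retreating knickpoint, with the linear equation u_t + a·u_x = 0: the minmod-limited scheme keeps the front sharp, while plain upwinding smears it by numerical diffusion.

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smallest slope otherwise."""
    if a * b <= 0:
        return 0.0
    return min(abs(a), abs(b)) * (1 if a > 0 else -1)

def advect(u, c, steps, limited):
    """Advance a periodic profile with Courant number c = a*dt/dx < 1.
    limited=False gives first-order upwind; limited=True adds a minmod
    MUSCL reconstruction, making the scheme second-order and TVD."""
    u = list(u)
    n = len(u)
    for _ in range(steps):
        s = [minmod(u[(i + 1) % n] - u[i], u[i] - u[i - 1]) if limited else 0.0
             for i in range(n)]
        # reconstructed value just upwind of each cell interface
        f = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]
        u = [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return u

step = [1.0] * 20 + [0.0] * 60   # a "knickpoint" step on a periodic domain
sharp = advect(step, c=0.5, steps=40, limited=True)
smear = advect(step, c=0.5, steps=40, limited=False)
# counting cells with intermediate values shows the limited front is far steeper
```

Both variants are conservative (the flux form telescopes over the periodic domain), so the difference is purely in how sharply the migrating front is preserved, which is exactly the property at stake when LEMs simulate retreating cliffs and knickpoints.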
Modelling the transfer of supraglacial meltwater to the bed of Leverett Glacier, Southwest Greenland
(2015)
Meltwater delivered to the bed of the Greenland Ice Sheet is a driver of variable ice-motion through changes in effective pressure and enhanced basal lubrication. Ice surface velocities have been shown to respond rapidly both to meltwater production at the surface and to drainage of supraglacial lakes, suggesting efficient transfer of meltwater from the supraglacial to subglacial hydrological systems. Although considerable effort is currently being directed towards improved modelling of the controlling surface and basal processes, modelling the temporal and spatial evolution of the transfer of melt to the bed has received less attention. Here we present the results of spatially distributed modelling for prediction of moulins and lake drainages on the Leverett Glacier in Southwest Greenland. The model is run for the 2009 and 2010 ablation seasons, and for future increased melt scenarios. The temporal pattern of modelled lake drainages is qualitatively comparable with those documented from analyses of repeat satellite imagery. The modelled timings and locations of delivery of meltwater to the bed also match well with observed temporal and spatial patterns of ice surface speed-ups. This is particularly true for the lower catchment (< 1000 m a.s.l.), where both the model and observations indicate that the development of moulins is the main mechanism for the transfer of surface meltwater to the bed. At higher elevations (e.g. 1250-1500 m a.s.l.) the development and drainage of supraglacial lakes becomes increasingly important. At these higher elevations, the delay between modelled melt generation and subsequent delivery of melt to the bed matches the observed delay between the peak air temperatures and subsequent velocity speed-ups, while the instantaneous transfer of melt to the bed in a control simulation does not.
Although both moulins and lake drainages are predicted to increase in number for future warmer climate scenarios, the lake drainages play an increasingly important role in both expanding the area over which melt accesses the bed and in enabling a greater proportion of surface melt to reach the bed.
Climate science is highly interdisciplinary by nature, so understanding interactions between Earth processes inherently warrants the use of analytical software that can operate across the disciplines of Earth science. Toward this end, we present the Climate Data Toolbox for MATLAB, which contains more than 100 functions that span the major climate-related disciplines of Earth science. The toolbox enables streamlined, entirely scriptable workflows that are intuitive to write and easy to share. Included are functions to evaluate uncertainty, perform matrix operations, calculate climate indices, and generate common data displays. Documentation is presented pedagogically, with thorough explanations of how each function works and tutorials showing how the toolbox can be used to replicate results of published studies. As a well-tested, well-documented platform for interdisciplinary collaborations, the Climate Data Toolbox for MATLAB aims to reduce time spent writing low-level code, let researchers focus on physics rather than coding, and encourage more efficacious code sharing.
Plain Language Summary: This article describes a collection of computer code that has recently been released to help scientists analyze many types of Earth science data. The code in this toolbox makes it easy to investigate things like global warming, El Niño, or other major climate-related processes such as how winds affect ocean circulation. Although the toolbox was designed to be used by expert climate scientists, its instruction manual is well written, and beginners may be able to learn a great deal about coding and Earth science simply by following along with the provided examples. The toolbox is intended to help scientists save time, help them ensure their analysis is accurate, and make it easy for other scientists to repeat the results of previous studies.
Through their relevance for sediment budgets and the sensitivity of geomorphic systems, geomorphic coupling and (sediment) connectivity represent important topics in geomorphology. Since the introduction of the systems perspective to physical geography by Chorley and Kennedy (1971), a catchment has been perceived as consisting of landscape elements (e.g. landforms, subcatchments) that are coupled by geomorphic processes through sediment transport. In this study, we present a novel application of mathematical graph theory to explore the network structure of coarse sediment pathways in a central alpine catchment. Numerical simulation models for rockfall, debris flows, and (hillslope and channel) fluvial processes are used to establish a spatially explicit graph model of sediment sources, pathways and sinks. The raster cells of a digital elevation model form the nodes of this graph, and simulated sediment trajectories represent the corresponding edges. Model results are validated by visual comparison with the field situation and aerial photos. The interaction of sediment pathways, i.e. where the deposits of a geomorphic process form the sources of another process, forms sediment cascades, represented by paths (a succession of edges) in the graph model. We show how this graph can be used to explore upslope (contributing area) and downslope (source to sink) functional connectivity by analysing its nodes, edges and paths. The analysis of the spatial distribution, composition and frequency of sediment cascades yields information on the relative importance of geomorphic processes and their interaction (though irrespective of their transport capacity). In the study area, the analysis stresses the importance of mass movements and their interaction, e.g. the linkage of large rockfall source areas to debris flows that potentially enter the channel network.
Moreover, it is shown that only a small percentage of the study area is coupled to the channel network which itself is longitudinally disconnected by natural and anthropogenic barriers. Besides the case study, we discuss the methodological framework and alternatives for node and edge representations of graph models in geomorphology. We conclude that graph theory provides an excellent methodological framework for the analysis of geomorphic systems, especially for the exploration of quantitative approaches towards sediment connectivity.
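The graph model described above — cells as nodes, process trajectories as directed edges, cascades as paths — can be sketched with a few toy nodes. Node names, edges, and process labels below are invented for illustration; in the study they are derived from raster-based process simulations.

```python
from collections import deque

# node -> list of (downslope node, transporting process); toy example only
edges = {
    "rockwall": [("talus", "rockfall")],
    "talus": [("fan", "debris_flow")],
    "fan": [("channel", "fluvial")],
    "meadow": [],  # a hillslope element not coupled to the channel network
}

def downslope(node):
    """All nodes reachable from `node`: downslope (source-to-sink)
    functional connectivity via breadth-first search."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt, _ in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def cascade(node):
    """Process sequence along one path from `node`: a sediment cascade,
    where the deposit of one process feeds the next."""
    chain = []
    while edges.get(node):
        nxt, proc = edges[node][0]
        chain.append(proc)
        node = nxt
    return chain

print(sorted(downslope("rockwall")))  # ['channel', 'fan', 'talus']
print(cascade("rockwall"))            # ['rockfall', 'debris_flow', 'fluvial']
```

Reversing the edges and running the same search gives upslope (contributing-area) connectivity, and counting how often each process sequence occurs across all source nodes yields the cascade-frequency statistics the study analyses.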
Applications of graph theory have proliferated across the academic spectrum in recent years. Whereas geosciences and landscape ecology have made rich use of graph theory, its use seems limited in physical geography, and particularly in geomorphology. Common applications of graph theory (analyses of connectivity, path or transport efficiencies, subnetworks, network structure, system behaviour and dynamics, and network optimization or engineering) all have uses or potential uses in geomorphology and closely related fields. In this paper, we give a short introduction to graph theory and review previous geomorphological applications or works in related fields that have been particularly influential. Network-like geomorphic systems can be classified into nonspatial or spatially implicit system components linked by statistical/causal relationships and spatial units linked by some spatial relationship, for example by fluxes of matter and/or energy. We argue that, if geomorphic system properties and behaviour (e.g., complexity, sensitivity, synchronisability, historical contingency, connectivity etc.) depend on system structure, and if graph theory is able to quantitatively describe the configuration of system components, then graph theory should provide us with tools that help in quantifying system properties and in inferring system behaviour.
Purpose: Dryland vegetation is expected to respond sensitively to climate change and the projected variability of rainfall events. Rainfall as a water source is an obvious factor for the water supply of vegetation. However, the interaction of water and surface on rocky desert slopes with a patchy soil cover is also vital for vegetation in drylands. In particular, runoff on rocky surfaces and the infiltration capacity of soil patches determine plant-available water. Process-based studies into rock-soil interaction benefit from rainfall simulation, but require an approach accounting for the micro-scale heterogeneity of the slope surfaces. This study therefore aims to develop a suitable procedure for examining rock-soil interaction and the relevance of soil volume for storing plant-available water in the northern Negev, Israel.
Materials and methods: To determine the amount of rainfall required to fill the available soil water storage capacity, rainfall simulation experiments were conducted. The design of the rainfall simulator and the selection of the plots aimed specifically at observing infiltration into small soil patches on a micro-scale relevant for the prevalent vegetation cover.
Results and discussion: The preliminary results of the study in the Negev Desert indicate that the ratio between soil volume and frequency of rainfall events determines the effect of climate change on plant-available water and thus ultimately vegetation cover.
Conclusions: Based on the experiments examining runoff and soil moisture, the qualitative understanding of hillslope ecohydrology in a rocky desert environment can be expanded into a quantitative assessment of the potential impact of varying rainfall conditions. The study also illustrates the contribution of rainfall simulation experiments to studies on the impact of climate change.
Biomass allometries and coarse root biomass distribution of mountain birch in southern Iceland
(2014)
Root systems are an important pool of biomass and carbon in forest ecosystems. However, most allometric studies on forest trees focus only on the aboveground components. When estimated, root biomass has most often been calculated by using a fixed conversion factor from aboveground biomass. In order to study the size-related development of the root system of native mountain birch (Betula pubescens Ehrh. ssp. czerepanovii), we collected the coarse root systems of 25 differently aged birch trees (stem diameter at 50 cm length between 0.2 and 14.1 cm) and characterized them by penetration depth (< 1 m) and root thickness. Based on this dataset, allometric functions for coarse roots (> 5 mm and > 2 mm), root stock, total belowground biomass and aboveground biomass components were calculated by a nonlinear and a linear fitting approach. The study showed that coarse root biomass of mountain birch was almost exclusively (> 95% by weight) located in the top 30 cm, even in a natural old-growth woodland. By using a cross-validation approach, we found that the nonlinear fitting procedure performed better than the linear approach with respect to predictive power. In addition, our results underscore that general assumptions of fixed conversion factors lead to an underestimation of the belowground biomass. Thus, our results provide allometric functions for a more accurate root biomass estimation to be utilized in inventory reports and ecological studies.
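The "linear fitting approach" contrasted above is usually a power law y = a·d^b fitted by ordinary least squares on log-transformed data. The sketch below illustrates only that generic procedure with invented numbers; it is not the study's fitted function or data.

```python
import math

def fit_power_law(d, y):
    """Fit y = a * d**b by OLS on the log-log transform:
    ln(y) = ln(a) + b*ln(d). Returns (a, b)."""
    lx = [math.log(v) for v in d]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (t - my) for x, t in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# synthetic "stem diameter (cm) -> root biomass (kg)" pairs lying exactly on
# a true power law y = 0.1 * d**2.3, recovered in this noise-free case
d = [0.5, 1, 2, 4, 8, 14]
y = [0.1 * v ** 2.3 for v in d]
a, b = fit_power_law(d, y)
print(round(a, 3), round(b, 3))  # 0.1 2.3
```

With noisy data, back-transforming from log space biases predictions low unless a correction factor is applied, which is one reason a direct nonlinear fit can show better predictive power under cross-validation, consistent with the study's result.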
Remote sensing technology serves as a powerful tool for analyzing geospatial characteristics of flood inundation events at various scales. However, the performance of remote sensing methods depends heavily on the flood characteristics and landscape settings. Difficulties might be encountered in mapping the extent of localized flooding with shallow water on riverine floodplain areas, where patches of herbaceous vegetation are interspersed with open water surfaces. To address the difficulties in mapping inundation on areas with complex water and vegetation compositions, a high spatial resolution dataset has to be used to reduce the problem of mixed pixels. The main objective of our study was to investigate the possibilities of using a single-date WorldView-2 image of very high spatial resolution and supporting data to analyze spatial patterns of localized flooding on a riverine floodplain. We used a decision tree algorithm with various combinations of input variables including spectral bands of the WorldView-2 image, selected spectral indices dedicated to mapping water surfaces and vegetation, and topographic data. The overall accuracies of the twelve flood extent maps derived with the decision tree method and performed on both pixels and image objects ranged between 77% and 95%. The highest mapping overall accuracy was achieved with a method that utilized all available input data and the object-based image analysis. Our study demonstrates the possibility of using single-date WorldView-2 data for analyzing flooding events at high spatial detail despite the absence of spectral bands from the shortwave infrared region that are frequently used in water-related studies. Our study also highlights the importance of topographic data in inundation analyses. The greatest difficulties were met in mapping water surfaces under dense-canopy herbaceous vegetation, due to limited water surface exposure and the dominance of vegetation reflectance.
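The kind of rule the trained decision tree learns can be sketched by hand: combine a spectral water index with topography. The NDWI formula is standard, but the thresholds, band values, class labels, and the two-level rule below are invented for illustration; the study trains on many more inputs and also on image objects rather than single pixels.

```python
def ndwi(green, nir):
    """Normalized Difference Water Index from green and NIR reflectance;
    open water is strongly NIR-absorbing, so NDWI > 0 suggests water."""
    return (green - nir) / (green + nir)

def classify_pixel(green, nir, elevation, flood_stage):
    """Hand-written two-level decision rule (illustrative thresholds):
    spectral test first, then a topographic fallback for water hidden
    under herbaceous canopy on low-lying ground."""
    if ndwi(green, nir) > 0.0:
        return "water"
    if elevation < flood_stage:
        return "flooded_vegetation"
    return "dry"

print(classify_pixel(green=0.30, nir=0.10, elevation=81.0, flood_stage=82.0))
print(classify_pixel(green=0.10, nir=0.40, elevation=81.0, flood_stage=82.0))
```

The topographic branch is why the abstract stresses elevation data: vegetation-covered flood water is spectrally "green", so a purely spectral rule misses it.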
Knowledge about the magnitude of localised flooding of riverine areas is crucial for appropriate land management and administration at regional and local levels. However, detection and delineation of localised flooding with remote sensing techniques are often hampered on floodplains by the presence of herbaceous vegetation. To address this problem, this study presents the application of full-waveform airborne laser scanning (ALS) data for detection of floodwater extent. In general, water surfaces are characterised by low values of backscattered energy due to water absorption of the infrared laser shots, but the exact strength of the recorded laser pulse depends on the area covered by the targets located within a laser pulse footprint area. To account for this we analysed the physical quantity of radiometrically calibrated ALS data, the backscattering coefficient, in relation to water and vegetation coverage within a single laser footprint. The results showed that the backscatter was negatively correlated with water coverage, and that of the three distinguished classes of water coverage (low, medium, and high) only the class with the largest extent of water cover (>70%) had relatively distinct characteristics that can be used for classification of water surfaces. Following the laser footprint analysis, three classifiers, namely AdaBoost with Decision Tree, Naive Bayes and Random Forest, were utilised to classify laser points into flooded and non-flooded classes and to derive the map of flooding extent. The performance of the classifiers is highly dependent on the set of laser point features used. The best performance was achieved by combining radiometric and geometric laser point features. The accuracy of flooding maps based solely on radiometric features resulted in overall accuracies of up to 70% and was limited due to the overlap of the backscattering coefficient values between water and other land cover classes.
Our point-based classification methods assure a high mapping accuracy (~89%) and demonstrate the potential of using full-waveform ALS data to detect water surfaces on floodplain areas with limited water surface exposure through the vegetation canopy.
Natural catchments are likely to show the existence of knickpoints in their river networks. The origin and genesis of the knickpoints can be manifold, considering that the present morphology is the result of the interactions of different factors such as tectonic movements, Quaternary glaciations, river captures, variable lithology, and base-level changes. We analyzed the longitudinal profiles of the river channels in the Stura di Demonte Valley (Maritime Alps) to identify the knickpoints of such an alpine setting and to characterize their origins. The distribution and the geometry of stream profiles were used to identify the possible causes of the changes in stream gradients and to define zones with genetically linked knickpoints. Knickpoints are key geomorphological features for reconstructing the evolution of fluvial dissected basins, when the different perturbing factors affecting the ideally graded fluvial system have been detected. This study shows that even in a regionally small area, perturbations of river profiles are caused by multiple factors. Thus, attributing (automatically) extracted knickpoints solely to one factor can potentially lead to incomplete interpretations of catchment evolution.
Roads at risk
(2015)
Globalisation and interregional exchange of people, goods, and services have boosted the importance of and reliance on all kinds of transport networks. The linear structure of road networks is especially sensitive to natural hazards. In southern Norway, steep topography and extreme weather events promote frequent traffic disruption caused by debris flows. Topographic susceptibility and trigger frequency maps serve as input into a hazard appraisal at the scale of first-order catchments to quantify the impact of debris flows on the road network in terms of a failure likelihood of each link connecting two network vertices, e.g. road junctions. We compute total additional traffic loads as a function of traffic volume and excess distance, i.e. the extra length of an alternative path connecting two previously disrupted network vertices using a shortest-path algorithm. Our risk metric of link failure is the total additional annual traffic load, expressed as vehicle kilometres, because of debris-flow-related road closures. We present two scenarios demonstrating the impact of debris flows on the road network and quantify the associated path-failure likelihood between major cities in southern Norway. The scenarios indicate that major routes crossing the central and north-western part of the study area are associated with high link-failure risk. Yet options for detours on major routes are manifold and incur only small additional costs provided that drivers are sufficiently well informed about road closures. Our risk estimates may be of importance to road network managers and transport companies relying on speedy delivery of services and goods.
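The excess-distance computation described above can be sketched as a shortest-path search on a weighted road graph, once with and once without the disrupted link. The sketch below uses a minimal Dijkstra implementation; the graph, link weights, and broken link are hypothetical, not data from the study.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over an adjacency dict
    {node: [(neighbour, weight), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def excess_distance(edges, a, b, broken):
    """Extra path length between a and b when the undirected link
    `broken` is closed, e.g. by a debris flow."""
    def build(skip):
        adj = {}
        for u, v, w in edges:
            if {u, v} == set(skip):
                continue  # this link is disrupted
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        return adj
    d_intact = dijkstra(build(()), a).get(b, float("inf"))
    d_broken = dijkstra(build(broken), a).get(b, float("inf"))
    return d_broken - d_intact
```

Multiplying the excess distance by annual traffic volume and by the link-failure likelihood yields an additional-traffic-load risk metric in vehicle kilometres, in the spirit of the study's metric.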
The Norwegian traffic network is impacted by about 2000 landslides, avalanches, and debris flows each year that incur high economic losses. Despite the urgent need to mitigate future losses, efforts to locate potential debris flow source areas have been rare at the regional scale. We tackle this research gap by exploring a minimal set of possible topographic predictors of debris flow initiation that we input to a Weights-of-Evidence (WofE) model for mapping the regional susceptibility to debris flows in western Norway. We use an inventory of 429 debris flows that were recorded between 1979 and 2008, and use the terrain variables of slope, total curvature, and contributing area (flow accumulation) to compute the posterior probabilities of local debris flow occurrence. The novelty of our approach is that we quantify the uncertainties in the WofE approach arising from different predictor classification schemes and data input, while estimating model accuracy and predictive performance from independent test data. Our results show that a percentile-based classification scheme excels over a manual classification of the predictor variables because differing abundances in manually defined bins reduce the reliability of the conditional independence tests, a key, and often neglected, prerequisite for the WofE method. The conditional dependence between total curvature and flow accumulation precludes their joint use in the model. Slope gradient has the highest true positive rate (88%), although the fraction of area classified as susceptible is very large (37%). The predictive performance, i.e. the reduction of false positives, is improved when combined with either total curvature or flow accumulation. 
Bootstrapping shows that the combination of slope and flow accumulation provides more reliable predictions than the combination of slope and total curvature, and helps refine the use of slope-area plots for identifying morphometric fingerprints of debris-flow source areas, an approach used outside the field of landslide susceptibility assessments.
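The Weights-of-Evidence posteriors rest on contrasting how often a predictor class (e.g. a slope bin) contains debris-flow cells versus stable cells. A minimal sketch of the class weights, with hypothetical cell counts:

```python
import math

def wofe_weights(n_class_event, n_class_stable, n_event, n_stable):
    """Weights-of-Evidence for one predictor class: the positive weight
    W+ measures how strongly the class is associated with debris-flow
    cells; W- does the same for cells outside the class. All counts used
    with this function below are illustrative, not the study's data."""
    p_b_d = n_class_event / n_event       # P(class | debris flow)
    p_b_nd = n_class_stable / n_stable    # P(class | no debris flow)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus
```

Posterior log-odds of occurrence are the prior log-odds plus the summed weights of all predictor classes a cell falls into, which is valid only under conditional independence of the predictors, the prerequisite the study tests.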
Increased rates of glacier retreat and thinning call for accurate local estimates of glacier elevation change to predict future changes in glacier runoff and their contribution to sea level rise. Glacier elevation change is typically derived from digital elevation models (DEMs) tied to surface change analysis from satellite imagery. Yet, the rugged topography in mountain regions can cast shadows onto glacier surfaces, making it difficult to detect local glacier elevation changes in remote areas. A rather untapped resource comprises precise, time-stamped metadata on the solar position and angle in satellite images. These data are useful for simulating shadows from a given DEM. Accordingly, any differences in shadow length between simulated and mapped shadows in satellite images could indicate a change in glacier elevation relative to the acquisition date of the DEM. We tested this hypothesis at five selected glaciers with long-term monitoring programmes. For each glacier, we projected cast shadows onto the glacier surface from freely available DEMs and compared simulated shadows to cast shadows mapped from ∼40 years of Landsat images. We validated the relative differences with geodetic measurements of glacier elevation change where these shadows occurred. We find that shadow-derived glacier elevation changes are consistent with independent photogrammetric and geodetic surveys in shaded areas. Accordingly, a shadow cast on Baltoro Glacier (the Karakoram, Pakistan) suggests no changes in elevation between 1987 and 2020, while shadows on Great Aletsch Glacier (Switzerland) point to thinning rates of about 1 m yr−1 in our sample. Our estimates of glacier elevation change are tied to occurrence of mountain shadows and may help complement field campaigns in regions that are difficult to access. This information can be vital to quantify possibly varying elevation-dependent changes in the accumulation or ablation zone of a given glacier.
Shadow-based retrieval of glacier elevation changes hinges on the precision of the DEM as the geometry of ridges and peaks constrains the shadow that we cast on the glacier surface. Future generations of DEMs with higher resolution and accuracy will improve our method, enriching the toolbox for tracking historical glacier mass balances from satellite and aerial images.
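Under an idealized flat-glacier geometry (a simplifying assumption, not the study's full DEM-based ray casting), the elevation change implied by a shadow-length mismatch follows from the solar elevation angle alone:

```python
import math

def elevation_change_from_shadow(sim_len_m, obs_len_m, sun_elev_deg):
    """Glacier elevation change implied by the mismatch between a shadow
    length simulated from a DEM and the length mapped in a satellite
    image. Assumes a flat glacier surface: with shadow length
    L = (h_ridge - h_glacier) / tan(alpha), a surface lowered by dh
    lengthens the shadow by dh / tan(alpha), so
    dh = -(L_obs - L_sim) * tan(alpha)."""
    alpha = math.radians(sun_elev_deg)
    return -(obs_len_m - sim_len_m) * math.tan(alpha)
```

For instance, a shadow 100 m longer than simulated under a 30° sun would imply roughly 58 m of surface lowering; the precision of this estimate hinges on the DEM's representation of the shadow-casting ridge, as noted above.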
In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students reveal benefits, such as better orientation in the study area, higher interactivity with the data, improved discourse among students and enhanced motivation through immersive 3D geovisualization. This suggests that immersive 3D visualization can effectively be used in higher education and that 3D CAVE settings enhance interactive learning between students.
Graph theory has long been used in quantitative geography and landscape ecology and has been applied in Earth and atmospheric sciences for several decades. Recently, however, there have been increased, and more sophisticated, applications of graph theory concepts and methods in geosciences, principally in three areas: spatially explicit modeling, small-world networks, and structural models of Earth surface systems. This paper reviews the contrasting goals and methods inherent in these approaches, but focuses on the common elements, to develop a synthetic view of graph theory in the geosciences. Techniques applied in geosciences are mainly of three types: connectivity measures of entire networks; metrics of various aspects of the importance or influence of particular nodes, links, or regions of the network; and indicators of system dynamics based on graph adjacency matrices. Geoscience applications of graph theory can be grouped in five general categories: (1) Quantification of complex network properties such as connectivity, centrality, and clustering; (2) Tests for evidence of particular types of structures that have implications for system behavior, such as small-world or scale-free networks; (3) Testing dynamical system properties, e.g., complexity, coherence, stability, synchronization, and vulnerability; (4) Identification of dynamics from historical records or time series; and (5) Spatial analysis. Recent and future expansion of graph theory in geosciences is related to general growth of network-based approaches. However, several factors make graph theory especially well suited to the geosciences: Inherent complexity, exploration of very large data sets, focus on spatial fluxes and interactions, and increasing attention to state transitions are all amenable to analysis using graph theory approaches.
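As a minimal illustration of the first category, whole-network connectivity and node-level centrality can be read directly off a graph's adjacency matrix:

```python
def network_metrics(adj):
    """Connectivity and centrality from an undirected graph's adjacency
    matrix (list of 0/1 rows): network density is the realized fraction
    of possible links; degree centrality is each node's degree
    normalized by the maximum possible degree."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    density = sum(degrees) / (n * (n - 1))   # sum(degrees) = 2 * edges
    centrality = [d / (n - 1) for d in degrees]
    return density, centrality
```

Indicators of system dynamics, the third type mentioned above, typically go further and analyze the spectrum of the same adjacency matrix.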
Drainage divide networks
(2020)
Drainage divides are organized into tree-like networks that may record information about drainage divide mobility. However, views diverge about how to best assess divide mobility. Here, we apply a new approach of automatically extracting and ordering drainage divide networks from digital elevation models to results from landscape evolution model experiments. We compared landscapes perturbed by strike-slip faulting and spatiotemporal variations in erodibility to a reference model to assess which topographic metrics (hillslope relief, flow distance, and chi) are diagnostic of divide mobility. Results show that divide segments that are a minimum distance of ~5 km from river confluences strive to attain constant values of hillslope relief and flow distance to the nearest stream. Disruptions of such patterns can be related to mobile divides that are lower than stable divides, closer to streams, and often asymmetric in shape. In general, we observe that drainage divides high up in the network, i.e., at great distances from river confluences, are more susceptible to disruptions than divides closer to these confluences and are thus more likely to record disturbance for a longer time period. We found that across-divide differences in hillslope relief proved more useful for assessing divide migration than other tested metrics. However, even stable drainage divide networks exhibit across-divide differences in any of the studied topographic metrics. Finally, we propose a new metric to quantify the connectivity of divide junctions.
We propose a novel way to measure and analyze networks of drainage divides from digital elevation models. We developed an algorithm that extracts drainage divides based on the drainage basin boundaries defined by a stream network. In contrast to streams, there is no straightforward approach to order and classify divides, although it is intuitive that some divides are more important than others. A meaningful way of ordering divides is the average distance one would have to travel down on either side of a divide to reach a common stream location. However, because measuring these distances is computationally expensive and prone to edge effects, we instead sort divide segments based on their tree-like network structure, starting from endpoints at river confluences. The sorted nature of the network allows for assigning distances to points along the divides, which can be shown to scale with the average distance downslope to the common stream location. Furthermore, because divide segments tend to have characteristic lengths, an ordering scheme in which divide orders increase by 1 at junctions mimics these distances. We applied our new algorithm to the Big Tujunga catchment in the San Gabriel Mountains of southern California and studied the morphology of the drainage divide network. Our results show that topographic metrics, like the downstream flow distance to a stream and hillslope relief, attain characteristic values that depend on the drainage area threshold used to derive the stream network. Portions along the divide network that have lower than average relief or are closer than average to streams are often distinctly asymmetric in shape, suggesting that these divides are unstable. Our new and automated approach thus helps to objectively extract and analyze divide networks from digital elevation models.
Geomorphic footprints of past large Himalayan earthquakes are elusive, although they are urgently needed for gauging and predicting recovery times of seismically perturbed mountain landscapes. We present evidence of catastrophic valley infill following at least three medieval earthquakes in the Nepal Himalaya. Radiocarbon dates from peat beds, plant macrofossils, and humic silts in fine-grained tributary sediments near Pokhara, Nepal’s second-largest city, match the timing of nearby M > 8 earthquakes in ~1100, 1255, and 1344 C.E. The upstream dip of tributary valley fills and x-ray fluorescence spectrometry of their provenance rule out local sources. Instead, geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation and debris flows that had plugged several tributaries with tens of meters of calcareous sediment from a Higher Himalayan source >60 kilometers away.
Digital flow networks derived from digital elevation models (DEMs) sensitively react to errors due to measurement, data processing and data representation. Since high-resolution DEMs are increasingly used in geomorphological and hydrological research, automated and semi-automated procedures to reduce the impact of such errors on flow networks are required. One such technique is stream-carving, a hydrological conditioning technique to ensure drainage connectivity in DEMs towards the DEM edges. Here we test and modify a state-of-the-art carving algorithm for flow network derivation in a low-relief, agricultural landscape characterized by a large number of spurious, topographic depressions. Our results show that the investigated algorithm reconstructs a benchmark network insufficiently in terms of carving energy, distance and a topological network measure. The modification to the algorithm that performed best, combines the least-cost auxiliary topography (LCAT) carving with a constrained breaching algorithm that explicitly takes automatically identified channel locations into account. We applied our methods to a low relief landscape, but the results can be transferred to flow network derivation of DEMs in moderate to mountainous relief in situations where the valley bottom is broad and flat and precise derivations of the flow networks are needed.
The assessment of uncertainty is a major challenge in geomorphometry. Methods to quantify uncertainty in digital elevation models (DEM) are needed to assess and report derivatives such as drainage basins. While Monte-Carlo (MC) techniques have been developed and employed to assess the variability of second-order derivatives of DEMs, their application requires explicit error modeling and numerous simulations to reliably calculate error bounds. Here, we develop an analytical model to quantify and visualize uncertainty in drainage basin delineation in DEMs. The model is based on the assumption that multiple flow directions (MFD) represent a discrete probability distribution of non-diverging flow networks. The Shannon Index quantifies the uncertainty of each cell to drain into a specific drainage basin outlet. In addition, error bounds for drainage areas can be derived. An application of the model shows that it identifies areas in a DEM where drainage basin delineation is highly uncertain owing to flow dispersion on convex landforms such as alluvial fans. The model allows for a quantitative assessment of the magnitudes of expected drainage area variability and delivers constraints for observed volatile hydrological behavior in a palaeoenvironmental record of lake level change. Since the model cannot account for all uncertainties in drainage basin delineation we conclude that a joint application with MC techniques is promising for an efficient and comprehensive error assessment in the future.
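The core of the analytical model described above is the Shannon entropy of a cell's MFD-derived probabilities of draining to each basin outlet, which can be sketched as:

```python
import math

def drainage_entropy(outlet_probs):
    """Shannon index of one cell's probabilities of draining to each
    basin outlet, as derived from multiple flow directions (MFD):
    0 means the cell drains to one outlet with certainty; log(k) is
    maximal uncertainty among k outlets."""
    return -sum(p * math.log(p) for p in outlet_probs if p > 0)
```

Mapping this per-cell entropy highlights convex landforms such as alluvial fans, where flow dispersion makes basin membership uncertain; summing each outlet's probabilities over all cells yields expected drainage areas with error bounds, consistent with the model's use above.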
Plain Language Summary The 2015 Gorkha earthquake in Nepal caused severe losses in the hydropower sector. The country temporarily lost ~20% of its hydropower capacity, and >30 hydropower projects were damaged. The projects hit hardest were those that were affected by earthquake-triggered landslides. We show that these projects are located along very steep rivers with towering sidewalls that are prone to become unstable during strong seismic ground shaking. A statistical classification based on a topographic metric that expresses river steepness and earthquake ground acceleration is able to approximately predict hydropower damage during future earthquakes, based on successful testing of past cases. Thus, our model enables us to estimate earthquake damages to hydropower projects in other parts of the Himalayas. We find that >10% of the Himalayan drainage network may be unsuitable for hydropower infrastructure given high probabilities of high earthquake damages.
Knickpoints in longitudinal river profiles are proxies for the climatic and tectonic history of active mountains. The analysis of river profiles commonly relies on the assumption that drainage network configurations are stable. Here, we show that this assumption must be made cautiously if changes in contributing area are fast relative to knickpoint migration rates. We studied the Parachute Creek basin in the Roan Plateau, Colorado, United States, where knickpoint retreat occurs in horizontally uniform lithology so that drainage area is the sole governing variable. In this basin, we identified an anomalous catchment in the degree to which a stream power-based model predicted knickpoint locations. The catchment is experiencing area loss as the plateau edge is eroded by cliff migration in proximity to the Colorado River. Model predictions improve if the plateau edge is assumed to have migrated over the time scale of knickpoint retreat. Finally, a Lagrangian model of knickpoint migration enabled us to study the kinematic links between drainage area loss and knickpoint migration and offered constraints on the temporal aspects of area loss. Modeled onset and amount of area loss are consistent with cliff retreat rates along the margin of the Roan Plateau inferred from the incisional history of the upper Colorado River.
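A Lagrangian knickpoint model of the kind described tracks the knickpoint position by integrating its celerity along the profile; under the detachment-limited stream power model with slope exponent n = 1, that celerity depends only on drainage area. A minimal sketch (all parameter values hypothetical):

```python
def migrate_knickpoint(areas, dx, K, m, t_total, dt):
    """Lagrangian knickpoint retreat under the stream power model with
    n = 1, where horizontal celerity is C = K * A(x)**m. `areas` holds
    drainage area at profile nodes spaced dx apart, ordered from the
    knickpoint's starting position upstream."""
    x, t = 0.0, 0.0
    while t < t_total:
        i = min(int(x // dx), len(areas) - 1)
        x += K * areas[i] ** m * dt  # step the knickpoint upstream
        t += dt
    return x
```

Drainage area loss, as in the anomalous catchment above, shrinks A(x) through time and slows the predicted retreat, which is the kinematic link between area loss and knickpoint migration that the study explores.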
TopoToolbox is a MATLAB program for the analysis of digital elevation models (DEMs). With the release of version 2, the software adopts an object-oriented programming (OOP) approach to work with gridded DEMs and derived data such as flow directions and stream networks. The introduction of a novel technique to store flow directions as topologically ordered vectors of indices enables calculation of flow-related attributes such as flow accumulation ~20 times faster than conventional algorithms while at the same time reducing memory overhead to 33% of that required by the previous version. Graphical user interfaces (GUIs) enable visual exploration and interaction with DEMs and derivatives and provide access to tools targeted at fluvial and tectonic geomorphologists. With its new release, TopoToolbox has become a more memory-efficient and faster tool for basic and advanced digital terrain analysis that can be used as a framework for building hydrological and geomorphological models in MATLAB.
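The topological-ordering idea generalizes beyond MATLAB: once flow directions are stored as an edge list in which every giver cell precedes any cell it drains to, flow accumulation reduces to a single sweep. A Python sketch of this principle (not TopoToolbox's actual implementation):

```python
def flow_accumulation(givers, receivers, n_cells, weights=None):
    """Flow accumulation in one pass over a topologically ordered edge
    list: because each giver appears before any cell it ultimately
    drains to, a single sweep propagates all upstream area downstream.
    This mirrors the vector-of-indices storage described for
    TopoToolbox 2."""
    acc = list(weights) if weights is not None else [1.0] * n_cells
    for g, r in zip(givers, receivers):
        acc[r] += acc[g]
    return acc
```

A linear chain 0 → 1 → 2 accumulates to [1, 2, 3], and a confluence of cells 0 and 1 into cell 2 yields [1, 1, 3]; the sweep is linear in the number of edges, which is what makes this representation so much faster than queue- or recursion-based algorithms.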
Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques
(2017)
The analysis of longitudinal river profiles is an important tool for studying landscape evolution. However, characterizing river profiles based on digital elevation models (DEMs) suffers from errors and artifacts that particularly prevail along valley bottoms. The aim of this study is to characterize uncertainties that arise from the analysis of river profiles derived from different, near-globally available DEMs. We devised new algorithms, quantile carving and the CRS algorithm, that rely on quantile regression to enable hydrological correction and the uncertainty quantification of river profiles. We find that globally available DEMs commonly overestimate river elevations in steep topography. The distributions of elevation errors become increasingly wider and right skewed if adjacent hillslope gradients are steep. Our analysis indicates that the AW3D DEM has the highest precision and lowest bias for the analysis of river profiles in mountainous topography. The new 12 m resolution TanDEM-X DEM has a very low precision, most likely due to the combined effect of steep valley walls and the presence of water surfaces in valley bottoms. Compared to the conventional approaches of carving and filling, we find that our new approach is able to reduce the elevation bias and errors in longitudinal river profiles.
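For context, the two conventional approaches mentioned above bracket the corrected profile: carving lowers cells and filling raises them, each forcing monotonicity in one biased direction. Quantile carving instead seeks a monotone profile minimizing an asymmetric (pinball) loss. The sketch below shows only the two baselines and the loss, not the CRS algorithm itself:

```python
def carve(z):
    """Conventional carving: walk the profile from head to outlet and
    lower cells to the running minimum, forcing non-increasing
    elevations (biased low)."""
    out, zmin = [], float("inf")
    for zi in z:
        zmin = min(zmin, zi)
        out.append(zmin)
    return out

def fill(z):
    """Conventional filling: walk upstream from the outlet and raise
    cells to the running maximum (biased high)."""
    out, zmax = [], float("-inf")
    for zi in reversed(z):
        zmax = max(zmax, zi)
        out.append(zmax)
    return out[::-1]

def pinball_loss(z, z_hat, tau):
    """Asymmetric loss minimized by quantile regression: residuals where
    the data lie above the fit are weighted by tau, the rest by
    1 - tau. Quantile carving minimizes this subject to monotonicity."""
    return sum(tau * (a - b) if a >= b else (1 - tau) * (b - a)
               for a, b in zip(z, z_hat))
```

Choosing tau near 0 reproduces carving-like behavior and tau near 1 filling-like behavior, which is how quantile-based correction reduces the one-sided elevation bias of either conventional approach.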
Himalayan water resources attract a rapidly growing number of hydroelectric power projects (HPP) to satisfy Asia's soaring energy demands. Yet HPP operating or planned in steep, glacier-fed mountain rivers face hazards of glacial lake outburst floods (GLOFs) that can damage hydropower infrastructure, alter water and sediment yields, and compromise livelihoods downstream. Detailed appraisals of such GLOF hazards are limited to case studies, however, and a more comprehensive, systematic analysis remains elusive. To this end we estimate the regional exposure of 257 Himalayan HPP to GLOFs, using a flood-wave propagation model fed by Monte Carlo-derived outburst volumes of >2300 glacial lakes. We interpret the spread of thus modeled peak discharges as a predictive uncertainty that arises mainly from outburst volumes and dam-breach rates that are difficult to assess before dams fail. With 66% of sampled HPP on potential GLOF tracks, up to one third of these HPP could experience GLOF discharges well above local design floods, as hydropower development continues to seek higher sites closer to glacial lakes. We compute that this systematic push of HPP into headwaters effectively doubles the uncertainty about GLOF peak discharge in these locations. Peak discharges farther downstream, in contrast, are easier to predict because GLOF waves attenuate rapidly. Considering this systematic pattern of regional GLOF exposure might aid the site selection of future Himalayan HPP. Our method can augment, and help to regularly update, current hazard assessments, given that global warming is likely changing the number and size of Himalayan meltwater lakes.
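The Monte Carlo treatment of outburst volumes can be sketched as repeated draws of an uncertain drained fraction fed through an empirical volume-discharge power law. The coefficients and the sampled fraction range below are placeholders, not the values used in the study:

```python
import random
import statistics

def glof_peak_percentiles(lake_volume_m3, n=10000, seed=42):
    """Monte Carlo spread of GLOF peak discharge for one lake: the
    fraction of the lake that drains is uncertain before a dam fails,
    so it is sampled, and each sampled volume V is fed through a power
    law Q = a * V**b. Coefficients a, b and the drained-fraction range
    are placeholder assumptions."""
    rng = random.Random(seed)
    a, b = 0.00077, 1.017  # placeholder volume-discharge coefficients
    q = []
    for _ in range(n):
        drained = rng.uniform(0.1, 1.0) * lake_volume_m3
        q.append(a * drained ** b)
    return statistics.quantiles(q, n=100)  # 99 percentile cut points
```

The spread of these percentiles is the predictive uncertainty that the study propagates downstream with its flood-wave model; because waves attenuate with distance, the spread narrows away from the lake, which is why headwater HPP sites carry the largest uncertainty.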
The evaluation and verification of landscape evolution models (LEMs) has long been limited by a lack of suitable observational data and statistical measures which can fully capture the complexity of landscape changes. This lack of data limits the use of objective-function-based evaluation that is prolific in other modelling fields, and restricts the application of sensitivity analyses in the models and the consequent assessment of model uncertainties. To overcome this deficiency, a novel model function approach has been developed, with each model function representing an aspect of model behaviour, which allows for the application of sensitivity analyses. The model function approach is used to assess the relative sensitivity of the CAESAR-Lisflood LEM to a set of model parameters by applying the Morris method sensitivity analysis for two contrasting catchments. The test revealed that the model was most sensitive to the choice of the sediment transport formula for both catchments, and that each parameter influenced model behaviours differently, with model functions relating to internal geomorphic changes responding in a different way to those relating to the sediment yields from the catchment outlet. The model functions proved useful for providing a way of evaluating the sensitivity of LEMs in the absence of data and methods for an objective function approach.
Channel steepness index, k_s, is a metric derived from the stream power model that, under certain conditions, scales with relative rock uplift rate. Channel steepness index is a property of rivers that can be extracted relatively easily from digital elevation models (DEMs). As DEM data sets are widely available for Earth and are becoming more readily available for other planetary bodies, channel steepness index represents a powerful tool for interpreting tectonic processes. However, multiple approaches to calculate channel steepness index exist. From this, several important questions arise: does the choice of approach change the values of channel steepness index, can values be so different that the choice of approach can influence the findings of a study, and are certain approaches better than others? With the aid of a synthetic river profile and a case study from the Sierra Nevada, California, we show that values of channel steepness index vary over orders of magnitude according to the methodology used in the calculation. We explore the limitations, advantages and disadvantages of the key approaches to calculating channel steepness index, and find that choosing an appropriate approach relies on the context of a study. Given these observations, it is important that authors acknowledge the methodology used to calculate channel steepness index, to ensure that results can be contextualised and reproduced.
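One common approach to the index follows from the stream-power scaling S = k_s · A^(−θ): fixing a reference concavity θ_ref gives a normalized steepness k_sn = S · A^(θ_ref) at each point. A minimal sketch on a synthetic profile, with all numbers (k_s = 120, θ = 0.45) chosen purely for illustration:

```python
import numpy as np

def channel_steepness(area, slope, theta_ref=0.45):
    """Estimate a reach-averaged normalized steepness index k_sn.

    Uses the stream-power scaling S = k_s * A**(-theta). With a fixed
    reference concavity theta_ref, each (A, S) pair gives
    k_sn = S * A**theta_ref; the reach value is their median.
    """
    return np.median(slope * area**theta_ref)

# Synthetic river obeying S = 120 * A**(-0.45) (illustrative values).
area = np.logspace(5, 9, 50)           # drainage area, m^2
slope = 120.0 * area**-0.45            # channel gradient
print(channel_steepness(area, slope))  # recovers ~120 when theta_ref matches
```

Re-running with a different `theta_ref` changes the recovered value substantially, which is one concrete way the methodological choices discussed in this paper can shift results.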
Mountain rivers respond to strong earthquakes by rapidly aggrading to accommodate excess sediment delivered by co-seismic landslides. Detailed sediment budgets indicate that rivers need several years to decades to recover from seismic disturbances, depending on how recovery is defined. We examine three principal proxies of river recovery after earthquake-induced sediment pulses around Pokhara, Nepal's second largest city. Freshly exhumed cohorts of floodplain trees in growth position indicate rapid and pulsed sedimentation that formed a fan covering 150 km² in a Lesser Himalayan basin with tens of metres of debris between the 11th and 15th centuries AD. Radiocarbon dates of buried trees are consistent with those of nearby valley deposits linked to major medieval earthquakes, such that we can estimate average rates of re-incision since. We combine high-resolution digital elevation data, geodetic field surveys, aerial photos, and dated tree trunks to reconstruct geomorphic marker surfaces. The volumes of sediment relative to these surfaces require average net sediment yields of up to 4200 t km⁻² yr⁻¹ for the 650 years since the last inferred earthquake-triggered sediment pulse. The lithological composition of channel bedload differs from that of local bedrock, confirming that rivers are still mostly evacuating medieval valley fills, locally incising at rates of up to 0.2 m yr⁻¹. Pronounced knickpoints and epigenetic gorges at tributary junctions further illustrate the protracted fluvial response; only the distal portions of the earthquake-derived sediment wedges have been cut to near their base. Our results challenge the notion that mountain rivers recover speedily from earthquakes within years to decades. The valley fills around Pokhara show that even highly erosive Himalayan rivers may need more than several centuries to adjust to catastrophic perturbations. Our results motivate some rethinking of post-seismic hazard appraisals and infrastructural planning in active mountain regions.
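A net sediment yield of the kind quoted above follows from simple mass-balance arithmetic: eroded mass divided by contributing area and elapsed time. The sketch below uses illustrative round numbers, not the paper's measured volumes, densities, or areas:

```python
def net_sediment_yield(volume_m3, bulk_density_t_per_m3, area_km2, years):
    """Average net sediment yield in t km^-2 yr^-1.

    yield = (eroded volume * bulk density) / (contributing area * time).
    """
    return volume_m3 * bulk_density_t_per_m3 / (area_km2 * years)

# Illustrative numbers only: 1.0 km^3 of re-incised fill, bulk density
# 2.0 t/m^3, 700 km^2 contributing area, 650 years since the pulse.
print(net_sediment_yield(1.0e9, 2.0, 700.0, 650.0))
```

With these placeholder inputs the yield comes out in the same order of magnitude as the paper's figure, which is the point of the exercise: multi-thousand t km⁻² yr⁻¹ yields imply cubic kilometres of fill evacuated over centuries.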
A dataset of 2184 field measurements reported in the literature was used to evaluate the predictive capability of eight conventional flow resistance equations to predict the mean flow velocity in gravel-bed rivers. The results reveal considerable disagreement with the observed flow velocities for relative submergence less than 4 and for non-uniformity of the bed material greater than 7.5 for all the equations. However, the predictions made using the Smart and Jaggi (1983), Ferguson (2007), and Rickenmann and Recking (2011) equations were closer to the observed values. Furthermore, bedload sediment transport also reduces the predictive capability of the equations considered in this study, except for the Recking et al. (2008) equation, which was developed considering active bedload transport. The performance of flow resistance equations improves when corrected by considering the geometric standard deviation of the bed material. Here we present an empirical approach using the whole dataset and its subsets for accounting for the additional energy losses occurring due to the wake vortices, spill losses, and free surface instabilities caused by protrusions from the bed. The results obtained using the validation dataset show the importance and usefulness of this approach to account for the additional energy losses, especially for the Strickler (1923) and Keulegan (1938) equations.
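For context, one of the better-performing equations in this comparison, Ferguson's (2007) variable-power equation, can be sketched as follows; the coefficient values a1 = 6.5 and a2 = 2.5 are the commonly cited ones, and the example inputs are hypothetical:

```python
import math

def ferguson_2007_velocity(depth, slope, d84, a1=6.5, a2=2.5, g=9.81):
    """Mean flow velocity (m/s) from Ferguson's (2007) variable-power
    flow-resistance equation (sketch, with commonly cited coefficients):

        U = u* * a1*a2*(d/D84) / sqrt(a1**2 + a2**2 * (d/D84)**(5/3)),

    where u* = sqrt(g * d * S) is the shear velocity, d the flow depth,
    S the slope, and D84 the 84th-percentile grain size.
    """
    u_star = math.sqrt(g * depth * slope)
    rel_sub = depth / d84  # relative submergence d/D84
    return u_star * a1 * a2 * rel_sub / math.sqrt(a1**2 + a2**2 * rel_sub**(5 / 3))

# Hypothetical shallow gravel-bed reach: 0.5 m deep, 1% slope, D84 = 0.2 m.
print(f"{ferguson_2007_velocity(0.5, 0.01, 0.2):.2f} m/s")
```

The equation interpolates between Manning–Strickler-like behaviour at deep flow and a steeper roughness response at low relative submergence, which is exactly the regime (d/D84 < 4) where this paper finds most equations struggle.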
Badlands have long been considered as model landscapes due to their perceived close relationship between form and process. The often intense features of erosion have also attracted many geomorphologists because the associated high rates of erosion appeared to offer the opportunity for studying surface processes and the resulting forms. Recently, the perceived simplicity of badlands has been questioned because the expected relationships between the driving forces for erosion and the resulting sediment yield could not be observed. Further, a high variability in erosion and sediment yield has been observed across scales. Finally, denudation based on currently observed erosion rates would have led to the destruction of most badlands a long time ago. While the perceived simplicity of badlands has sparked a disproportionate (compared to the land surface they cover) amount of research, our increasing amount of information has not necessarily increased our understanding of badlands in equal measure. Overall, badlands appear to be more complex than initially assumed. In this paper, we review 40 years of research in the Zin Valley Badlands in Israel to reconcile some of the conflicting results observed there and develop a perspective on the function of badlands as model landscapes. While the data collected in the Zin Valley clearly confirm that spatial and temporal patterns of geomorphic processes and their interaction with topography and surface properties have to be understood, we still conclude that the process of realizing complexity in the "simple" badlands has a model function, both for our understanding of and our perspective on all landscape systems.