In general, a moderate drying trend has been observed in mid-latitude arid Central Asia since the Mid-Holocene, attributed to the progressively weakening influence of the mid-latitude Westerlies on regional climate. However, as the spatio-temporal pattern of this development and the underlying climatic mechanisms are not yet fully understood, new high-resolution paleoclimate records from this region are needed. In this study, a sediment core from Lake Son Kol (Central Kyrgyzstan) was investigated using sedimentological, (bio)geochemical, isotopic, and palynological analyses, with the aim of reconstructing regional climate development during the last 6000 years. Biogeochemical data, mainly reflecting summer moisture conditions, indicate predominantly wet conditions until 4950 cal. yr BP, succeeded by a pronounced dry interval between 4950 and 3900 cal. yr BP. Thereafter, a return to wet conditions is observed, followed by a moderate drying trend that persists to the present. This is consistent with other regional paleoclimate records and likely reflects the gradual Late Holocene diminishment of the summer moisture provided by the mid-latitude Westerlies. However, the climatic impact of the Westerlies was apparently not restricted to the summer season but was also significant in winter, as indicated by recurrent episodes of enhanced allochthonous input through snowmelt, occurring before 6000 cal. yr BP and at 5100-4350, 3450-2850, and 1900-1500 cal. yr BP. The distinct ~1500-year periodicity of these episodes of increased winter precipitation in Central Kyrgyzstan resembles cyclicities observed in paleoclimate records around the North Atlantic, likely indicating a hemispheric-scale climatic teleconnection and an impact of North Atlantic Oscillation (NAO) variability in Central Asia.
Regional snow-avalanche detection using object-based image analysis of near-infrared aerial imagery
(2017)
Snow avalanches are destructive mass movements in mountain regions that continue to claim lives and cause infrastructural damage and traffic detours. Given that avalanches often occur in remote and poorly accessible steep terrain, their detection and mapping is laborious and time-consuming. Nonetheless, systematic avalanche detection over large areas could help to generate more complete and up-to-date inventories (cadastres) necessary for validating avalanche forecasting and hazard mapping. In this study, we focused on automatically detecting avalanches and classifying them into release zones, tracks, and run-out zones based on 0.25 m near-infrared (NIR) ADS80-SH92 aerial imagery using an object-based image analysis (OBIA) approach. Our algorithm takes into account the brightness, the normalised difference vegetation index (NDVI), the normalised difference water index (NDWI), and its standard deviation (SDNDWI) to distinguish avalanches from other land-surface elements. Using normalised parameters allows the method to be applied across large areas. We trained the method by analysing the properties of snow avalanches in three 4 km² areas near Davos, Switzerland. We compared the results with manually mapped avalanche polygons and obtained a user's accuracy of > 0.9 and a Cohen's kappa of 0.79–0.85. Testing the method on a larger area of 226.3 km², we estimated producer's and user's accuracies of 0.61 and 0.78, respectively, with a Cohen's kappa of 0.67. Detected avalanches that overlapped with reference data by > 80 % occurred randomly throughout the testing area, showing that our method avoids overfitting. Our method has potential for large-scale avalanche mapping, although further investigations in other regions are desirable to verify the robustness of the selected thresholds and the transferability of the method.
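The index computations behind such an OBIA classification can be sketched in a few lines. The NDVI and NDWI formulas below are the standard band ratios; the [0, 1] reflectance scaling and the 5-pixel window for SDNDWI are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def avalanche_indices(nir, red, green, window=5):
    """Per-pixel indices used to separate avalanche snow from other
    land cover: NDVI, NDWI, and the local standard deviation of NDWI
    (SDNDWI). Inputs are reflectance arrays scaled to [0, 1]."""
    eps = 1e-12  # guard against division by zero on dark pixels
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    # SDNDWI: standard deviation of NDWI in a sliding window,
    # with edge padding so the output keeps the input shape.
    pad = window // 2
    padded = np.pad(ndwi, pad, mode="edge")
    windows = sliding_window_view(padded, (window, window))
    sdndwi = windows.std(axis=(-2, -1))
    return ndvi, ndwi, sdndwi
```

In an OBIA workflow these per-pixel layers would then be segmented into objects and thresholded, rather than classified pixel by pixel.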
High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days per decade over the 29-year study period (5-25 days in total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA.
Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes than trends in snowmelt end. (2) Areas with long snowmelt periods, such as the Tibetan Plateau, show the strongest compression of the snowmelt season (negative trends). These trends are apparent regardless of the time period over which the regression is performed. (3) While trends averaged over 3 decades indicate generally earlier snowmelt seasons, data from the last 14 years (2002-2016) exhibit positive trends in many regions, such as parts of the Pamir and Kunlun Shan. Due to the short nature of the time series, it is not clear whether this change is a reversal of a long-term trend or simply interannual variability. (4) Some regions with stable or growing glaciers - such as the Karakoram and Kunlun Shan - see slightly later snowmelt seasons and longer snowmelt periods. It is likely that changes in the snowmelt regime of HMA account for some of the observed heterogeneity in glacier response to climate change. While the decadal increases in regional temperature have in general led to earlier and shortened melt seasons, changes in HMA's cryosphere have been spatially and temporally heterogeneous.
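As a rough illustration of how melt-season timing could be extracted from a daily passive-microwave melt signal: take the day of maximum signal separation as the season onset and the first subsequent drop below a clearance threshold as snow clearance. The signal units and the threshold are hypothetical, and this is a simplification of the paper's algorithm:

```python
import numpy as np

def melt_season(signal, clearance_thresh):
    """Toy extraction of melt-season timing from a daily melt signal.
    Returns (onset, end, length) in days from the start of the record:
    onset is the day of maximum signal separation, end the first day
    after onset on which the signal falls below the clearance
    threshold (taken here as snow clearance)."""
    signal = np.asarray(signal, dtype=float)
    onset = int(np.argmax(signal))
    below = np.nonzero(signal[onset:] < clearance_thresh)[0]
    if below.size == 0:
        return onset, None, None  # no clearance within the record
    end = onset + int(below[0])
    return onset, end, end - onset
```

Trends in onset, end, and length would then be computed per pixel (or per cluster of self-similar pixels) over the multi-year stack of such dates.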
High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverses for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. With these, we obtain quantiles of rainfall intensities that continue to rise with temperature.
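The L-moment fit of the GPD can be written down compactly. The probability-weighted-moment estimators below follow Hosking's parameterisation with location zero; treating the data as GPD-distributed intensities is, of course, the modelling assumption of the paper, and this sketch is not the authors' code:

```python
import numpy as np

def gpd_lmoment_fit(x):
    """Fit a generalized Pareto distribution (location 0, Hosking's
    parameterisation) to positive data via the first two sample
    L-moments, estimated from probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()                                # PWM beta_0
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n  # PWM beta_1
    l1, l2 = b0, 2.0 * b1 - b0                   # L-moments lambda_1, lambda_2
    k = l1 / l2 - 2.0                            # shape
    sigma = l1 * (1.0 + k)                       # scale
    return k, sigma

def gpd_quantile(F, k, sigma):
    """GPD quantile for non-exceedance probability F."""
    return sigma / k * (1.0 - (1.0 - F) ** k)
```

Unlike a plotting-position estimate, `gpd_quantile` can return values beyond the sample maximum, which is exactly why the parametric estimate escapes the small-sample ceiling described above.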
Mentor Texts
(2017)
The characteristics of a landscape are essential factors for hydrological processes. An adequate representation of a catchment's landscape in hydrological models is therefore vital. However, many such models exist, differing among other things in their spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms and numerous software implementations exist. Existing solutions, however, are often model-specific, commercial, or dependent on commercial back-end software, and allow only limited workflow automation or none at all.
Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation geared towards large-scale application via a hierarchical multi-scale approach. The package addresses the existing limitations as it is free and open source, easily extensible to other hydrological models, and its workflow can be fully automated. Moreover, it is user-friendly, as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation.
In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, the parameters determining the sizes of subbasins and hillslopes proved more important than the others, namely the number of representative hillslopes, the number of attributes employed in the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.
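The automated part of such a sensitivity analysis amounts to sweeping a grid of discretisation parameters and collecting a streamflow metric per run. A minimal sketch (in Python rather than R, with a hypothetical `run_model`/`metric` interface standing in for actual lumpR-discretised model runs):

```python
import itertools

def sensitivity_sweep(run_model, param_grid, metric):
    """Evaluate a model for every combination of discretisation
    parameters and collect one performance metric per run.
    run_model and metric are user-supplied callables; this interface
    is illustrative, not part of lumpR."""
    names = sorted(param_grid)
    results = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        sim = run_model(**params)   # one discretisation + model run
        results.append((params, metric(sim)))
    return results
```

Comparing the spread of the metric across each parameter's levels then reveals which discretisation choices the model is sensitive to.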
In 2009, a group of prominent Earth scientists introduced the "planetary boundaries" (PB) framework: they suggested nine global control variables, and defined corresponding "thresholds which, if crossed, could generate unacceptable environmental change". The concept builds on systems theory, and views Earth as a complex adaptive system in which anthropogenic disturbances may trigger non-linear, abrupt, and irreversible changes at the global scale, and "push the Earth system outside the stable environmental state of the Holocene". While the idea has been remarkably successful in both science and policy circles, it has also raised fundamental concerns, as the majority of suggested processes and their corresponding planetary boundaries do not operate at the global scale, and thus apparently lack the potential to trigger abrupt planetary changes.
This paper picks up the debate with specific regard to the planetary boundary on "global freshwater use". While the bio-physical impacts of excessive water consumption are typically confined to the river basin scale, the PB proponents argue that water-induced environmental disasters could build up to planetary-scale feedbacks and system failures. So far, however, no evidence has been presented to corroborate that hypothesis. Furthermore, no coherent approach has been presented to what extent a planetary threshold value could reflect the risk of regional environmental disaster. To be sure, the PB framework was revised in 2015, extending the planetary freshwater boundary with a set of basin-level boundaries inferred from environmental water flow assumptions. Yet, no new evidence was presented, either with respect to the ability of those basin-level boundaries to reflect the risk of regional regime shifts or with respect to a potential mechanism linking river basins to the planetary scale.
So while the idea of a planetary boundary on freshwater use appears intriguing, the line of arguments presented so far remains speculative and implicatory. As long as Earth system science does not present compelling evidence, the exercise of assigning actual numbers to such a boundary is arbitrary, premature, and misleading. Taken as a basis for water-related policy and management decisions, though, the idea transforms from misleading to dangerous, as it implies that we can globally offset water-related environmental impacts. A planetary boundary on freshwater use should thus be disapproved and actively refuted by the hydrological and water resources community.
Natural and potentially hazardous events occur on the Earth’s surface every day. The most destructive of these processes must be monitored, because they may cause loss of lives, infrastructure, and natural resources, or have a negative effect on the environment. A variety of remote sensing technologies allow the recording of data to detect these processes in the first place, partly based on the diagnostic landforms that they form. To perform this effectively, automatic methods are desirable.
Universal detection of natural hazards is challenging due to differences in their spatial impacts, in the timing and longevity of their consequences, and in the spatial resolution of remote-sensing data. Previous studies have reported that topographic metrics such as roughness, which can be derived from digital elevation data, can reveal landforms diagnostic of natural hazards, such as gullies, dunes, lava fields, landslides and snow avalanches, as these landforms tend to be more heterogeneous than the surrounding landscape. A single roughness metric is, however, often insufficient for such detection; a more complex approach that exploits the spatial relations and locations of objects, such as object-based image analysis (OBIA), is desirable.
In this thesis, I propose a topographic roughness measure derived from an airborne laser scanning (ALS) digital terrain model (DTM) and discuss its performance in detecting landforms principally diagnostic of natural hazards. I further develop OBIA-based algorithms for the detection of snow avalanches using near-infrared (NIR) aerial images, and of the size (and size changes) of mountain lakes using LANDSAT satellite images. I quantitatively test and document how the level of difficulty in detecting these very challenging landforms depends on the resolution of the input data, the derivatives that can be computed from images and DTMs, the size, shape and complexity of the landforms, and the information content of the data. I demonstrate that surface roughness is a promising metric for detecting different landforms in diverse environments, and that OBIA assists significantly in detecting parts of lakes and snow avalanches that may not be correctly assigned by thresholding the spectral properties of the data and their derivatives alone.
The curvature-based surface roughness parameter allows the detection of gullies, dunes, lava fields and landslides with user's accuracies of 0.63, 0.21, 0.53, and 0.45, respectively. The OBIA algorithms for detecting lakes and snow avalanches achieved user's accuracies of 0.98 and 0.78, respectively. Most of the analysed landforms constituted only a small part of the entire dataset; the user's accuracy is therefore the most appropriate performance measure for such a classification, because it indicates how many automatically extracted pixels actually represent the object of interest, and its calculation does not take the second (background) class into account. One advantage of the proposed roughness parameter is that it allows the heterogeneity of the surface to be extracted without the need for data detrending. The OBIA approach is novel in that it allows the classification of lakes regardless of the physical state of their water, and also allows frozen lakes to be separated from glaciers, which have very similar values of the water indices used in purely optical remote sensing applications. The algorithm proposed for snow avalanches detects release zones, tracks, and deposition zones by verifying the snow heterogeneity based on a roughness metric evaluated from a water index, and by analysing the local relations of segments with their neighbouring objects. The algorithm comprises few steps, which allows the simultaneous classification of avalanches that occur on diverse mountain slopes and differ in size and shape.
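A minimal sketch of a curvature-based roughness measure follows the same idea: because curvature already removes the regional slope, its local variability can be used directly, without detrending the elevation data. The Laplacian as a total-curvature proxy and the 5-pixel window are illustrative choices, not the thesis's exact definition:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def curvature_roughness(dtm, cellsize=1.0, window=5):
    """Toy curvature-based roughness: the DTM's Laplacian (a simple
    total-curvature proxy) aggregated as its local standard deviation
    in a sliding window. A planar (tilted but smooth) surface yields
    zero roughness, so no prior detrending is needed."""
    z = np.asarray(dtm, dtype=float)
    gy, gx = np.gradient(z, cellsize)      # first derivatives
    lap = (np.gradient(gy, cellsize, axis=0) +
           np.gradient(gx, cellsize, axis=1))
    pad = window // 2
    padded = np.pad(lap, pad, mode="edge")
    windows = sliding_window_view(padded, (window, window))
    return windows.std(axis=(-2, -1))
```

High values of this layer would then flag heterogeneous patches (e.g. gullied or landslide-affected terrain) against a smooth background.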
This thesis contributes to natural hazard research as it provides automatic solutions to tracking six different landforms that are diagnostic of natural hazards over large regions. This is a step toward delineating areas susceptible to the processes producing these landforms and the improvement of hazard maps.
The onset times of seismic P phases of earthquakes are determined automatically, by default and in real time, in SeisComP3 and other analysis programs. S phases, by contrast, pose a far greater challenge. Only with accurate P- and S-phase picks can earthquake locations be determined correctly and stably, so there is considerable interest in determining them with high accuracy. The aim of this Bachelor's thesis was to optimally configure four existing S-phase pickers with respect to selected parameters, apply them to test data, and objectively evaluate their performance. To this end, one S picker (S-L2) from the open-source SeisComP3 package, two S pickers (S-AIC, S-AIC-V) from a commercial SeisComP3 module by gempa GmbH, and one S picker (frequency band) from the open-source PhasePaPy package were selected. The evaluation was carried out by comparing automatic picks with manually determined onset times. All four pickers were configured separately and applied to three different datasets of earthquakes in northern Chile and in the Vogtland, Germany. For this purpose, regionally and locally typical earthquakes were selected at random and their P and S phases picked manually. The same data were then scanned with the S-picker algorithms under test, and the picks were determined automatically. The picker configurations were simultaneously optimized in an automatic and objective manner through iterative adjustment. A newly developed evaluation system compares the manual and the automatically found S picks on the basis of defined quality factors: the mean and standard deviation of the time differences between the S picks, the number of matching S picks, the percentage of possible S picks, and the required computation time. The objective evaluation was based on a score.
The score is the weighted sum of the following normalized quality factors: standard deviation (20%), mean (20%), and percentage of possible S picks (60%). Configurations with a high score are preferred. The preferred configurations of the different pickers were compared with one another to determine the most suitable S-picker algorithm. Overall, the S-AIC picker delivered the highest scores, and thus the best results, for each of the three datasets. However, for each dataset a different configuration of the S-AIC picker's parameters turned out to be the most suitable. A different configuration is therefore required for each earthquake region to obtain optimal results with this S picker.
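The scoring rule can be written down directly. The weights follow the abstract; the min-max normalisation across configurations and the inversion of the two error terms (lower standard deviation and smaller mean difference are better) are assumptions about details not stated here:

```python
import numpy as np

def picker_scores(std_dev, mean_diff, pct_picks, weights=(0.2, 0.2, 0.6)):
    """Score competing picker configurations as a weighted sum of
    normalised quality factors: standard deviation (20%), mean time
    difference (20%), percentage of possible S picks (60%).
    Higher scores are better. The normalisation scheme is an
    assumption, not the thesis's exact formula."""
    def norm(x, higher_is_better):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        if span == 0:
            return np.ones_like(x)        # all configurations tie
        u = (x - x.min()) / span          # min-max to [0, 1]
        return u if higher_is_better else 1.0 - u
    w_std, w_mean, w_pct = weights
    return (w_std * norm(std_dev, False) +
            w_mean * norm(np.abs(np.asarray(mean_diff, float)), False) +
            w_pct * norm(pct_picks, True))
```

The configuration with the maximum score is then the preferred one for that dataset.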
Water infiltration in soil is not only affected by the inherent heterogeneities of soil, but even more by the interaction with plant roots and their water uptake. Neutron tomography is a unique non-invasive 3D tool to visualize plant root systems together with the soil water distribution in situ. So far, acquisition times in the range of hours have been the major limitation for imaging 3D water dynamics. Implementing an alternative acquisition procedure we boosted the speed of acquisition capturing an entire tomogram within 10 s. This allows, for the first time, tracking of a water front ascending in a rooted soil column upon infiltration of deuterated water time-resolved in 3D. Image quality and resolution could be sustained to a level allowing for capturing the root system in high detail. Good signal-to-noise ratio and contrast were the key to visualize dynamic changes in water content and to localize the root uptake. We demonstrated the ability of ultra-fast tomography to quantitatively image quick changes of water content in the rhizosphere and outlined the value of such imaging data for 3D water uptake modelling. The presented method paves the way for time-resolved studies of various 3D flow and transport phenomena in porous systems.
Detection and Kirchhoff-type migration of seismic events by use of a new characteristic function
(2017)
The classical method of seismic event localization is based on the picking of body wave arrivals, ray tracing, and inversion of travel time data. Travel time picks with small uncertainties are required to produce reliable and accurate results with this kind of source localization. Hence, recordings with a low signal-to-noise ratio (SNR) cannot be used in a travel-time-based inversion. Low SNR can be related to weak signals from distant and/or low-magnitude sources as well as to a high level of ambient noise. Diffraction stacking is an alternative seismic event localization method that also enables the processing of low-SNR recordings by means of stacking the amplitudes of seismograms along a travel time function. The location of a seismic event and its origin time are determined from the highest stacked amplitudes (coherency) of the image function. The method lends itself to automatic processing since it does not need travel time picks as input data.
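The stacking step described above can be illustrated with a minimal sketch: for each trial source location, the absolute seismogram amplitudes are shifted by the predicted travel times and summed, and the trial point and origin-time sample with the highest stack give the location and origin time. The array shapes, names, and geometry here are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def diffraction_stack(seismograms, dt, travel_times):
    """Minimal diffraction-stacking sketch (assumed interface).
    seismograms:  (n_stations, n_samples) array of waveform data
    dt:           sampling interval in seconds
    travel_times: (n_candidates, n_stations) predicted travel times [s]
    Returns the image function, shape (n_candidates, n_samples),
    where axis 1 is the trial origin time."""
    n_sta, n_samp = seismograms.shape
    n_cand = travel_times.shape[0]
    image = np.zeros((n_cand, n_samp))
    for c in range(n_cand):
        for s in range(n_sta):
            shift = int(round(travel_times[c, s] / dt))
            # align the trace to its predicted arrival (origin-time axis)
            shifted = np.roll(np.abs(seismograms[s]), -shift)
            shifted[n_samp - shift:] = 0.0  # discard wrapped-around samples
            image[c] += shifted
    return image

# location index and origin-time sample of maximum coherency:
# c_best, t0_best = np.unravel_index(np.argmax(image), image.shape)
```

With a correct velocity model, the spikes from all stations align at the true candidate and origin time, so coherent energy stacks constructively while noise does not — which is why the method still works at low SNR.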
However, applying diffraction stacking may require long computation times if only limited computer resources are available. Furthermore, a simple diffraction stacking of recorded amplitudes may fail to locate seismic sources if the focal mechanism leads to complex radiation patterns, which is typically the case for both natural and induced seismicity.
In my PhD project, I have developed a new workflow for the localization of seismic events which is based on a diffraction stacking approach. A parallelized code was implemented for the calculation of travel time tables and for the determination of an image function in order to reduce computation time. To address the effects of complex source radiation patterns, I also suggest computing the diffraction stack from a characteristic function (CF) instead of stacking the original waveform data. A new CF, in the following called mAIC (modified from the Akaike Information Criterion), is proposed. I demonstrate that the performance of the mAIC does not depend on the chosen length of the analyzed time window and that both P- and S-wave onsets can be detected accurately. To avoid cross-talk between P- and S-waves due to inaccurate velocity models, I separate the P- and S-waves from the mAIC function by making use of polarization attributes. The final image function is then represented by the largest eigenvalue resulting from the covariance analysis of the P- and S-image functions. Before applying diffraction stacking, I also denoise the seismograms using Otsu thresholding in the time-frequency domain.
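The classic AIC onset picker that the mAIC modifies can be sketched as follows; this is the standard two-segment variance formulation (the mAIC itself is not reproduced here, and the test signal is synthetic). The AIC minimum marks the most likely phase onset.

```python
import numpy as np

def aic_picker(trace):
    """Standard AIC onset picker sketch: the trace is split at every
    sample k into two segments, and the AIC of the two-variance model
    is evaluated; its minimum marks the most likely onset."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        var1 = np.var(x[:k])   # variance before the candidate onset
        var2 = np.var(x[k:])   # variance after the candidate onset
        # guard against log(0) on near-constant segments
        aic[k] = (k * np.log(max(var1, 1e-12))
                  + (n - k - 1) * np.log(max(var2, 1e-12)))
    return int(np.argmin(aic))
```

A drawback of the plain AIC picker, which the abstract's window-length remark alludes to, is that its minimum depends on where the analysis window is placed relative to the onset; the mAIC is designed to remove that dependence.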
Results from synthetic experiments show that the proposed diffraction stacking provides reliable results even for seismograms with an SNR as low as 1. Tests with different representations of the synthetic seismograms (displacement, velocity, and acceleration) showed that acceleration seismograms deliver better results in the case of high SNR, whereas displacement seismograms provide more accurate results in the case of low-SNR recordings. In another test, different measures (maximum amplitude, other statistical parameters) were used to determine the source location in the final image function. I found that the statistical approach is preferable, particularly for low SNR.
The workflow of my diffraction stacking method was finally applied to local earthquake data from Sumatra, Indonesia. Recordings from a temporary network of 42 stations deployed for 9 months around the Tarutung pull-apart basin were analyzed. The seismic event locations resulting from the diffraction stacking method align along a segment of the Sumatran Fault. A more complex distribution of seismicity is imaged within and around the Tarutung Basin. Two N-S-striking lineaments were found in the middle of the Tarutung Basin, supporting independent results from structural geology. These features are interpreted as opening fractures due to local extension. A cluster of seismic events occurred repeatedly within a short time, which might be related to fluid drainage, since two hot springs are observed at the surface near this cluster.
EnGeoMAP 2.0
(2017)
Algorithms for a rapid analysis of hyperspectral data are becoming more and more important with planned next-generation spaceborne hyperspectral missions such as the Environmental Mapping and Analysis Program (EnMAP) and the Japanese Hyperspectral Imager Suite (HISUI), together with an ever-growing pool of hyperspectral airborne data. The EnGeoMAP 2.0 algorithm presented here is an automated system for material characterization from imaging spectroscopy data, which builds on the theoretical framework of the Tetracorder and MICA (Material Identification and Characterization Algorithm) of the United States Geological Survey and of EnGeoMAP 1.0 from 2013. EnGeoMAP 2.0 includes automated absorption feature extraction, spatio-spectral gradient calculation and mineral anomaly detection. The usage of EnGeoMAP 2.0 is demonstrated at the mineral deposit sites of Rodalquilar (SE Spain) and Haib River (S Namibia) using HyMAP and simulated EnMAP data. Results from Hyperion data are presented as supplementary information.
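The first step of Tetracorder/MICA-style absorption feature extraction, continuum removal, can be sketched as follows. This simplified version uses a straight-line continuum between the band endpoints rather than the full upper convex hull, and the Gaussian test spectrum is illustrative, not EnMAP/HyMAP data.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a reflectance spectrum by a simplified linear continuum
    drawn between its endpoints; absorption features then appear as
    dips below 1.0. (Full Tetracorder-style processing uses the upper
    convex hull of the spectrum as the continuum.)"""
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    return r / continuum

def deepest_feature(wavelengths, reflectance):
    """Return the band position and depth of the deepest absorption
    feature in the continuum-removed spectrum."""
    cr = continuum_removed(wavelengths, reflectance)
    i = int(np.argmin(cr))
    return wavelengths[i], 1.0 - cr[i]
```

Feature position, depth, and shape extracted this way are what material-identification systems match against reference spectral libraries.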
Model-Based attribution of high-resolution streamflow trends in two alpine basins of Western Austria
(2017)
Several trend studies have shown that hydrological conditions are changing considerably in the Alpine region. However, the reasons for these changes are only partially understood and trend analyses alone are not able to shed much light. Hydrological modelling is one possible way to identify the trend drivers, i.e., to attribute the detected streamflow trends, given that the model captures all important processes causing the trends. We modelled the hydrological conditions for two alpine catchments in western Austria (a large, mostly lower-altitude catchment with wide valley plains and a nested high-altitude, glaciated headwater catchment) with the distributed, physically-oriented WaSiM-ETH model, which includes a dynamical glacier module. The model was calibrated in a transient mode, i.e., not only on several standard goodness measures and glacier extents, but also in such a way that the simulated streamflow trends fit with the observed ones during the investigation period 1980 to 2007. With this approach, it was possible to separate streamflow components, identify the trends of flow components, and study their relation to trends in atmospheric variables. In addition to trends in annual averages, highly resolved trends for each Julian day were derived, since they proved powerful in an earlier, data-based attribution study. We were able to show that annual and highly resolved trends can be modelled sufficiently well. The results provide a holistic, year-round picture of the drivers of alpine streamflow changes: Higher-altitude catchments are strongly affected by earlier firn melt and snowmelt in spring and increased ice melt throughout the ablation season. Changes in lower-altitude areas are mostly caused by earlier and lower snowmelt volumes. All highly resolved trends in streamflow and its components show an explicit similarity to the local temperature trends. 
Finally, results indicate that evapotranspiration has been increasing in the lower altitudes during the study period.
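The highly resolved trend analysis mentioned above can be sketched as fitting one linear trend per Julian day across the years of the investigation period; the function name, array layout, and synthetic data are assumptions for illustration, not the study's actual procedure.

```python
import numpy as np

def julian_day_trends(daily_values):
    """Per-Julian-day trend sketch.
    daily_values: (n_years, 365) array, e.g. streamflow for each day
    of each year. Returns one least-squares slope (units per year)
    for every Julian day."""
    n_years, n_days = daily_values.shape
    years = np.arange(n_years)
    slopes = np.empty(n_days)
    for d in range(n_days):
        # linear trend of this Julian day's series across the years
        slopes[d] = np.polyfit(years, daily_values[:, d], 1)[0]
    return slopes
```

Plotting such slopes against the Julian day yields the year-round trend picture the study uses, e.g. to reveal earlier snowmelt as paired positive (spring) and negative (early summer) streamflow trends.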