Year of publication
- 2024 (4)
- 2023 (14)
- 2022 (66)
- 2021 (73)
- 2020 (97)
- 2019 (75)
- 2018 (82)
- 2017 (38)
- 2016 (9)
- 2015 (5)
- 2014 (15)
- 2013 (34)
- 2012 (21)
- 2011 (10)
- 2010 (6)
- 2009 (30)
- 2008 (19)
- 2007 (11)
- 2006 (24)
- 2005 (15)
- 2004 (20)
- 2003 (21)
- 2002 (13)
- 2001 (20)
- 2000 (18)
- 1999 (3)
- 1998 (10)
- 1997 (2)
- 1996 (2)
- 1995 (6)
- 1994 (2)
- 1993 (1)
- 1992 (2)
Document Type
- Article (534)
- Doctoral Thesis (116)
- Postprint (67)
- Monograph/Edited Volume (21)
- Other (9)
- Review (7)
- Habilitation Thesis (5)
- Master's Thesis (3)
- Part of a Book (2)
- Bachelor Thesis (1)
Language
- English (768)
Keywords
- climate change (35)
- Curriculum Framework (34)
- European values education (34)
- European values education [Europäische Werteerziehung] (34)
- lesson evaluation [Lehrevaluation] (34)
- student exchange [Studierendenaustausch] (34)
- teaching units [Unterrichtseinheiten] (34)
- curriculum framework (34)
- lesson evaluation (34)
- student exchange (34)
Institute
- Institut für Umweltwissenschaften und Geographie (768)
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. It is therefore crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis has emerged as a powerful tool for studying the intricate spatiotemporal relationships between many variables in a compact way. In this study, we employ two nonlinear concepts, event synchronization and edit distance, to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and to identify essential sites in the river basin with respect to potential prediction skill. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings reveal that (1) the network degree decreases from southwest to northwest, (2) the timing of the 50th percentile of precipitation within a year influences the spatial distribution of degree, (3) this timing is inversely related to elevation, and (4) lower elevations strongly influence the connectivity of sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data, as it incorporates both event timing and amplitude in constructing complex networks of climate extremes.
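The event synchronization step described above can be sketched in a few lines. This is a minimal fixed-lag variant (the published method uses a dynamically chosen lag); the toy event times, the lag `tau`, and the 0.5 network threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

def event_sync(tx, ty, tau=3):
    """Simplified event synchronization with a fixed time lag tau:
    count event pairs at two sites occurring within tau time steps
    of each other, normalized by the geometric mean of the event
    counts (the original method uses a dynamic lag)."""
    c = sum(1 for ti in tx for tj in ty if abs(ti - tj) <= tau)
    return c / np.sqrt(len(tx) * len(ty))

# toy extreme-rainfall event times (days) at three grid cells
sites = [np.array([3, 10, 25, 40]),
         np.array([4, 11, 30, 41]),
         np.array([100, 120])]

# adjacency from pairwise synchronization above a threshold;
# the node degree is the number of synchronized neighbours
n = len(sites)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        if event_sync(sites[i], sites[j], tau=2) > 0.5:
            A[i, j] = A[j, i] = 1
degree = A.sum(axis=0)
```

The degree field computed this way is what is mapped spatially to locate highly synchronized sites.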
While of higher plant origin, a specific source assignment of sedimentary leaf wax n-alkanes remains difficult. In addition, it is unknown how fast a changing catchment vegetation would be reflected in sedimentary leaf wax archives. In particular, for a quantitative interpretation of n-alkane C and H isotope ratios in terms of paleohydrological and paleoecological changes, a better understanding of transfer times and dominant sedimentary sources of leaf wax n-alkanes is required. In this study, we tested to what extent compositional changes in leaf wax n-alkanes can be linked to known vegetation changes by comparison with high-resolution palynological data from the same archive. We analyzed leaf wax n-alkane concentrations and distributions at decadal resolution in a sedimentary record from Trzechowskie paleolake (TRZ, northern Poland), covering the Late Glacial to early Holocene (13 360-9940 yr BP). As an additional source indicator of the targeted n-alkanes, compound-specific carbon isotope data were generated at lower time resolution. The results indicated rapid responses of n-alkane distribution patterns coinciding with major climatic and paleoecological transitions. We found a shift towards higher average chain length (ACL) values at the Allerød-Younger Dryas (YD) transition between 12 680 and 12 600 yr BP, coeval with a decreasing contribution of arboreal pollen (mainly Pinus and Betula) and a subsequently higher abundance of pollen derived from herbaceous plants (Poaceae, Cyperaceae, Artemisia), shrubs, and dwarf shrubs (Juniperus and Salix). The termination of the YD was characterized by a successive increase in n-alkane concentrations coinciding with a sharp decrease in ACL values between 11 580 and 11 490 yr BP, reflecting the expansion of woodland vegetation at the YD-Holocene transition.
A gradual reversal to longer chain lengths after 11 200 yr BP, together with decreasing n-alkane concentrations, most likely reflects the early Holocene vegetation succession with a decline of Betula. These results show that n-alkane distributions reflect vegetation changes and that a fast (i.e., subdecadal) signal transfer occurred. However, our data also indicate that a standard interpretation of directional changes in biomarker ratios remains difficult. Instead, responses such as changes in ACL need to be discussed in the context of other proxy data. In addition, we find that organic geochemical data integrate different ecological information compared to pollen, since some gymnosperm genera, such as Pinus, produce only a very low amount of n-alkanes and for this reason their contribution may be largely absent from biomarker records. Our results demonstrate that a combination of palynological and n-alkane data can be used to infer the major sedimentary leaf wax sources and constrain leaf wax transport times from the plant source to the sedimentary sink and thus pave the way towards quantitative interpretation of compound-specific hydrogen isotope ratios for paleohydrological reconstructions.
Deepening Understanding
(2012)
Deepening understanding
(2013)
Assignments, curriculum framework and background information as the base of developing lessons
(2012)
1. What are the general strengths of the assignments?
2. Structure of the assignment
3. Resources of the assignment
4. Fostering self-expression
5. How could you improve the assignment?
6. Lack of specific examples
7. Not relating the issue to the students
8. Language problems
9. Infeasibility of adaptation
10. In what ways was the additional information useful? How could this be improved?
11. Was the framework useful for you, and in what way?
12. In what ways did the assignments reflect the steps identified in the framework?
Each simulation algorithm, including Truncated Gaussian Simulation, Sequential Indicator Simulation and Indicator Kriging, is characterized by different operating modes, which variably influence the facies proportion, distribution and association of digital outcrop models, as shown in clastic sediments. A detailed study of carbonate heterogeneity is therefore crucial to understanding these differences and providing rules for carbonate modelling. Through a continuous exposure of Bajocian carbonate strata, a study window (320 m long, 190 m wide and 30 m thick) was investigated and metre-scale lithofacies heterogeneity was captured and modelled using closely-spaced sections. Ten lithofacies, deposited in a shallow-water carbonate-dominated ramp, were recognized and their dimensions and associations were documented. Field data, including eight sections, were georeferenced and input into the model. Four models were built in the present study. Model A used all sections and Truncated Gaussian Simulation during the stochastic simulation. For the three other models, Model B was generated using Truncated Gaussian Simulation as for Model A, Model C was generated using Sequential Indicator Simulation and Model D was generated using Indicator Kriging. These three additional models were built by removing two out of eight sections from the data input. The removal of sections allows direct insight into geological uncertainties at inter-well spacings by comparing modelled and described sections. Other quantitative and qualitative comparisons were carried out between models to understand the advantages and disadvantages of each algorithm. Model A is used as the base case. Indicator Kriging (Model D) simplifies the facies distribution by assigning continuous geological bodies of the most abundant lithofacies to each zone. Sequential Indicator Simulation (Model C) reliably conserves facies proportions when geological heterogeneity is complex.
The use of trend with Truncated Gaussian Simulation is a powerful tool for modelling well-defined spatial facies relationships. However, in shallow-water carbonate, facies can coexist and their association can change through time and space. The present study shows that the scale of modelling (depositional environment or lithofacies) involves specific simulation constraints on shallow-water carbonate modelling methods.
Droughts in São Paulo
(2023)
Literature has suggested that droughts and societies are mutually shaped; a better understanding of their coevolution is therefore required for risk reduction and water adaptation. Although the São Paulo Metropolitan Region drew attention because of the 2013-2015 drought, this was not the first such event. This paper revisits that event and the 1985-1986 drought to compare the evolution of drought risk management. Documents and hydrological records are analyzed to evaluate the hazard intensity, preparedness, exposure, vulnerability, responses, and mitigation aspects of both events. Although the hazard intensity and exposure of the latter event were larger than those of the former, the delay in policy implementation and the dependency of service areas on a single reservoir exposed the region to higher vulnerability. In addition to the structural and non-structural tools implemented just after the events, this work raises the possibility of rainwater reuse for reducing the stress on reservoirs.
The sustainability of agro-bioenergy systems depends on many factors, some local or regional in implementation, others global in nature. This study assessed the effects of often ignored local and regional factors (e.g., alternative agronomic options, alternative agricultural production systems, alternative biomass flows, and alternative conversion technologies) on the energy efficiency of such systems. The results suggest that the key to enhancing the energy efficiency (and, by extension, the sustainability) of agro-bioenergy systems is paying attention to local and regional factors such as biomass conversion technology, alternative agronomic options, alternative agricultural production systems and available biomass flows.
Thematic cartography
(2001)
India is facing a double burden of malnourishment, with co-existing under- and over-nourishment. Various socioeconomic factors play an essential role in determining dietary choices. Agriculture is one of the major emitters of greenhouse gases (GHGs) in India, contributing 18% of total emissions. It also consumes freshwater and uses land significantly. We identify eleven Indian diets by applying k-means cluster analysis to the latest data from the Indian household consumer expenditure survey. The diets vary in calorie intake [2289-3218 kcal/Consumer Unit (CU)/day] and dietary composition. Estimated embodied GHG emissions in the diets range from 1.36 to 3.62 kg CO₂eq/CU/day and land footprint from 4 to 5.45 m²/CU/day, whereas water footprint varies from 2.13 to 2.97 m³/CU/day. Indian diets deviate from a healthy reference diet with either too much or too little consumption of certain food groups. Overall, the intake of cereals, sugar, and dairy products is higher than recommended; in contrast, the consumption of fruits and vegetables, pulses, and nuts is lower than recommended. Our study contributes to deriving the policies required for a sustainable transformation of food systems in India, to eliminate malnourishment and to reduce the environmental implications of the food systems.
Agriculture in India accounts for 18% of greenhouse gas (GHG) emissions and uses significant land and water. Various socioeconomic factors and food subsidies influence diets in India. Indian food systems face the challenge of sustainably nourishing a population of 1.3 billion. However, existing studies focus on a few food system components, and a holistic analysis is still missing. We characterize Indian food systems across six components: food consumption, production, processing, policy, environmental footprints, and socioeconomic factors, drawing on the latest Indian household consumer expenditure survey. We identify 10 Indian food systems using k-means cluster analysis on 15 food system indicators belonging to the six components. Based on the major source of calorie intake, we classify the ten food systems into production-based (3), subsidy-based (3), and market-based (4) food systems. Home-produced and subsidized food contribute up to 2000 kcal/consumer unit (CU)/day and 1651 kcal/CU/day, respectively, in these food systems. The calorie intake of 2158 to 3530 kcal/CU/day in the food systems reveals issues of malnutrition in India. Environmental footprints are commensurate with calorie intake in the food systems. Embodied GHG, land footprint, and water footprint estimates range from 1.30 to 2.19 kg CO₂eq/CU/day, 3.89 to 6.04 m²/CU/day, and 2.02 to 3.16 m³/CU/day, respectively. Our study provides a holistic understanding of Indian food systems for targeted nutritional interventions on household malnutrition in India while also protecting planetary health.
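Both of the diet studies above identify diets via k-means cluster analysis on household indicators. A minimal sketch of the algorithm (Lloyd's iteration) is shown below on made-up two-dimensional indicator data; the farthest-point initialization is chosen here only for determinism and is not necessarily the authors' procedure:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal Lloyd's algorithm: assign each sample to its nearest
    centroid, recompute centroids as cluster means, repeat."""
    # deterministic farthest-point initialization (illustrative choice)
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# hypothetical standardized indicators (e.g. cereal vs. dairy calorie
# shares) for eight households, forming two clearly separated groups
X = np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.15], [0.95, 0.05],
              [0.20, 0.80], [0.10, 0.90], [0.15, 0.85], [0.25, 0.75]])
labels, centroids = kmeans(X, k=2)
```

In the studies, the same idea is applied with k = 11 (diets) or k = 10 (food systems) over 15 standardized indicators rather than 2.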
To what extent has the European Union (EU) had a benign or retarding effect on what its member states would have undertaken in the absence of EU climate policies during 2008–2012? A measurement tool for the EU policy’s effect is developed and shows a benign average EU effect with considerable variation across countries. The EU’s policy effectiveness vis-à-vis its member states is explained by the EU’s non-compliance mechanism, the degree of usage of the Kyoto flexible mechanisms, and national pre-Kyoto emission reduction goals. Time-series cross-sectional analyses show that the EU’s non-compliance mechanism has no effect, while the ex-ante plans for using Kyoto flexible mechanisms and/or the ambitious pre-Kyoto emission reduction targets allow member states to escape constraints imposed by EU climate policy.
Precipitation forecasting has an important place in everyday life – over the course of a day we may have a dozen small conversations about the likelihood that it will rain this evening or at the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on the model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for this success. Hence, the emerging interest in deep learning in atmospheric sciences is likewise driven by, and keeps pace with, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
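The recursive application described above can be sketched generically. The one-step "model" below is a plain 5-point moving average, a stand-in chosen only to mimic the smoothing behaviour of a learned model; it is not RainNet, and the 9x9 field is a toy example:

```python
import numpy as np

def step_model(field):
    """Stand-in for a 5-min nowcast model: a 5-point moving average
    over the interior, which mimics spatial smoothing."""
    out = field.copy()
    out[1:-1, 1:-1] = (field[:-2, 1:-1] + field[2:, 1:-1] +
                       field[1:-1, :-2] + field[1:-1, 2:] +
                       field[1:-1, 1:-1]) / 5.0
    return out

def recursive_nowcast(field, n_steps):
    """Apply the one-step model recursively: each 5-min prediction
    becomes the input for the next step, so n_steps = 12 yields a
    60-min lead time."""
    preds = []
    for _ in range(n_steps):
        field = step_model(field)
        preds.append(field)
    return preds

rain = np.zeros((9, 9))
rain[4, 4] = 10.0            # an isolated intense cell
preds = recursive_nowcast(rain, n_steps=12)
peaks = [p.max() for p in preds]
```

The monotonically decaying peak intensity in `peaks` illustrates why recursion compounds smoothing and erodes intense features over lead time.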
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
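The two routine verification metrics named above, MAE and CSI, follow directly from their standard definitions; the observation/forecast pairs below are illustrative, not values from the study:

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error of predicted rain intensities."""
    return np.mean(np.abs(obs - pred))

def csi(obs, pred, thr):
    """Critical success index at an intensity threshold:
    hits / (hits + misses + false alarms)."""
    o, p = obs >= thr, pred >= thr
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    return hits / (hits + misses + false_alarms)

obs  = np.array([0.0, 0.5, 2.0, 6.0])   # mm/h
pred = np.array([0.0, 1.0, 1.5, 7.0])   # mm/h
m = mae(obs, pred)            # 0.5
c = csi(obs, pred, thr=1.0)   # 2/3
```

In the verification experiments, CSI is evaluated at the intensity thresholds 0.125, 1, 5, 10 and 15 mm/h mentioned in the text.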
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development together with the verification experiments for both conventional and deep learning model predictions also revealed the need to better understand the source of forecast errors. Understanding the dominant sources of error in specific situations should help in guiding further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures did not allow to isolate the location error, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
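The location error defined in this framework reduces to a one-line computation; the coordinates below are hypothetical feature positions in kilometres:

```python
import numpy as np

def location_error(observed, predicted):
    """Euclidean distance between the observed and predicted position
    of a tracked precipitation feature at a given lead time."""
    return float(np.hypot(observed[0] - predicted[0],
                          observed[1] - predicted[1]))

# observed vs. predicted feature position (km) at one lead time
err = location_error((12.0, 5.0), (9.0, 1.0))   # 5.0
```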
Based on this framework, we carried out a benchmarking case study using one year worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion; and the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning – the sub-field of machine learning that refers to artificial neural networks comprised of many computational layers – has been reshaping statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and the field of hydrology in particular, do not stand aside from this movement. Recently, modern deep learning-based techniques and methods have been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their high universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models designed, if not to replace existing solutions, then to significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of development and application of deep neural networks in hydrology. We also provide a qualitative long-term outlook for deep learning technology in hydrological modeling, based on the Gartner Hype Cycle, which describes in general terms the life cycle of modern technologies.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures: Long Short-Term Memory Networks (LSTM) and Gated Recurrent Units (GRU), which have only recently been introduced to the field of hydrology. We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as a warm-up period, one year for each of the calibration and validation periods, respectively; from the remaining 14 years, we sampled increasing amounts of data for model calibration, and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. Also, LSTM and GRU exhibited higher calibration instability in comparison to GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1)
(2019)
Quantitative precipitation nowcasting (QPN) has become an essential technique in various application contexts, such as early warning or urban sewage control. A common heuristic prediction approach is to track the motion of precipitation features from a sequence of weather radar images and then to displace the precipitation field to the imminent future (minutes to hours) based on that motion, assuming that the intensity of the features remains constant (“Lagrangian persistence”). In that context, “optical flow” has become one of the most popular tracking techniques. Yet the present landscape of computational QPN models still struggles with producing open software implementations. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. Our software library (“rainymotion”) for precipitation nowcasting is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion, Ayzel et al., 2019). That way, the library may serve as a tool for providing fast, free, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing – a benchmark that is far more advanced than the conventional benchmark of Eulerian persistence commonly used in QPN verification experiments.
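The Lagrangian persistence idea described above, displacing the field along the diagnosed motion while keeping intensities constant, can be sketched as follows. A real optical-flow model estimates a per-pixel motion field and uses sub-pixel warping; the single integer motion vector and `np.roll` (which wraps at the domain edge) are simplifications for illustration only:

```python
import numpy as np

def lagrangian_persistence(field, motion, n_steps):
    """Constant-vector advection sketch: displace the current
    precipitation field by `motion` (pixels per time step) while
    keeping intensities constant. np.roll wraps at the edges."""
    dy, dx = motion
    forecasts = []
    f = field
    for _ in range(n_steps):
        f = np.roll(np.roll(f, dy, axis=0), dx, axis=1)
        forecasts.append(f)
    return forecasts

rain = np.zeros((5, 5))
rain[1, 1] = 4.0                              # a single rain cell
fcst = lagrangian_persistence(rain, motion=(1, 2), n_steps=2)
```

Note that total intensity is conserved exactly, which is the defining assumption of Lagrangian persistence.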
During the last few decades, the rapid separation of the Small Aral Sea from the isolated basin has changed its hydrological and ecological conditions tremendously. In the present study, we developed and validated the hybrid model for the Syr Darya River basin based on a combination of state-of-the-art hydrological and machine learning models. Climate change impact on freshwater inflow into the Small Aral Sea for the projection period 2007–2099 has been quantified based on the developed hybrid model and bias corrected and downscaled meteorological projections simulated by four General Circulation Models (GCM) for each of three Representative Concentration Pathway scenarios (RCP). The developed hybrid model reliably simulates freshwater inflow for the historical period with a Nash–Sutcliffe efficiency of 0.72 and a Kling–Gupta efficiency of 0.77. Results of the climate change impact assessment showed that the freshwater inflow projections produced by different GCMs are misleading by providing contradictory results for the projection period. However, we identified that the relative runoff changes are expected to be more pronounced in the case of more aggressive RCP scenarios. The simulated projections of freshwater inflow provide a basis for further assessment of climate change impacts on hydrological and ecological conditions of the Small Aral Sea in the 21st Century.
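The reported skill scores, Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE), can be computed from their standard definitions; the discharge values below are made up for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation (r), variability
    ratio (alpha), and bias ratio (beta)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [2.0, 3.0, 4.0, 5.0]   # hypothetical inflow (e.g. km3/yr)
sim = [2.1, 2.9, 4.2, 4.8]
n = nse(obs, sim)
k = kge(obs, sim)
```

Both scores equal 1 for a perfect model, so the reported values of 0.72 (NSE) and 0.77 (KGE) indicate a reliable, though imperfect, fit.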
Developing Critical Thinking
(2012)
Relating to students
(2013)
The Alborz range of N Iran provides key information on the spatiotemporal evolution and characteristics of the Arabia-Eurasia continental collision zone. The southwestern Alborz range constitutes a transpressional duplex, which accommodates oblique shortening between Central Iran and the South Caspian Basin. The duplex comprises NW-striking frontal ramps that are kinematically linked to inherited E-W-striking, right-stepping lateral to obliquely oriented ramps. New zircon and apatite (U-Th)/He data provide a high-resolution framework to unravel the evolution of collisional tectonics in this region. Our data record two pulses of fast cooling associated with SW-directed thrusting across the frontal ramps at ~18-14 and 9.5-7.5 Ma, resulting in the tectonic repetition of a fossil zircon partial retention zone and a cooling pattern with a half U-shaped geometry. Uniform cooling ages of ~7-6 Ma along the southernmost E-W striking oblique ramp and across its associated NW-striking frontal ramps suggest that the ramp was reactivated as a master throughgoing, N-dipping thrust. We interpret this major change in fault kinematics and deformation style to be related to a change in the shortening direction from NE to N/NNE. The reduction in the obliquity of thrusting may indicate the termination of strike-slip faulting (and possibly thrusting) across the Iranian Plateau, which could have been triggered by an increase in elevation. Furthermore, we suggest that ~7-6-m.y.-old S-directed thrusting predated inception of the westward motion of the South Caspian Basin. Citation: Ballato, P., D. F. Stockli, M. R. Ghassemi, A. Landgraf, M. R. Strecker, J. Hassanzadeh, A. Friedrich, and S. H. Tabatabaei (2012), Accommodation of transpressional strain in the Arabia-Eurasia collision zone: new constraints from (U-Th)/He thermochronology in the Alborz mountains.
Planetary research is often user-based and requires considerable skill, time, and effort. Unfortunately, self-defined boundary conditions, definitions, and rules are often not documented or not easy to comprehend due to the complexity of research. This makes a comparison to other studies, or an extension of the already existing research, complicated. Comparisons are often distorted, because results rely on different, not well defined, or even unknown boundary conditions. The purpose of this research is to develop a standardized analysis method for planetary surfaces, which is adaptable to several research topics. The method provides a consistent quality of results. This also includes achieving reliable and comparable results and reducing the time and effort of conducting such studies. A standardized analysis method is provided by automated analysis tools that focus on statistical parameters. Specific key parameters and boundary conditions are defined for the tool application. The analysis relies on a database in which all key parameters are stored. These databases can be easily updated and adapted to various research questions. This increases the flexibility, reproducibility, and comparability of the research. However, the quality of the database and the reliability of definitions directly influence the results. To ensure a high quality of results, the rules and definitions need to be well defined and based on previously conducted case studies. The tools then produce parameters, which are obtained by defined geostatistical techniques (measurements, calculations, classifications). The idea of an automated statistical analysis is tested to prove its benefits but also to reveal potential problems of this method. In this study, I adapt automated tools for floor-fractured craters (FFCs) on Mars. These impact craters show a variety of surface features, occur in different Martian environments, and have different fracturing origins.
They provide a complex morphological and geological field of application. 433 FFCs are classified by the analysis tools according to their fracturing process. Spatial data, environmental context, and crater interior data are analyzed to distinguish between the processes involved in floor fracturing. Related geologic processes, such as glacial and fluvial activity, are too similar to be separately classified by the automated tools. Glacial and fluvial fracturing processes are therefore merged for the classification. The automated tools provide probability values for each origin model. To guarantee the quality and reliability of the results, the classification tools need to achieve an origin probability above 50 %. This analysis method shows that 15 % of the FFCs are fractured by intrusive volcanism, 20 % by tectonic activity, and 43 % by water- and ice-related processes. In total, 75 % of the FFCs are assigned to an origin type. The unclassified remainder can be explained by a combination of origin models, superposition or erosion of key parameters, or an unknown fracturing model. Those features have to be analyzed manually in detail. Another possibility would be the improvement of key parameters and rules for the classification. This research shows that it is possible to conduct an automated statistical analysis of morphologic and geologic features based on analysis tools. Analysis tools provide additional information to the user and are therefore considered assistance systems.
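The 50 % assignment rule described above can be sketched as follows; the evidence scores and origin labels here are hypothetical placeholders, and the real tools derive their scores from many geostatistical key parameters:

```python
def classify_origin(scores):
    """Turn per-origin-model evidence scores into probabilities and assign
    an origin only if the best candidate exceeds 50 % (sketch, not the
    thesis' tool)."""
    total = sum(scores.values())
    if total == 0:
        return None, {}
    probs = {origin: score / total for origin, score in scores.items()}
    best = max(probs, key=probs.get)
    # Below the 50 % threshold the crater stays unclassified for manual review
    return (best if probs[best] > 0.5 else None), probs
```

A crater whose strongest model reaches only 50 % or less would be left for manual analysis, mirroring the unclassified remainder in the abstract.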
Floor-Fractured Craters (FFCs) represent an impact crater type whose infilling is separated by cracks into knobs of different sizes and shapes. This work focuses on the possible processes which form FFCs to understand the relationship between location and geological environment. We generated a global distribution map using new High Resolution Stereo Camera and Context Camera images. Four hundred and twenty-one potential FFCs have been identified on Mars. A strong link exists among floor fracturing, chaotic terrain, outflow channels and the dichotomy boundary. However, FFCs are also found in the Martian highlands. Additionally, two very diverse craters are used as a case study, and we compared them regarding the appearance of their surface units, chronology and geological processes. Five potential models of floor fracturing are presented and discussed here. The analyses suggest an origin due to volcanic activity, groundwater migration or tensile stresses. Subsurface ice reservoirs and tectonic activity are also taken into account. Furthermore, the origin of fracturing differs according to the location on Mars.
Above- and below-ground hydrological processes depend on soil moisture (SM) variability, driven by different environmental factors that are seldom well monitored, leading to a misunderstanding of soil water temporal patterns. This study investigated the stability of the SM temporal dynamics under different monitoring temporal resolutions around the border between two soil types in a tropical watershed. Four locations were instrumented in a small-scale watershed (5.84 km²) within the tropical coast of Northeast Brazil, encompassing different soil types (Espodossolo Humiluvico or Carbic Podzol, and Argissolo Vermelho-Amarelo or Haplic Acrisol), land covers (Atlantic Forest, bush vegetation, and grassland) and topographies (flat and moderate slope). The SM was monitored at a temporal resolution of one hour over the 2013-2014 hydrological year and then resampled at resolutions of 6 h, 12 h, 1 day, 2 days, 4 days, 7 days, and 15 days. Descriptive statistics, temporal variability, time-stability ranking, and hierarchical clustering revealed uneven associations among SM time components. The results show that the time-invariant component ruled SM temporal variability over the time-varying parcel, either at high or low temporal resolutions. Time-steps longer than 2 days affected the mean statistical metrics of the SM time-variant parcel. Additionally, SM at downstream and upstream sites behaved differently, suggesting that the temporal mean was regulated by steady soil properties (slope, restrictive layer, and soil texture), whereas the temporal anomalies were driven by climate (rainfall) and hydrogeological (groundwater level) factors. Therefore, it is concluded that around the border between tropical soil types, the distinct behaviour of time-variant and time-invariant components of SM time series reflects different combinations of their soil properties.
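The resampling from hourly records to coarser resolutions, and the split into time-invariant and time-varying components, can be sketched as follows (a minimal illustration, not the study's processing chain):

```python
def resample_mean(series, step):
    """Aggregate a regularly sampled series into non-overlapping blocks of
    `step` samples and return the block means (e.g. hourly -> 6-hourly
    with step=6, hourly -> daily with step=24)."""
    return [sum(series[i:i + step]) / len(series[i:i + step])
            for i in range(0, len(series), step)]

def time_components(series):
    """Split a soil moisture series into its time-invariant component
    (the temporal mean) and time-varying component (the anomalies)."""
    m = sum(series) / len(series)
    return m, [x - m for x in series]
```

Applying `time_components` to each resampled series makes it possible to compare how much of the variability survives coarser monitoring intervals.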
Studies on the unsustainable use of groundwater resources are still considered incipient, since groundwater is frequently a poorly understood and managed, devalued and inadequately protected natural resource. Groundwater Recharge (GWR) is one of the most challenging elements to estimate, since it can rarely be measured directly and cannot easily be derived from existing data. To overcome these limitations, many hydro(geo)logists have combined different approaches to estimate large-scale GWR, namely: remote sensing products, such as the IMERG product; the water budget equation, also in combination with hydrological models; and Geographic Information Systems (GIS), using estimation formulas. For intermediary-scale GWR estimation, there exist non-invasive Cosmic-Ray Neutron Sensing (CRNS), wireless networks of local soil probes, and soil hydrological models such as HYDRUS. Accordingly, this PhD thesis aims, on the one hand, to demonstrate a GIS-based model coupling for estimating the GWR distribution on a large scale in tropical wet basins. On the other hand, it aims to use the time series from CRNS and invasive soil moisture probes to inversely calibrate the soil hydraulic properties and, based on this, to estimate the intermediary-scale GWR using a soil hydrological model. For this purpose, two tropical wet basins located in a complex sedimentary aquifer in the coastal Northeast region of Brazil were selected: the João Pessoa Case Study Area and the Guaraíra Experimental Basin. Several satellite products in the first area were used as input to the GIS-based water budget equation model for estimating the water balance components and GWR in 2016 and 2017. In addition, the point-scale measurements and CRNS data were used in the second area to determine the soil hydraulic properties and to estimate the GWR in the 2017-2018 and 2018-2019 hydrological years.
The resulting values of GWR on large and intermediary scales were then compared with and validated against the estimates obtained from groundwater table fluctuations. The GWR rates for IMERG- and rain-gauge-based scenarios showed similar coefficients between 68% and 89%, similar mean errors between 30% and 34%, and slightly different bias between -13% and 11%. The GWR rates for the soil-probe and CRNS soil moisture scenarios ranged from -5.87 to -61.81 cm yr⁻¹, which corresponds to between 5% and 38% of the precipitation. The calculations of the mean GWR rates on the large scale, based on remote sensing data, and on the intermediary scale, based on CRNS data, yielded similar results for the Podzol soil type, namely 17.87% and 17% of the precipitation. It is concluded that the proposed methodologies allowed the GWR over the study areas to be estimated realistically, which can be a ground-breaking step towards improving water management and decision-making in the Northeast of Brazil.
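The large-scale estimates rest on the water budget residual; a minimal sketch of that bookkeeping (all values and the mm/yr units below are illustrative, not the thesis' data):

```python
def gwr_water_budget(p, et, q, delta_s):
    """Groundwater recharge as the residual of the water budget:
    precipitation minus evapotranspiration, runoff and storage change
    (all terms in mm/yr)."""
    return p - et - q - delta_s

def gwr_as_percent_of_precip(p, gwr):
    """Express a recharge estimate as a percentage of precipitation,
    the form used for the Podzol comparison (17.87 % vs. 17 %)."""
    return 100.0 * gwr / p
```

In the GIS-based version each term is a raster layer, so the same arithmetic is applied cell by cell.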
The Value of Empirical Data for Estimating the Parameters of a Sociohydrological Flood Risk Model
(2019)
In this paper, empirical data are used to estimate the parameters of a sociohydrological flood risk model. The proposed model, which describes the interactions between floods, settlement density, awareness, preparedness, and flood loss, is based on the literature. Data for the case study of Dresden, Germany, over a period of 200 years are used to estimate the model parameters through Bayesian inference. The credibility bounds of their estimates are small, even though the data are rather uncertain. A sensitivity analysis is performed to examine the value of the different data sources in estimating the model parameters. In general, the estimated parameters are less biased when using data at the end of the modeled period. Data about flood awareness are the most important to correctly estimate the parameters of this model and to correctly model the system dynamics. Using more data for other variables cannot compensate for the absence of awareness data. More generally, the absence of data mostly affects the estimation of the parameters that are directly related to the variable for which data are missing. This paper demonstrates that combining sociohydrological modeling and empirical data gives additional insights into the sociohydrological system, such as quantifying the forgetfulness of the society, which would otherwise not be easily achieved by sociohydrological models without data or by standard statistical analysis of empirical data.
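Bayesian parameter estimation of this kind can be illustrated with a minimal random-walk Metropolis sampler; the one-parameter "memory decay" model below is a toy stand-in for the sociohydrological model (the exponential form, noise level and all values are assumptions, not the authors' implementation):

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.1, seed=42):
    """Minimal random-walk Metropolis sampler over one parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept uphill and some downhill moves
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy data: a societal "memory" curve y = exp(-k t) observed with noise;
# k plays the role of a forgetfulness-type parameter (hypothetical model).
true_k = 0.2
ts = [0, 1, 2, 4, 8, 16]
data_rng = random.Random(0)
obs = [math.exp(-true_k * t) + data_rng.gauss(0.0, 0.01) for t in ts]

def log_post(k):
    """Flat prior on k > 0 with a Gaussian likelihood (sigma = 0.01)."""
    if k <= 0:
        return -math.inf
    sse = sum((o - math.exp(-k * t)) ** 2 for t, o in zip(ts, obs))
    return -sse / (2.0 * 0.01 ** 2)

samples = metropolis(log_post, x0=0.5)
k_hat = sum(samples[1000:]) / len(samples[1000:])  # posterior mean after burn-in
```

The spread of the post-burn-in samples plays the role of the credibility bounds discussed in the abstract.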
Bank filtration is considered to improve water quality through microbially mediated degradation of pollutants and is suitable for waterworks to increase their production. In particular, aquifer temperatures and oxygen supply have a great impact on many microbial processes. To investigate the temporal and spatial behavior of selected organic micropollutants during bank filtration in dependence of relevant biogeochemical conditions, we have set up a 2D reactive transport model using MODFLOW and PHT3D under the user interface ORTI3D. The considered 160-m-long transect ranges from the surface water to a groundwater extraction well of the adjacent waterworks. For this purpose, water levels, temperatures, and chemical parameters were regularly measured in the surface water and groundwater observation wells over one and a half years. To simulate the effect of seasonal temperature variations on microbially mediated degradation, we applied an empirical temperature factor, which yields a strong reduction of the degradation rate at groundwater temperatures below 11 °C. Except for acesulfame, the considered organic micropollutants are substantially degraded along their subsurface flow paths with maximum degradation rates in the range of 10⁻⁶ mol L⁻¹ s⁻¹. Preferential biodegradation of phenazone, diclofenac, and valsartan was found under oxic conditions, whereas carbamazepine and sulfamethoxazole were degraded under anoxic conditions. This study highlights the influence of seasonal variations in oxygen supply and temperature on the fate of organic micropollutants in surface water infiltrating into an aquifer.
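An empirical temperature factor of this kind can be sketched as a smooth switch around the critical temperature; the sigmoidal form and parameter values below are illustrative assumptions, since the abstract only states that degradation is strongly reduced below 11 °C:

```python
import math

def temperature_factor(t_celsius, t_crit=11.0, k=1.0):
    """Hypothetical sigmoidal temperature factor in (0, 1): degradation is
    strongly reduced below t_crit. Functional form and steepness k are
    illustrative, not the study's calibrated factor."""
    return 1.0 / (1.0 + math.exp(-k * (t_celsius - t_crit)))

def effective_rate(k_max, t_celsius):
    """Scale a maximum degradation rate (mol L^-1 s^-1) by the factor."""
    return k_max * temperature_factor(t_celsius)
```

With these assumed parameters the rate at 5 °C drops below 1 % of its maximum, while near 20 °C it is essentially unreduced, reproducing the qualitative seasonal behaviour described above.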
The improvement of process representations in hydrological models is often driven only by the modelers' knowledge and data availability. We present a comprehensive comparison between two hydrological models of different complexity that is developed to support (1) the understanding of the differences between model structures and (2) the identification of the observations needed for model assessment and improvement. The comparison is conducted in both space and time and by aggregating the outputs at different spatiotemporal scales. In the present study, mHM, a process-based hydrological model, and ParFlow-CLM, an integrated subsurface-surface hydrological model, are used. The models are applied in a mesoscale catchment in Germany. Both models agree in the simulated river discharge at the outlet and the surface soil moisture dynamics, lending support to some model applications (e.g., drought monitoring). Different model sensitivities are, however, found when comparing evapotranspiration and soil moisture at different soil depths. The analysis supports the need for observations within the catchment for model assessment, but it indicates that different strategies should be considered for the different variables. Evapotranspiration measurements are needed at daily resolution across several locations, while highly resolved spatially distributed observations with lower temporal frequency are required for soil moisture. Finally, the results show the impact of the shallow groundwater system simulated by ParFlow-CLM and the need to account for the related soil moisture redistribution. Our comparison strategy can be applied to other model types and environmental conditions to strengthen the dialog between modelers and experimentalists for improving process representations in Earth system models.
Geopolitical shifts and the changing significance of borders in the EU's neighbourhood are usually understood as a matter of international power politics. Factors that accompany geopolitical impact on borders, such as media coverage of geopolitical change, often appear as secondary or irrelevant. However, the recent Ukraine conflict revealed the contrary, as pro-EU attitudes were strongly supported by 'western' media. Therefore, this paper seeks to clarify the role of news media in creating perspectives and attitudes on geopolitical shifts and the significance of European borders. Empirical evidence on the coverage of the evolving Ukraine crisis by German news sources portrays the media as promoters of biased framings and imaginaries which suggest that the EU is a potential conflict party in the newly evolving geostrategic confrontation in its eastern neighbourhood. The findings indicate that during critical periods of the Ukraine crisis media reports combined rising euphoria about Europe and 'the West', as defenders of the 'good cause', with excessive moral polarisation and the discursive normalisation of a rhetoric of escalation. Imaginaries of a bipolar world (the West against Russia) and a new Cold War prepared the ground for a new understanding of European borders and neighbourhood relations as being manipulable at will.
Soils play a crucial role in biogeochemical cycles as spatially distributed sources and sinks of nutrients. Any spatial patterns depend on soil forming processes, our understanding of which is still limited, especially in regards to tropical rainforests. The objective of our study was to investigate the effects of landscape properties, with an emphasis on the geometry of the land surface, on the spatial heterogeneity of soil chemical properties, and to test the suitability of soil-landscape modeling as an appropriate technique to predict the spatial variability of exchangeable K and Mg in a humid tropical forest in Panama. We used a design-based, stratified sampling scheme to collect soil samples at 108 sites on Barro Colorado Island, Panama. The stratifying variables are lithology, vegetation and topography. Topographic variables were generated from high-resolution digital elevation models with a grid size of 5 m. We took samples from five depths down to 1 m, and analyzed them for total and exchangeable K and Mg. We used simple explorative data analysis techniques to elucidate the importance of lithology for soil total and exchangeable K and Mg. Classification and Regression Trees (CART) were adopted to investigate the importance of topography, lithology and vegetation for the spatial distribution of exchangeable K and Mg, with the intention of developing models that regionalize the point observations using digital terrain data as explanatory variables. Our results suggest that topography and vegetation do not control the spatial distribution of the selected soil chemical properties at a landscape scale and that lithology is important to some degree. Exchangeable K is distributed equally across the study area, indicating that processes other than landscape processes, e.g. biogeochemical processes, are responsible for its spatial distribution. Lithology contributes to the spatial variation of exchangeable Mg, but controlling variables could not be detected.
The spatial variation of soil total K and Mg is mainly influenced by lithology.
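The core operation of CART on such data is a recursive search for the predictor threshold that most reduces the variance of the response; a minimal single-split sketch (the inputs are hypothetical, not the Barro Colorado data):

```python
def variance(ys):
    """Population variance of a list of response values."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Find the threshold on one predictor (e.g. elevation) that maximally
    reduces the variance of the response (e.g. exchangeable Mg) -- the
    splitting criterion behind CART regression trees."""
    base = variance(ys)
    best = (None, 0.0)  # (threshold, variance reduction)
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        weighted = (len(left) * variance(left)
                    + len(right) * variance(right)) / len(ys)
        if base - weighted > best[1]:
            best = (t, base - weighted)
    return best
```

A full CART model simply applies this search recursively over all predictors; when no split reduces variance much, the terrain variables carry little explanatory power, which is what the study found for topography and vegetation.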
Study region:
Ca Mau Province (CMP), Mekong Delta (MD), Vietnam.
Study focus:
Groundwater from deep aquifers is the most reliable source of freshwater in the MD, but extensive overexploitation in the last decades has led to a drop in hydraulic heads and negative environmental impacts. Therefore, a comprehensive groundwater investigation was conducted to evaluate its composition in the context of Quaternary marine transgression and regression cycles, geochemical processes as well as groundwater extraction.
New hydrological insights for the region:
The abundance of groundwater of Na-HCO3 type and distinct ion ratios, such as Na+/Cl-, indicate extensive freshwater intrusion in an initially saline hydrogeological system, with decreasing intensity from upper Pleistocene to deeper Miocene aquifers, most likely during the last marine regression phase 60-12 ka BP. Deviations from the conservative mixing line between the two endmembers seawater and freshwater are attributed to ion-exchange processes on mineral surfaces, making ion ratios in combination with a customized water type analysis a useful tool to distinguish between salinization and freshening processes. Elevated salinity in some areas is attributed to HCO3- generation by organic matter decomposition in marine sediments rather than to seawater intrusion. Nevertheless, a few randomly distributed locations show strong evidence of recent salinization in an early stage, which may be caused by the downwards migration of saline Holocene groundwater through natural and anthropogenic pathways into deep aquifers.
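The conservative mixing line between the freshwater and seawater endmembers underlies these salinization/freshening diagnostics; a minimal sketch with typical chloride endmember concentrations (mmol/L) standing in for the study's values:

```python
def seawater_fraction(cl_sample, cl_fresh=0.5, cl_sea=545.0):
    """Seawater fraction of a sample from conservative chloride mixing.
    Endmember concentrations (mmol/L) are typical textbook values, not
    those measured in the Ca Mau aquifers."""
    return (cl_sample - cl_fresh) / (cl_sea - cl_fresh)

def conservative_mix(c_fresh, c_sea, f):
    """Expected concentration of any conservative ion at seawater fraction f;
    deviations from this mixing line point to ion exchange or other
    water-rock reactions."""
    return f * c_sea + (1.0 - f) * c_fresh
```

Comparing a measured ion (e.g. Na⁺) against `conservative_mix` at the sample's chloride-derived fraction separates simple mixing from the exchange processes described above.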
Deepening understanding
(2013)
1. Key concepts
2. What students should have done
3. What students did
4. Deepening understanding
5. General description of deepening understanding
6. Why is deepening understanding an important stage?
7. How does deepening understanding occur in the lessons and some examples
8. Possible difficulties
9. Conclusion
Exploring election features from a geographical perspective is the focus of this study. Its primary objective is to develop a scientific approach based on geoinformation technology (GIT) that promotes a deeper understanding of how geographical settings affect the spatial and temporal variations of voting behaviour and election outcomes. For this purpose, the five parliamentary elections (1991-2005) following the political turnaround in 1990 in the South East European reform country Albania have been selected as a case study. Elections, like other social phenomena that do not develop uniformly over a territory, inherit a spatial dimension. Despite the fact that elections have been researched by various scientific disciplines ranging from political science to geography, studies that incorporate their spatial dimension are still limited in number and approaches. Consequently, the methodologies needed to generate integrated knowledge on the many facets that constitute election features are lacking. This study addresses the characteristics and interactions of the essential elements involved in an election process. Thus, the baseline of the approach presented here is the exploration of relations between three entities: electorate (political and sociodemographic features), election process (electoral system and code) and place (environment where voters reside). To express this interaction, the concept of the electoral pattern is introduced. Electoral patterns are defined by the study as the final view of election results, chiefly in tabular and/or map form, generated by the complex interaction of social, economic, juridical, and spatial features of the electorate, which has occurred at a specific time and in a particular geographical location. GIT methods of geoanalysis and geovisualization are used to investigate the characteristics of electoral patterns in their spatial and temporal distribution.
Aggregate-level data modelled in map form were used to analyse and visualize the spatial distribution of election pattern components and relations. The spatial dimension of the study is addressed in the following three main relations: first, the relation between place and electorate and its expression through the social, demographic and economic features of the electorate, resulting in the profile of the electorate’s context; second, the electorate-election interaction, which forms the baseline to explore the perspective of local contextual effects in voting behaviour and election results; third, the relation between geographical location and election outcomes, reflecting the implication of determining constituency boundaries on election results. To address the above relations, three types of variables (geo, independent and dependent) have been elaborated and two models have been created. The Data Model, developed in a GIS environment, facilitates the structuring of election data in order to perform spatial analysis. The peculiarity of electoral patterns – a multidimensional array that contains information on three variables, stored in data layers of dissimilar spatial units of reference and scales of value measurement – prohibits spatial analysis based on the original source data. To perform a joint spatial analysis it is therefore mandatory to restructure the spatial units of reference while preserving their semantic content. In this operation, all relevant electoral as well as socio-demographic data referenced to different administrative spatial entities are re-referenced to uniform grid cells as virtual spatial units of reference. Depending on the scale of data acquisition and map presentation, a cell width of 0.5 km has been determined. The resulting fine grid forms the basis of subsequent data analyses and correlations.
Conversion of the original vector data layers into target raster layers allows for unification of spatial units, at the same time retaining the existing level of detail of the data (variables, uniform distribution over space). This in turn facilitates the integration of the variables studied and the performance of GIS-based spatial analysis. In addition, conversion to raster format makes it possible to assign new values to the original data, which are based on a common scale eliminating existing differences in scale of measurement. Raster format operations of the type described are well-established data analysis techniques in GIT, yet they have rarely been employed to process and analyse electoral data. The Geovisualization Model, developed in a cartographic environment, complements the Data Model. As an analog graphic model it facilitates efficient communication and exploration of geographical information through cartographic visualization. Based on this model, 52 choropleth maps have been generated. They represent the outcome of the GIS-based electoral data analysis. The analog map form allows for in-depth visual analysis and interpretation of the distribution and correlation of the electoral data studied. For researchers, decision makers and a wider public the maps provide easy-to-access information on and promote easy-to-understand insight into the spatial dimension, regional variation and resulting structures of the electoral patterns defined.
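The re-referencing of attribute values from dissimilar administrative units to uniform 0.5 km grid cells can be sketched as follows; this simplified point-to-cell version of the vector-to-raster conversion uses projected coordinates in metres and hypothetical values:

```python
def cell_index(x, y, cell=500.0):
    """Map a projected coordinate (metres) to the index of the 0.5 km
    grid cell containing it."""
    return (int(x // cell), int(y // cell))

def rasterize(points, cell=500.0):
    """Re-reference attribute values (e.g. turnout, vote share) from
    arbitrary locations to uniform grid cells by averaging the values
    falling into each cell -- a simplified sketch of the thesis'
    vector-to-raster unification."""
    cells = {}
    for x, y, value in points:
        cells.setdefault(cell_index(x, y, cell), []).append(value)
    return {idx: sum(vals) / len(vals) for idx, vals in cells.items()}
```

Once every variable lives on the same grid, cell-by-cell overlay and correlation of electoral and socio-demographic layers becomes straightforward.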
We analysed the interplay between coastal uplift, sea-level change in the Black Sea, and incision of the Kizilirmak River in northern Turkey. These processes have created multiple co-genetic fluvial and marine terrace sequences that serve as excellent strain markers to assess the ongoing evolution of the Pontide orogenic wedge and the growth of the northern margin of the Central Anatolian Plateau. We used high-resolution topographic data, OSL ages, and published information on past sea levels to analyse the spatiotemporal evolution of these terraces; we derived a regional uplift model for the northward advancing orogenic wedge that supports the notion of laterally variable uplift rates along the flanks of the Pontides. The best-fit uplift model defines a constant long-term uplift rate of 0.28 ± 0.07 m/ka for the last 545 ka. This model explains the evolution of the terrace sequence in light of active tectonic processes and superposed cycles of climate-controlled sea-level change. Our new data reveal regional uplift characteristics that are comparable to the inner sectors of the Central Pontides; accordingly, the rate of uplift diminishes with increasing distance from the main strand of the restraining bend of the North Anatolian Fault Zone (NAFZ). This spatial relationship between the regional impact of the restraining bend of the NAFZ and uplift of the Pontide wedge thus suggests a strong link between the activity of the NAFZ, deformation and uplift in the Pontide orogenic wedge, and the sustained lateral growth of the Central Anatolian Plateau flank.
The efficiency of sediment routing from land to the ocean depends on the position of submarine canyon heads with regard to terrestrial sediment sources. We aim to identify the main controls on whether a submarine canyon head remains connected to terrestrial sediment input during Holocene sea-level rise. Globally, we identified 798 canyon heads that are currently located at the 120 m depth contour (the Last Glacial Maximum shoreline) and 183 canyon heads that are connected to the shore (within a distance of 6 km) during the present-day highstand. Regional hotspots of shore-connected canyons are the Mediterranean active margin and the Pacific coast of Central and South America. We used 34 terrestrial and marine predictor variables to predict shore-connected canyon occurrence using Bayesian regression. Our analysis shows that steep and narrow shelves facilitate canyon-head connectivity to the shore. Moreover, shore-connected canyons occur preferentially along active margins characterized by resistant bedrock and high river-water discharge.
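The regression step can be illustrated with a plain logistic model predicting shore-connected occurrence from two normalised predictors (shelf width and shelf slope); this gradient-descent sketch with made-up data is a simplified stand-in for the Bayesian regression actually used:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of shore connection for predictor vector x."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Stochastic gradient descent for logistic regression -- a
    non-Bayesian sketch of the canyon-occurrence model."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - predict(w, b, xi)
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

# Hypothetical data: [shelf width, shelf slope], both scaled to [0, 1];
# narrow, steep shelves (label 1) stay connected to the shore.
X = [[0.2, 0.9], [0.3, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [1, 1, 0, 0]
w, b = fit_logistic(X, y)
```

The Bayesian version replaces the point estimates `w, b` with full posterior distributions, which is what lets the study attach uncertainty to each predictor's influence.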
Movement ecology aims to provide common terminology and an integrative framework of movement research across all groups of organisms. Yet such work has focused on unitary organisms so far, and thus the important group of filamentous fungi has not been considered in this context. With the exception of spore dispersal, movement in filamentous fungi has not been integrated into the movement ecology field. At the same time, the field of fungal ecology has been advancing research on topics like informed growth, mycelial translocations, or fungal highways using its own terminology and frameworks, overlooking the theoretical developments within movement ecology. We provide a conceptual and terminological framework for interdisciplinary collaboration between these two disciplines, and show how both can benefit from closer links: We show how placing the knowledge from fungal biology and ecology into the framework of movement ecology can inspire both theoretical and empirical developments, eventually leading towards a better understanding of fungal ecology and community assembly. Conversely, by a greater focus on movement specificities of filamentous fungi, movement ecology stands to benefit from the challenge to evolve its concepts and terminology towards even greater universality. We show how our concept can be applied for other modular organisms (such as clonal plants and slime molds), and how this can lead towards comparative studies with the relationship between organismal movement and ecosystems in the focus.
Submerged sequences of marine terraces potentially provide crucial information of past sea-level positions. However, the distribution and characteristics of drowned marine terrace sequences are poorly known at a global scale. Using bathymetric data and novel mapping and modeling techniques, we studied a submerged sequence of marine terraces in the Bay of Biscay with the objective to identify the distribution and morphologies of submerged marine terraces and the timing and conditions that allowed their formation and preservation. To accomplish the objectives a high-resolution bathymetry (5 m) was analyzed using Geographic Information Systems and TerraceM(R). The successive submerged terraces were identified using a Surface Classification Model, which linearly combines the slope and the roughness of the surface to extract fossil sea-cliffs and fossil rocky shore platforms. For that purpose, contour and hillshaded maps were also analyzed. Then, shoreline angles, a geomorphic marker located at the intersection between the fossil sea-cliff and platform, were mapped analyzing swath profiles perpendicular to the isobaths. Most of the submerged strandlines are irregularly preserved throughout the continental shelf. In summary, 12 submerged terraces with their shoreline angles between approximately: -13 m (T1), -30 and -32 m (T2), -34 and 41 m (T3), -44 and -47 m (T4), -49 and 53 m (T5), -55 and 58 m (T6), -59 and 62 m (T7), -65 and 67 m (T8), -68 and 70 m (T9), -74 and -77 m (T10), -83 and -86 m (T11) and -89 and 92 m (T12). Nevertheless, the ones showing the best lateral continuity and preservation in the central part of the shelf are T3, T4, T5, T7, T8, and T10. The age of the terraces has been estimated using a landscape evolution model. To simulate the formation and preservation of submerged terraces three different scenarios: (i) 20-0 ka; (ii) 128-0 ka; and (iii) 128-20 ka, were compared. 
The best scenario for terrace generation was between 128 and 20 ka, during which T3, T5, and T7 could have been formed.
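The Surface Classification Model described above linearly combines surface slope and roughness. A minimal sketch of such a classifier for a gridded bathymetry is given below; the equal weights, the 3x3 roughness window, and the min-max normalisation are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def surface_classification(dem, cell_size=5.0, w_slope=0.5, w_rough=0.5):
    """Linear combination of slope and roughness for a bathymetric grid.

    High scores flag steep, rough cells (candidate fossil sea-cliffs);
    low scores flag flat, smooth cells (candidate shore platforms).
    Weights and the 3x3 roughness window are illustrative assumptions.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Roughness: elevation range within a 3x3 moving window.
    padded = np.pad(dem, 1, mode="edge")
    windows = np.stack([padded[i:i + dem.shape[0], j:j + dem.shape[1]]
                        for i in range(3) for j in range(3)])
    roughness = windows.max(axis=0) - windows.min(axis=0)

    def minmax(a):
        rng = np.ptp(a)
        return (a - a.min()) / (rng if rng else 1.0)

    return w_slope * minmax(slope) + w_rough * minmax(roughness)
```

Thresholding the resulting score then separates cliff-like from platform-like cells before mapping shoreline angles.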
In this study, we analyzed 10 years of seismicity in central Italy from 2008 to 2017, a period witnessing more than 1400 earthquakes in the magnitude range 2.5≤Mw≤6.5. The data set includes the main sequences that have occurred in the area, including those associated with the 2009 Mw 6.3 L'Aquila earthquake and the 2016–2017 sequence (Mw 6.2 Amatrice, Mw 6.1 Visso, and Mw 6.5 Norcia earthquakes). We calibrated a local magnitude scale, investigating the impact of changing the reference distance at which the nonparametric attenuation is tied to the zero‐magnitude attenuation function for southern California. We also developed an attenuation model to compute the radiated seismic energy (Es) from the time integral of the squared ground‐motion velocity. Seismic moment (M0) and stress drop (Δσ) were estimated for each earthquake by fitting a ω‐square model to the source spectra obtained by applying a nonparametric spectral inversion. The Δσ‐values vary over three orders of magnitude, from about 0.1 to 10 MPa, with the larger values associated with the mainshocks. The Δσ‐values follow a lognormal distribution with mean and standard deviation given by log(Δσ)=(−0.25±0.45) (i.e., the mean Δσ is 0.57 MPa, with a 95% confidence interval from 0.08 to 4.79 MPa). The Δσ variability introduces a spread in the distribution of seismic energy versus moment, with differences in energy of up to two orders of magnitude for earthquakes with the same moment. The variability in the high‐frequency spectral levels is captured by the local magnitude (ML), which scales with radiated energy as ML=(−1.59+0.52logEs) for logEs≤10.26 and ML=(−1.38+0.50logEs) otherwise. As the peak ground velocity increases with increasing Δσ, local and energy magnitudes perform better than moment magnitude as predictors of shaking potential.
The availability of different magnitude scales and source parameters for a large earthquake population will help characterize the between‐event ground‐motion variability in central Italy.
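The piecewise ML-Es scaling quoted above is straightforward to encode; the sketch below uses exactly the coefficients and break point given in the abstract (the units of Es are those of the original study, which the abstract does not restate):

```python
def local_magnitude_from_energy(log_es):
    """Local magnitude ML from log10 of radiated seismic energy Es,
    using the piecewise scaling reported for central Italy."""
    if log_es <= 10.26:
        return -1.59 + 0.52 * log_es
    return -1.38 + 0.50 * log_es
```

Note that the two branches nearly agree at the break point (about 3.745 vs. 3.75 at log Es = 10.26), so the scaling is effectively continuous.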
Relating to students
(2012)
1. The Assignment 'Devotion to Religion and active Citizenship'
2. The Assignment 'How are religions spread across Europe?'
3. The Assignment 'Is football as important as religion?'
4. The Assignment 'Why be religious?'
5. The Assignment 'Lucky charms'
6. The Assignment 'No Creo en el Jamas' (Life after death)
7. The Assignment 'Religion and its influence on politics and policies'
8. The Assignment 'Secularisation in Europe'
9. The Assignment 'The meaning of religious places'
10. The Assignment 'Unity in diversity'
11. Which conceptions did you find?
Streamflow dynamics in mountainous environments are controlled by runoff generation processes in the basin upstream. Runoff generation processes are thus a major control of the terrestrial part of the water cycle, influencing both water quality and water quantity as well as their dynamics. The understanding of these processes becomes especially important for the prediction of floods, erosion, and dangerous mass movements, in particular as hydrological systems often show threshold behavior. In case of extensive environmental changes, be it in climate or in land use, the understanding of runoff generation processes will allow us to better anticipate the consequences and can thus lead to a more responsible management of resources as well as risks. In this study the runoff generation processes in a small undisturbed catchment in the Chilean Andes were investigated. The research area is characterized by steep hillslopes, volcanic ash soils, undisturbed old-growth forest and high rainfall amounts. The investigation of runoff generation processes in this data-scarce area is of special interest as a) little is known about the hydrological functioning of the young volcanic ash soils, which are characterized by extremely high porosities and hydraulic conductivities, b) no process studies have been carried out in this area at either slope or catchment scale, and c) understanding the hydrological processes in undisturbed catchments will provide a basis to improve our understanding of disturbed systems, the shift in processes that followed the disturbance and perhaps also the future process evolution necessary to reach a new steady state. The catchment studied here thus has the potential to serve as a reference catchment for future investigations. As no long-term rainfall and runoff data exist, it was necessary to replace long time series with a multitude of experimental methods, using the so-called "multi-method approach".
These methods cover as many aspects of runoff generation as possible and include not only the measurement of time series such as discharge, rainfall, soil water dynamics and groundwater dynamics, but also various short-term measurements and experiments such as the determination of throughfall amounts and variability, water chemistry, soil physical parameters, soil mineralogy, geo-electrical soundings and tracer techniques. Assembling the results like pieces of a puzzle produces a perhaps incomplete but nevertheless useful picture of the dynamic ensemble of runoff generation processes in this catchment. The employed methods were then evaluated for their usefulness versus their expenditures (labour and financial costs). Finally, the hypotheses - the perceptual model of runoff generation derived from the experimental findings - were tested with the physically based model Catflow. Additionally, the process-based model Wasim-ETH was used to investigate the influence of land use on runoff generation at the catchment scale. An initial assessment of the hydrologic response of the catchment was achieved with a linear statistical model for the prediction of event runoff coefficients. The parameters identified as best predictors give a first indication of important processes. Various results acquired with the "multi-method approach" show that the response to rainfall is generally fast. Preferential vertical flow is of major importance and is reinforced by hydrophobicity during the summer months. Rapid lateral water transport is necessary to produce the fast response signal; however, while lateral subsurface flow was observed at several soil moisture profiles, the location and type of structures causing fast lateral flow at the hillslope scale are still not clear and need to be investigated in more detail. Surface runoff has not been observed and is unlikely due to the high hydraulic conductivities of the volcanic ash soils.
Additionally, a large subsurface storage retains most of the incident rainfall during events (>90%, often even >95%) and sustains streamflow even after several weeks of drought. Several findings suggest a shift in processes from summer to winter, causing changes in flow patterns, in the response of stream chemistry to rainfall events and also in groundwater-surface water interactions. The results of the modelling study confirm the importance of rapid and preferential flow processes. However, due to the limited knowledge of subsurface structures, the model still does not fully capture the runoff response. Investigating the importance of land use on runoff generation showed that while peak runoff generally increased with deforested area, the location of these areas also had an effect. Overall, the "multi-method approach" of replacing long time series with a multitude of experimental methods was successful in identifying the dominant hydrological processes and thus proved its applicability for data-scarce catchments under the constraint of limited resources.
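The event runoff coefficient underlying the linear statistical model mentioned above is simply event runoff divided by event rainfall, and fitting it against candidate event predictors can be sketched with ordinary least squares. The predictor names below (antecedent wetness, rainfall intensity) are hypothetical illustrations, not the predictors identified in the study.

```python
import numpy as np

def runoff_coefficient(event_runoff_mm, event_rainfall_mm):
    """Fraction of event rainfall that leaves the catchment as runoff."""
    return event_runoff_mm / event_rainfall_mm

def fit_runoff_coefficient_model(predictors, observed_coefficients):
    """Ordinary least squares of runoff coefficients against event
    predictors (e.g. antecedent wetness, rainfall intensity).

    predictors : 2-D array, shape (n_events, n_predictors)
    Returns [intercept, slope_1, ..., slope_k].
    """
    A = np.column_stack([np.ones(len(predictors)), predictors])
    beta, *_ = np.linalg.lstsq(A, observed_coefficients, rcond=None)
    return beta
```

The fitted slopes then indicate which event characteristics best explain the hydrologic response.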
Throughfall, that is, the fraction of rainfall that passes through the forest canopy, is strongly influenced by rainfall and forest stand characteristics which are in turn both subject to seasonal dynamics. Disentangling the complex interplay of these controls is challenging, and only possible with long-term monitoring and a large number of throughfall events measured in parallel at different forest stands. We therefore based our analysis on 346 rainfall events across six different forest stands at the long-term terrestrial environmental observatory TERENO Northeast Germany. These forest stands included pure stands of beech, pine and young pine, and mixed stands of oak-beech, pine-beech and pine-oak-beech. Throughfall was overall relatively low, with 54-68% of incident rainfall in summer. Based on the large number of events it was possible to not only investigate mean or cumulative throughfall but also its statistical distribution. The distributions of throughfall fractions show distinct differences between the three types of forest stands (deciduous, mixed and pine). The distributions of the deciduous stands have a pronounced peak at low throughfall fractions and a secondary peak at high fractions in summer, as well as a pronounced peak at higher throughfall fractions in winter. Interestingly, the mixed stands behave like deciduous stands in summer and like pine stands in winter: their summer distributions are similar to the deciduous stands but the winter peak at high throughfall fractions is much less pronounced. The seasonal comparison further revealed that the wooden components and the leaves behaved differently in their throughfall response to incident rainfall, especially at higher rainfall intensities. 
These results are of interest for estimating forest water budgets and in the context of hydrological and land surface modelling where poor simulation of throughfall would adversely impact estimates of evaporative recycling and water availability for vegetation and runoff.
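Per-event throughfall fractions such as those analysed above are a simple ratio of throughfall to incident rainfall. A minimal sketch follows; the minimum-rainfall filter is an assumption added here to avoid unstable ratios for very small events, not a threshold taken from the study.

```python
import numpy as np

def throughfall_fractions(throughfall_mm, rainfall_mm, min_rainfall=1.0):
    """Per-event throughfall fraction (throughfall / incident rainfall).

    Events with less than min_rainfall mm of incident rain are dropped;
    the 1 mm default is illustrative, not taken from the study.
    """
    tf = np.asarray(throughfall_mm, dtype=float)
    p = np.asarray(rainfall_mm, dtype=float)
    keep = p >= min_rainfall
    return tf[keep] / p[keep]
```

Histograms of these fractions, split by season, are what reveal the distinct distributions of the deciduous, mixed and pine stands.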
One of the mechanisms for sudden particle release is a decrease in groundwater salt concentration to below the critical salt concentration (CSC), where repulsion forces between fine particles and matrix surfaces exceed binding forces. In this paper, an attempt was made to determine the CSC with both batch and column experiments. Two types of sediments were tested: (a) homogeneous quartz sand and (b) mineralogically heterogeneous sediment, taken from the Hanford formation in southeast Washington. Stepwise decreasing concentrations of NaNO3 solution were applied until fine particles were released from the sediments and the CSC was determined. Two methods were used to minimize the interference of particle release due to physical forces (shear stress) in the batch experiments: (a) postexperimental correction for mechanical effects, and (b) minimization of shear stress on the sediments during the experiment. CSCs from batch experiments were compared to those obtained from column experiments. It was found that both the amount of particles released and the CSC were an order of magnitude higher for the Hanford sediment than for the sand. Moreover, particle detachment above the CSC was observed for the Hanford sediment. This suggests that the concept of sharp CSCs could be problematic in natural heterogeneous sediments, where fine particles may mobilize at salt concentrations significantly above the CSC, thus unexpectedly enhancing colloid-facilitated transport of contaminants.
Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, while continuously monitoring soil moisture response, allow for the visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time.
The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small scale heterogeneity in rainfall due to redistribution in the canopy and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.
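The binary indicator maps used above for the long-term and hillslope scales amount to thresholding the soil moisture record into wet/dry states. A minimal sketch follows; the per-time-step median threshold is an illustrative choice, not the criterion used in the study.

```python
import numpy as np

def indicator_map(soil_moisture, threshold=None):
    """Binary wet/dry indicator map from soil moisture measurements.

    soil_moisture : 2-D array (measurement locations x time steps)
    threshold     : wet/dry cutoff; defaults to the spatial median of
                    each time step (an illustrative assumption).
    """
    theta = np.asarray(soil_moisture, dtype=float)
    if threshold is None:
        threshold = np.median(theta, axis=0, keepdims=True)
    return (theta >= threshold).astype(int)
```

Temporal persistence of the flow patterns then shows up as rows of the indicator map (locations) that rarely change along the time axis.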
Geoecological problems in the use of morphostructural features in the young moraine area SW of Berlin
(1995)
Preface
(2001)
Preface
(2006)
The GRACE-FO satellites launched in May 2018 are able to quantify the water mass deficit in Central Europe during the two consecutive summer droughts of 2018 and 2019. Relative to the long-term climatology, the water mass deficits were -112 ± 10.5 Gt in 2018 and -145 ± 12 Gt in 2019. These deficits are 73% and 94% of the mean amplitude of seasonal water storage variations, which is so severe that a recovery cannot be expected within 1 year. The water deficits in 2018 and 2019 are the largest in the whole GRACE and GRACE-FO time span. Globally, the data do not show an offset between the two missions, which proves the successful continuation of GRACE by GRACE-FO and thus the reliability of the observed extreme events in Central Europe. This allows for a joint assessment of the four Central European droughts in 2003, 2015, 2018, and 2019 in terms of total water storage deficits.
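The quoted deficit fractions can be cross-checked by back-calculating the mean seasonal storage amplitude they imply; both years give nearly the same value, which supports the internal consistency of the numbers:

```python
# Deficit / fraction back-calculates the mean seasonal storage amplitude.
amplitude_2018 = 112 / 0.73   # Gt, from the 2018 deficit and its 73%
amplitude_2019 = 145 / 0.94   # Gt, from the 2019 deficit and its 94%
# Both come out near 154 Gt, i.e. the two quoted fractions are mutually
# consistent with a single mean seasonal amplitude.
```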
The objective is to compare the time scale of microbial degradation of the herbicide Isoproturon at the end of earthworm burrows with the time scale of microbial degradation in the surrounding soil matrix. To this end, we developed a method which allows the observation of microbial degradation of Isoproturon in macropores under field conditions. The study area was the well-investigated Weiherbach catchment (Kraichgau, SW Germany). The topsoil of a 12 m^2 plot was removed, the plot was covered with a tent and instrumented with TDR and temperature sensors at two depths. After preliminary investigations to optimize application and sampling techniques, the bottoms of 55 earthworm burrows, located at a depth of 80-100 cm, were inoculated with Isoproturon. At intervals of 8 d, soil material from the bottoms of 5-6 earthworm burrows was taken to the laboratory and analyzed for the Isoproturon concentration to investigate the degradation kinetics. Furthermore, the degradation of Isoproturon in the soil matrix that surrounded the macropores at the field plot was observed in the laboratory. Microbial degradation of Isoproturon at the bottom of the earthworm burrows, with a DT50 value of 15.6 d, was almost as fast as in the topsoil. In the soil matrix that closely surrounded the center of the earthworm burrows, no measurable degradation was observed within 30 d. The clearly slower degradation in the soil matrix is likely explained by the lower microbial activity observed there. The results provide evidence that deterministic modeling of the fate of pesticides, once transported into heterogeneous subsoils by preferential flow, requires an accuracy of a few centimeters in terms of predicting spatial locations: microbial degradation in the subsoil slows by almost one order of magnitude if the herbicide is displaced from the bottom of an earthworm burrow a few centimeters into the surrounding soil matrix.
Predictions of such accuracy, if achievable at all, can only be made at sites where the soil hydraulic properties and the macropore system are known at a very high spatial resolution.
One of the major challenges related to current practice in seismic hazard studies is the adjustment of empirical ground motion prediction equations (GMPEs) to different seismological environments. We believe that the key to accommodating differences in regional seismological attributes of a ground motion model lies in the Fourier spectrum. In the present study, we explore a new approach for the development of response spectral GMPEs, which is fully consistent with linear system theory when it comes to adjustment issues. This approach consists of developing empirical prediction equations for Fourier spectra and for a particular duration estimate of ground motion, which is tuned to optimize the fit between response spectra obtained through the random vibration theory framework and the classical way. The presented analysis for the development of GMPEs is performed on the recently compiled reference database for seismic ground motion in Europe (RESORCE-2012). Although the main motivation for the presented approach is its adjustability and the use of the corresponding model to generate data-driven host-to-target conversions, even as a standalone response spectral model it compares reasonably well with the GMPEs of Ambraseys et al. (Bull Earthq Eng 3:1-53, 2005), Akkar and Bommer (Seismol Res Lett 81(2):195-206, 2010) and Akkar and Cagnan (Bull Seismol Soc Am 100(6):2978-2995, 2010).
Earth observation data have become an outstanding basis for analyzing environmental aspects. The increasing availability of remote sensing data is accompanied by an increasing user demand. Within the scope of the COPERNICUS initiative, the automatic processing of remote sensing data is important for supplying value-added information products. The use of additional data such as land-water masks in deriving value-added information products can stabilize and improve product quality.
The authors of this contribution discuss different automated processing algorithms which are based on land-water masks for value-added data interpretation. These developments were supported or accompanied by Prof. Hartmut Asche.
In this study, 17 hydrologists with different experience in hydrological modelling applied the same conceptual catchment model (HBV) to a Greek catchment, using identical data and model code. Calibration was performed manually. Subsequently, the modellers were asked for their experience, their calibration strategy, and whether they enjoyed the exercise. The exercise revealed that there is considerable modellers’ uncertainty even among the experienced modellers. It seemed to be equally important whether the modellers followed a good calibration strategy, and whether they enjoyed modelling. The exercise confirmed previous studies about the benefit of model ensembles: Different combinations of the simulation results (median, mean) outperformed the individual model simulations, while filtering the simulations even improved the quality of the model ensembles. Modellers’ experience, decisions, and attitude, therefore, have an impact on the hydrological model application and should be considered as part of hydrological modelling uncertainty.
Asian climate patterns, characterised by highly seasonal monsoons and continentality, are thought to originate in the Eocene epoch (56 to 34 million years ago, Ma) in response to global climate, Tibetan Plateau uplift and the disappearance of the giant Proto-Paratethys sea formerly extending over Eurasia. The influence of this sea on Asian climate has hitherto not been constrained by proxy records, despite being recognised as a major driver by climate models. We report here strongly seasonal records preserved in the annual laminae of Eocene oysters from the Proto-Paratethys, together with sedimentological and numerical data, showing that monsoons were not dampened by the sea and that aridification was modulated by westerly moisture sourced from the sea. Hot and arid summers despite the presence of the sea suggest a strong anticyclonic zone at Central Asian latitudes and an orographic effect from the emerging Tibetan Plateau. Westerly moisture precipitating during cold and wetter winters appears to have decreased in two steps: first in response to the late Eocene (34-37 Ma) sea retreat, and second through the orogeny of the Tian Shan and Pamir ranges shielding the westerlies after 25 Ma. Paleogene sea retreat and Neogene westerly shielding thus provide two successive mechanisms forcing coeval Asian desertification and biotic crises.
From waste to resource
(2019)
Reservoir networks have been established worldwide to ensure water supply, but water availability is endangered quantitatively and qualitatively by sedimentation. Reuse of sediment silted in reservoirs as fertilizer has been proposed, thus transforming nutrient-enriched sediments from waste into resource. The aim of this study is to assess the potential of reusing sediment as a nutrient source for agriculture in a semiarid basin in Brazil, where 1029 reservoirs were identified. Sedimentation was modelled for the entire reservoir network, accounting for 7 × 10^5 tons y^-1 of sediment deposition. Nutrient contents in reservoir sediments were analysed and compared to nutrient contents of agricultural soils in the catchment. The potential of reusing sediment as fertilizer was assessed for maize crops (Zea mays L.), and the sediment mass required to fertilize the soil was computed considering that the crop nitrogen requirement would be fully provided by the sediment. Economic feasibility was analysed by comparing the costs of the proposed practice to those obtained if the area were fertilized by traditional means. Results showed that, where reservoirs fall dry frequently and sediments can be removed by excavation, soil fertilization with sediment presents lower costs than those observed for application of commercial chemical fertilizers. Compared to conventional fertilization, 25% of costs could be saved when using sediments with high nutrient content, while costs are 9% higher when using sediments with low nutrient content. According to the local conditions, sediments with nitrogen content above 1.5 g kg^-1 are cost-efficient as a nitrogen source. However, physical and chemical analyses are recommended to define the sediment mass to be used and to identify any constraints to the application of the practice, such as the high sodium adsorption ratio observed in one of the studied reservoirs, which can contribute to soil salinization.
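The sediment mass required per hectare follows from a simple unit conversion once a crop nitrogen demand is fixed. A sketch is given below; the 120 kg/ha maize demand is a hypothetical illustration, while the 1.5 g/kg content is the cost-efficiency threshold quoted in the abstract, and full plant availability of sediment nitrogen mirrors the study's assumption that the crop requirement is entirely provided by the sediment.

```python
def sediment_mass_for_nitrogen(n_requirement_kg_ha, n_content_g_kg):
    """Sediment mass (t/ha) needed to meet a crop's nitrogen demand.

    n_requirement_kg_ha : crop N demand in kg N per hectare
    n_content_g_kg      : sediment N content in g N per kg sediment,
                          which is numerically kg N per tonne
    Assumes all sediment nitrogen becomes plant-available.
    """
    return n_requirement_kg_ha / n_content_g_kg

# Illustrative numbers: a hypothetical maize N demand of 120 kg/ha and
# the 1.5 g/kg content quoted as the cost-efficiency threshold.
mass_t_per_ha = sediment_mass_for_nitrogen(120.0, 1.5)
```

With these numbers, 80 tonnes of sediment per hectare would be needed, which illustrates why excavation cost dominates the economic feasibility.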
Remote sensing technologies allow the mapping of biophysical, biochemical, and earth surface parameters of the land surface. Of special interest for various applications in environmental and urban sciences is the combination of spectral and 3D elevation information. However, those two data streams are provided separately by different instruments, namely an airborne laser scanner (ALS) for elevation and a hyperspectral imager (HSI) for high spectral resolution data. The fusion of ALS and HSI data can thus lead to a single data entity consistently featuring rich structural and spectral information. In this study, we present the fusion of first-pulse return information from ALS data at a sub-decimeter spatial resolution with the lower-spatial-resolution hyperspectral information available from the HSI into a hyperspectral point cloud (HSPC). During the processing, a plausible hyperspectral spectrum is assigned to every first-return ALS point. We show that the complementary implementation of spectral and 3D information at the point-cloud scale improves object-based classification and information extraction schemes. These improvements have great potential for numerous land cover mapping and environmental applications.
The ability to reflect is considered an essential element of Education for Sustainable Development (ESD) and a key competence for learners and educators in ESD (UNECE Strategy for ESD, 2012). In contrast to its high importance, little is known about how reflective thinking can be identified, influenced or increased in the classroom. Therefore, the objective of this study is to address this need by developing an empirical multi-stage model designed to help educators diagnose different levels of reflective thinking and to identify factors that influence students’ reflective thinking about sustainability. Based on a 4–8-week project with grade 10 and 11 students studying sustainability, reflective thinking performance using weblogs as reflective journals was analysed. In addition, qualitative semi-structured interviews were conducted with the teachers to comprehend the learning environment and the personal value they assigned to ESD in their geography class. To determine the levels of reflective thinking achieved by the students, the study built on the work of Dewey (1933) and pre-existing multi-stage models of reflective thinking (Bain, Ballantyne, & Packer, 1999; Chen, Wei, Wu, & Uden, 2009). Using a qualitative, iterative data analysis, the study adapted the stage models to be applicable in ESD and found great differences in the students’ reflection levels. Furthermore, the study identified eight factors that influence students’ reflective thinking about sustainability. The outcomes of this study may be valuable for educators in high school and higher education, who seek to diagnose their students’ reflective thinking performance and facilitate reflection about sustainability.
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions.
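The deterministic trimmed-mean combination mentioned above can be sketched as follows; the 10% default trim fraction is an assumption for illustration, not the value used in the study.

```python
import numpy as np

def trimmed_mean_ensemble(simulations, trim_fraction=0.1):
    """Combine ensemble discharge simulations with a trimmed mean.

    simulations   : 2-D array, shape (n_models, n_timesteps)
    trim_fraction : share of the lowest and highest members removed at
                    each time step before averaging (assumed value).
    """
    sims = np.sort(np.asarray(simulations, dtype=float), axis=0)
    n_models = sims.shape[0]
    k = int(n_models * trim_fraction)          # members cut per end
    return sims[k:n_models - k].mean(axis=0)
```

Trimming at each time step discards outlying members before averaging, which is why such a combination can outperform any individual ensemble member.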
The 2015 magnitude 7.8 Gorkha earthquake and its aftershocks weakened mountain slopes in Nepal. Co- and postseismic landsliding and the formation of landslide-dammed lakes along steeply dissected valleys were widespread, among them a landslide that dammed the Kali Gandaki River. Overtopping of the landslide dam resulted in a flash flood downstream, though casualties were prevented because of timely evacuation of low-lying areas. We hindcast the flood using the BREACH physically based dam-break model for upstream hydrograph generation, and compared the resulting maximum flow rate with those resulting from various empirical formulas and a simplified hydrograph based on published observations. Subsequent modeling of downstream flood propagation was compromised by a coarse-resolution digital elevation model with several artifacts. Thus, we used a digital-elevation-model preprocessing technique that combined carving and smoothing to derive topographic data. We then applied the 1-dimensional HEC-RAS model for downstream flood routing, and compared it to the 2-dimensional Delft-FLOW model. Simulations were validated using rectified frames of a video recorded by a resident during the flood in the village of Beni, allowing estimation of maximum flow depth and speed. Results show that hydrological smoothing is necessary when using coarse topographic data (such as SRTM or ASTER), as using raw topography underestimates flow depth and speed and overestimates flood wave arrival lag time. Results also show that the 2-dimensional model produces more accurate results than the 1-dimensional model but the 1-dimensional model generates a more conservative result and can be run in a much shorter time. Therefore, a 2-dimensional model is recommended for hazard assessment and planning, whereas a 1-dimensional model would facilitate real-time warning declaration.
In recent years, urban and rural flash floods in Europe and abroad have gained considerable attention because of their sudden occurrence, severe material damage and even danger to the lives of inhabitants. This contribution addresses questions about possibly changing environmental conditions which might have altered the occurrence frequencies of such events and their consequences. We analyze the following major fields of environmental change:
- altered high-intensity rainstorm conditions as a consequence of regional warming;
- possibly altered runoff generation conditions in response to high-intensity rainfall events;
- possibly altered runoff concentration conditions in response to the usage and management of the landscape, such as agricultural and forest practices or rural roads;
- effects of engineering measures in the catchment, such as retention basins, check dams, culverts, or river and geomorphological engineering measures.
We take the flash flood in Braunsbach, SW Germany, as an example: a particularly severe flash flood event occurred there at the end of May 2016. This extreme cascading natural event led to immense damage in this particular village. The event is retrospectively analyzed with regard to meteorology, hydrology, geomorphology and damage to obtain a quantitative assessment of the processes and their development.
The results show that it was a very rare rainfall event of extreme intensity, which, in combination with catchment properties and altered environmental conditions, led to extreme runoff, an extreme debris flow, and immense damage. Because the processes were complex and interacting, no single cause of the flood can be identified; only their interplay led to such an event. We show that environmental changes are important, but, at least for this case study, natural weather and hydrologic conditions alone would still have produced an extreme flash flood.
Advances in Flood Research
(2002)
The EVE curriculum framework
(2012)
The EVE curriculum framework
(2013)
Insights into the dynamics of human behavior in response to flooding are urgently needed for the development of effective integrated flood risk management strategies and for integrating human behavior into flood risk modeling. However, our understanding of the dynamics of risk perceptions, attitudes, individual recovery processes, and adaptive (i.e., risk-reducing) intention and behavior is currently limited because of the predominant use of cross-sectional surveys in the flood risk domain. Here, we present results from one of the first panel surveys in the flood risk domain covering a relatively long period (four years after a damaging event), three survey waves, and a wide range of topics relevant to the role of citizens in integrated flood risk management. The panel data, consisting of 227 individuals affected by the 2013 flood in Germany, were analyzed using repeated-measures ANOVA and latent class growth analysis (LCGA) to exploit the unique temporal dimension of the data set. Results show that attitudes, such as the respondents' perceived responsibility within flood risk management, remain fairly stable over time. Changes are observed partly for risk perceptions and mainly for individual recovery and intentions to undertake risk-reducing measures. LCGA reveals heterogeneous recovery and adaptation trajectories that need to be taken into account in policies supporting individual recovery and stimulating societal preparedness. More panel studies in the flood risk domain are needed to gain better insights into the dynamics of individual recovery, risk-reducing behavior, and associated risk and protective factors.
For effective disaster risk management and adaptation planning, a good understanding of current and projected flood risk is required. Recent advances in quantifying flood risk at the regional and global scale have largely neglected critical infrastructure, or have addressed this important sector with insufficient detail. Here, we present the first European-wide assessment of current and future flood risk to railway tracks for different global warming scenarios using an infrastructure-specific damage model. We find that the present risk, measured as expected annual damage, to railway networks in Europe is approximately €581 million per year, with the highest risk relative to the length of the network in North Macedonia, Croatia, Norway, Portugal, and Germany. Based on an ensemble of climate projections for RCP8.5, we show that current risk to railway networks is projected to increase by 255% under a 1.5 °C, by 281% under a 2 °C, and by 310% under a 3 °C warming scenario. The largest increases in risk under a 3 °C scenario are projected for Slovakia, Austria, Slovenia, and Belgium. Our advances in projecting flood risk to railway infrastructure are important given its criticality, and because losses to public infrastructure are usually not insured, or are even uninsurable, in the private market. To cover the risk increase due to climate change, European member states would need to increase transport expenditure by €1.22 billion annually under a 3 °C warming scenario without further adaptation. Limiting global warming to the 1.5 °C goal of the Paris Agreement would avoid losses of €317 million annually.
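The risk metric used above, expected annual damage (EAD), is commonly approximated by integrating damage over annual exceedance probability. A minimal sketch follows; the return periods and damage figures in the usage example are invented for illustration, not results from the study.

```python
def expected_annual_damage(return_periods, damages):
    """Approximate EAD by trapezoidal integration of damage over the
    annual exceedance probability p = 1/T (T = return period in years)."""
    probs = [1.0 / t for t in return_periods]   # exceedance probabilities
    pairs = sorted(zip(probs, damages))         # sort by ascending probability
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)      # trapezoid between event classes
    return ead

# Hypothetical damages (million EUR) at selected return periods (years).
ead = expected_annual_damage([10, 50, 100, 500], [5.0, 40.0, 90.0, 300.0])
print(f"EAD: {ead:.2f} million EUR/year")
```

Because rare events contribute little probability mass, the EAD is dominated by the frequent, moderate floods unless the damage curve grows very steeply.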
The number of people exposed to natural hazards has grown steadily over recent decades, mainly due to increasing exposure in hazard-prone areas. In the future, climate change could further enhance this trend. Still, empirical and comprehensive insights into individual recovery from natural hazards are largely lacking, hampering efforts to increase societal resilience. Drawing on a sample of 710 residents affected by flooding across Germany in June 2013, we empirically explore a wide range of variables possibly influencing self-reported recovery, including flood-event characteristics, the circumstances of the recovery process, socio-economic characteristics, and psychological factors, using multivariate statistics. We find that the amount of damage and other flood-event characteristics such as inundation depth are less important than socio-economic characteristics (e.g., sex or health status) and psychological factors (e.g., risk aversion and emotions). Our results indicate that uniform recovery efforts focusing on the areas most affected in terms of physical damage are insufficient to account for the heterogeneity in individual recovery outcomes. To increase societal resilience, aid and recovery efforts should better address the long-term psychological effects of floods.
Participatory design (PD) in HCI has been successfully applied to vulnerable groups, but further research is still needed on forced migrants. We report on a month-long case study with a group of about 25 young forced migrants (YFMs), in which we applied and adapted strategies from PD and participatory research (PR). We gained insights into the benefits and drawbacks of combining PD and PR concepts in this particular scenario. The PD+PR approach supported intercultural collaboration between YFMs and young members of the host community. It also enabled communication across language barriers through visual and "didactic reduction" resources. On a theoretical level, the experience allowed us to reflect on the role of "safe spaces" for participation and the need to discuss it further in PD. Our results can benefit researchers who take part in technology-related participatory processes with YFMs.
The intensification of coastal floods induced by sea level rise is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level rises slowly, leaving society time to adapt to the changing environment. Most commonly, this is achieved by constructing or reinforcing flood defence measures such as dykes or sea walls, but land-use planning and disaster management are also widely discussed options. Although the projection of sea level rise impacts and the elaboration of adequate response strategies are among the most prominent topics in climate impact research, global damage estimates remain vague and mostly rely on the same assessment models. This thesis addresses the issue by presenting a distinctive approach that facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking rests on two basic building blocks. First, we make use of macroscopic flood-damage functions, i.e. damage functions that give the total monetary damage within a delineated region (e.g. a city) caused by a flood of a certain magnitude. After introducing a systematic methodology for the automated derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves usable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we further characterise the slope of the damage functions by means of a functional model. The proposed function has a generally sigmoidal shape but exhibits a power-law increase over the relevant range of flood levels; we detect an average exponent of 3.4 for the considered cities. This finding is an essential input for the subsequent elaboration of the general interrelations between the quantities involved.
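The shape described above (sigmoidal overall, power-law over the relevant range of flood levels) can be sketched with a simple saturating functional form. Only the exponent 3.4 comes from the text; the scale parameters `d_max` and `h_half` below are invented for illustration.

```python
def damage_function(h: float, d_max: float = 1000.0,
                    h_half: float = 5.0, alpha: float = 3.4) -> float:
    """Illustrative macroscopic damage function:
    D(h) = d_max * h**alpha / (h_half**alpha + h**alpha).
    For h << h_half this behaves like a pure power law ~ h**alpha;
    for h >> h_half it saturates at the total exposed value d_max."""
    return d_max * h ** alpha / (h_half ** alpha + h ** alpha)
```

In the power-law regime, doubling the flood level multiplies the damage by roughly 2**3.4 ≈ 10.6, which is why even modest increases in extreme water levels can sharply raise damage.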
The second basic element of this work is extreme value theory, which is employed to characterise the occurrence of flood events and, in conjunction with a damage function, provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible, as it allows non-stationarity in all relevant parameters and can easily be applied to arbitrary regions, sea level scenarios, and adaptation scenarios. For instance, we find a doubling of expected flood damage in the city of Copenhagen for a rise in mean sea level of only 11 cm. Following more general considerations, we deduce surprisingly simple functional expressions that describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and assumed protection levels. We are thus able to project future flood damage with a reduced set of parameters, namely the aforementioned damage-function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections; in this regard, we detect a decrease in (relative) uncertainty with rising mean sea levels. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a Cost-Benefit Analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
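The coupling of extreme value statistics with a damage function can be sketched by Monte Carlo simulation: annual maximum water levels are drawn from an extreme value distribution whose location is shifted by mean sea level rise, and damage accrues above the protection level. All parameter values below (Gumbel location and scale, protection level, exponent) are illustrative assumptions, not the thesis's fitted values.

```python
import math
import random

def simulate_ead(msl_rise: float = 0.0, protection: float = 2.0,
                 mu: float = 1.5, beta: float = 0.3, alpha: float = 3.4,
                 n: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo expected annual damage: annual maximum water levels
    follow a Gumbel distribution (location mu + msl_rise, scale beta);
    damage is proportional to (level - protection)**alpha above the
    protection level and zero below it. Illustrative parameters only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        level = (mu + msl_rise) - beta * math.log(-math.log(u))  # Gumbel sample
        if level > protection:
            total += (level - protection) ** alpha
    return total / n

base = simulate_ead(0.0)
shifted = simulate_ead(0.11)  # effect of an 11 cm mean sea level rise
print(f"damage ratio after 11 cm rise: {shifted / base:.2f}")
```

Because the same random seed is reused, the two runs see identical storm sequences, isolating the effect of the mean sea level shift on the damage distribution.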
A conundrum of trends
(2022)
This comment reiterates two warnings: one concerns the uncritical use of ready-made (openly available) program packages, the other the estimation of trends in serially correlated time series. Both warnings apply to the recent publication of Lischeid et al. about lake-level trends in Germany.